This is the first post in a series where I’ll walk through a basic setup to configure Sitecore Search to crawl and query your XM Cloud content. This post covers crawling and indexing your content using a Web Crawler and a Sitemap trigger.
To begin, it’s helpful to define some terms used in Sitecore Search, starting with the structural ones.
- Entity: This is a document in the search index. Sitecore Search ships with two types of entities, Product and Content. We will be using the Content entity.
- Attribute: These are the fields on your entities. Attributes are defined on the entity and make up the schema of your content in the search index, much in the same way you’d use fields on a Sitecore template to define your content schema in the CMS.
- Source: Sources define where your content comes from and how it’s brought into the index.
- Connector: Sources have connectors, which control how the source receives data and how content is read from it for indexing. A connector must be selected when creating a source. Each type of connector serves a different purpose and dictates what options are available to extract your content.
- Trigger: Triggers are added to a source to control how the source data is fetched. Each type of trigger behaves differently.
- Extractor: You use extractors to pull data from your source and write it to your entity’s attributes. The types of extractors available depend on the connector you’re using.
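If it helps to see how these pieces relate, here’s a rough mental model sketched in TypeScript. This is purely illustrative, not an actual Sitecore Search API or schema:

```typescript
// A mental model of the structural terms above. Purely illustrative;
// this is not a real Sitecore Search API or schema.
type Attribute = { name: string; dataType: "string" | "string[]"; required: boolean };
type Entity = { name: "content" | "product"; attributes: Attribute[] };
type Extractor = { attribute: string; extractionType: string; value: string };
type Trigger = { type: "sitemap" /* among others */; url?: string };
type Source = { connector: "web-crawler" /* among others */; triggers: Trigger[]; extractors: Extractor[] };
```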
Before we begin, a few callouts. These apply to the state of the product as of the time of writing, so future enhancements may mitigate them.
- You must have the Tech Admin role in Sitecore Search. This is a role above the Admin role, and you will probably need your organization’s primary contact to request it from Support, as it cannot be assigned to users via the UI.
- When you make a change in Sitecore Search, you must Publish it for it to take effect. It is possible to revert a publish, but only to the previous version and no further. Your version history can be viewed via the UI, but it offers very little detail on what you changed.
- Make only one change at a time, and publish between each change. If you change more than one thing, such as defining two or more attributes on an entity, and then attempt to save and publish, the tool will error and lose your entity altogether, effectively disabling your instance. You can Revert your changes to the previously published state to resolve the error, but you will lose all your changes.
To start, select Administration from the left menu and then Domain Settings. If you don’t see this option, you need the Tech Admin role. Select Entities, and the popup will show your Content entity. Note the Attributes template. By default it is set to “Attributes Template for Content” and has several defined attributes already. You probably don’t need these, so change it to “Base Attributes Template”. Do this now before making any other changes, because once you start editing the entity you cannot change the template. Save and publish this.
Let’s define our attributes. With the base template, we get three to start: Id, Source, and Document Activeness. Of the three, Id is required. Let’s add some basic information like Title and Url.
Click Attributes in the subnav, then Add Attribute. You’ll be presented with a form. Let’s start by adding a Title field. Fill out the form as follows:
- Entity: content
- Display Name: Title
- Attribute name: title (Note: use all lower case and no spaces for attribute names)
- Placement: Standard
- Data Type: String
- Mark as: Check “Required” and “Return in api response”
Save and publish this. Repeat to create the Url attribute.
We’re making these fields required because we want every document in the index to have a title and a URL; these are the basic pieces of data we need for each document. If a crawled document is missing required field data, or that data cannot be mapped, the document is not indexed.
We also check “Return in api response” to make the contents of the field available in the search API; we’ll check this for any attribute we want included in the documents the API returns.
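To illustrate the required-field rule with invented values, here’s a hypothetical sketch of one document that would be indexed and one that would not:

```typescript
// Hypothetical illustrations of the required-field rule. Values invented.
const indexed = {
  id: "home-page",
  title: "Welcome to the Site",
  url: "https://www.example.com/",
}; // has both required attributes, so it's indexed

const skipped = {
  id: "broken-page",
  title: "This page has no og:url meta tag",
}; // missing the required "url" attribute, so it won't be indexed
```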
Next let’s add a taxonomy field called Topic. In Sitecore XM Cloud, the Topic field is a multilist that allows authors to tag content with multiple Topic items. Add another attribute and define it like this:
- Entity: content
- Display Name: Topic
- Attribute name: topic
- Placement: Standard
- Data Type: Array of Strings
- Mark as: Check “Return in api response”
We’re not making Topic required because not all of our documents will have a topic. We’re choosing Array of Strings as the data type because the Topic may have multiple values. Save and publish again.
In order to facet on Topic, we need to perform another step. Select Feature Configuration from the subnav. You’ll see a lot of options here. If you select API Response, you’ll see all the attributes you added so far (assuming you checked “Return in api response”). Select Facets, then click Add Attribute. Add the Topic attribute here, save, and publish.
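At this point our entity schema, sketched as an illustrative TypeScript interface (a mental model, not a real contract), looks something like this:

```typescript
// The content entity schema so far, expressed as an illustrative type.
interface ContentDocument {
  id: string;       // required by the base attributes template
  title: string;    // required, returned in the API response
  url: string;      // required, returned in the API response
  topic?: string[]; // optional array of strings, configured as a facet
}
```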
Now we have defined a barebones entity with fields for Id, Title, and Url, and included a single facet field on it called Topic. The next step is to set up our Source to crawl our Sitecore XM Cloud content. We’ll be using a Web Crawler connector with a Sitemap trigger to accomplish this.
Before you crawl, you need to make your data available to the crawler. First, a sitemap: if you’re using SXA, you can configure a sitemap to be generated automatically. Second, on your pages you’ll want to expose the data via meta tags. If you’ve set up Open Graph meta tags, you’ll have Title and Url covered. You’ll need to add another meta tag for Topic in your head web app, like so: `<meta name="topic" content="Topic A">`
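For example, if your head app is built on Next.js, the page head might render something like the sketch below. The component and its props are assumptions for illustration; adapt them to however your app models the Topic multilist.

```tsx
import Head from "next/head";

// Illustrative sketch for a Next.js head app. The "topics" prop is an
// assumption standing in for however your app surfaces the Topic multilist.
export function PageMetaTags({ title, url, topics }: { title: string; url: string; topics: string[] }) {
  return (
    <Head>
      <meta property="og:title" content={title} />
      <meta property="og:url" content={url} />
      {/* One meta tag per topic, assuming the crawler collects repeated
          tags into our Array of Strings attribute. */}
      {topics.map((topic) => (
        <meta key={topic} name="topic" content={topic} />
      ))}
    </Head>
  );
}
```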
First we need to create the source. Select Sources from the left menu, then Add Source. In the dialog, name your source Sitemap Crawler and choose “Web Crawler” from the connector dropdown. (We’ll cover Advanced Web Crawler in another post). Save this and open the Source Settings screen.
Scroll down to Web Crawler Settings and click Edit. Set the Trigger Type to Sitemap, and set the URL to the URL of your sitemap. You can use the Experience Edge media URL of the SXA-generated sitemap file if you haven’t configured middleware in your head app to handle /sitemap.xml (which you should! See the sketch below.) Set Max Depth to 0, because we don’t want to drill into any links on the page; we’re relying on the sitemap to surface all the content we need crawled. Save and publish.
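Here’s what that middleware might look like in a Next.js head app, as a minimal sketch. The Edge sitemap URL is a placeholder; substitute your own.

```typescript
import { NextRequest, NextResponse } from "next/server";

// Placeholder: substitute the Experience Edge media URL of your
// SXA-generated sitemap file.
const EDGE_SITEMAP_URL = "https://edge.sitecorecloud.io/path-to-your-sitemap.xml";

// Minimal sketch: serve /sitemap.xml from the head app by rewriting
// the request to the Edge-hosted sitemap.
export function middleware(_request: NextRequest) {
  return NextResponse.rewrite(new URL(EDGE_SITEMAP_URL));
}

// Only run this middleware for the sitemap path.
export const config = { matcher: "/sitemap.xml" };
```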
Next we’re going to configure our attribute extractors. On Attribute Extraction, click Edit. On the next screen, click Add Attribute and select the attributes to be mapped in the popup. Set each attribute’s Extraction Type to Meta Tag, and set the Value to the name of the meta tag on your page: “og:title” for Title, “og:url” for Url, and “topic” for Topic. Again, I recommend mapping one extractor at a time, saving and publishing between each, to avoid errors. If prompted to run the crawler while saving, don’t do it yet.
From the Attribute Extractors screen you can click Validate in the top nav. This presents a dialog that lets you enter a URL and test the extractors against it, which is a great feature. Try pasting in some of your web pages and make sure you’re extracting all the data correctly.
Finally, return to the Sources screen and click the Recrawl and Reindex button on the right-hand side of the source you just created. The button looks like a refresh icon: two curved arrows forming a circle.
It takes Sitecore Search a bit to fire up a crawl job, but you can monitor it from the Analytics screen under Sources. If all went well, you should see all the documents from your sitemap in the content entity index. If not, you’ll see the error here and can troubleshoot from there.
From here, feel free to add more attributes to fill out your content entity schema, and to put your crawler on a schedule from the Sources -> Crawler Schedule screen.
In the next post, we’ll cover using the API to query content from the index.