Those assumptions? You should validate them…

The one thing that is true of every aspect of IT is that it is always changing. And that change means that things you were confident of in the past may no longer hold true.

I was reminded of this while sitting in the pub with some developers recently, talking about querying for items by path in Sitecore. The debate about the best way to do this raged, but a common thread was the oft-repeated claim that the fastest way to find a set of items you need is via a ContentSearch index. That assumption has its roots in the era when most sites used Lucene to run their queries, and in queries with more complex matching rules. But does it still hold true here?

Continue reading

Tripping over Liskov Substitution and search

When you’re working with a “provider” model for services in your applications you get used to the assumption that everything follows the Liskov Substitution Principle, and that whatever provider you plug in will work in the same way. Unfortunately, for software out in the real world that’s not always entirely true. Recently I came across an example of this which helped point out a bug in some search code in Sitecore…

The scenario

A component I found myself looking at was using the ContentSearch APIs to perform some queries and then render UI based on the results. There wasn’t anything special going on. It was just finding an appropriate index, building up a query, running it and then displaying how many items matched. The relevant bit was vaguely along the lines of:

var index = fetchContextIndex(someContentItem);
var predicate = buildTheSearchCriteria(currentState);

using (IProviderSearchContext context = index.CreateSearchContext())
{
    var query = context
        .GetQueryable<SearchResultItem>()
        .Filter(predicate);

    var fullResultsSet = query.GetResults();
    var totalResults = fullResultsSet.Count();

    // Display the number of matches
}

The confusion

The code started off running against an index managed by Lucene. With the particular set of content on the server, the value of the variable totalResults came back as 97. That seemed a sensible value, as there were roughly that number of items that matched the search criteria. But later the code got migrated to a server that was using Coveo to index the same content. And once that had happened, the value of totalResults always came back as 10, despite there being more matching pages in both the content tree and in the Coveo index.

Cue some head scratching

The solution

After a bit of fun with Google and poking about with the debugger, the subtle issue revealed itself: The code above uses the fullResultsSet.Count() method to fetch the total number of index hits that the search framework found for the query. At first glance that looks fine – the fullResultsSet object exposes the IEnumerable interface – so calling Count() seems a perfectly reasonable way to get the size of the results when there’s no pagination involved in the query.

But as some of you no doubt already spotted, that’s not the documented way you’re supposed to get the total number of results for a query. As a number of Google hits point out, the property TotalSearchResults is the thing we should be using here. And that returns the correct value for both Coveo and Lucene.
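Applying that documented behaviour, the fix is a one-line change to the snippet above (sketched here with the same hypothetical helper names as before):

```csharp
using (IProviderSearchContext context = index.CreateSearchContext())
{
    var query = context
        .GetQueryable<SearchResultItem>()
        .Filter(predicate);

    var fullResultsSet = query.GetResults();

    // TotalSearchResults reports the total number of index hits for the
    // query, regardless of how many hits the provider chose to return
    var totalResults = fullResultsSet.TotalSearchResults;

    // Display the number of matches
}
```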

If the query had included pagination, the issue would have revealed itself straight away, as that would have highlighted the different behaviours of Count() and TotalSearchResults when your query result set is bigger than the results page size. But because the code in question didn’t do that, the bug slipped through…

Why does it behave like this?

Well, getting past the initial, slightly petulant “just to confuse us!” response, it’s all down to implementation details…

If you look into the code for SearchResults&lt;TSource&gt; you’ll see that this class exposes both the TotalSearchResults property and an IEnumerable implementation:

Search Results Code

The TotalSearchResults property is a simple auto-property, whose value is set specifically by the provider generating the results:

public int TotalSearchResults
{
	get;
	private set;
}

That value is set by the constructor, and it can be independent of the size of results page being returned for this query.

But the value of a call to Count() for this collection will be based on the enumerator that the class exposes. The implementation of IEnumerable returns an enumeration taken from the inner Hits collection:

IEnumerator<SearchHit<TSource>> IEnumerable<SearchHit<TSource>>.GetEnumerator()
{
	return this.Hits.GetEnumerator();
}
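You can see the consequence of that design in isolation. This stand-alone sketch (not Sitecore’s actual class, just a minimal stand-in with the same shape) shows how Count() and a provider-set total diverge once the provider returns only a page of hits:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

// A minimal stand-in for SearchResults<TSource>: the total is set by the
// "provider" in the constructor, but enumeration only walks the hits returned
public class FakeSearchResults<T> : IEnumerable<T>
{
    private readonly List<T> hits;

    public int TotalSearchResults { get; private set; }

    public FakeSearchResults(IEnumerable<T> pageOfHits, int totalHits)
    {
        hits = pageOfHits.ToList();
        TotalSearchResults = totalHits;
    }

    public IEnumerator<T> GetEnumerator() => hits.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

public static class Demo
{
    public static void Main()
    {
        // The "provider" matched 97 items but only returned a page of 10
        var results = new FakeSearchResults<string>(
            Enumerable.Repeat("hit", 10), totalHits: 97);

        Console.WriteLine(results.Count());            // 10 - size of the page
        Console.WriteLine(results.TotalSearchResults); // 97 - total index hits
    }
}
```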

For Lucene, a query with no pagination will return all the index items matched, up to the maximum defined in the config setting for “max result set size” (the ContentSearch.SearchMaxResults setting in your config files). In this case, that was more than 97, so the whole result set was returned and hence it looked like the code was working. But Coveo seems to default to a page of 10 results if you fail to specify pagination.

If you think about it, that behaviour makes some sense. Lucene runs in the same process as your site, so it’s not a big issue for it to return all the result data if you don’t explicitly apply a pagination clause to your query. (You still should though!) It’s just shuffling memory about, which is fairly fast. However Coveo runs out-of-process (and in the worst case might be out in the cloud, if you use the SaaS version), so defaulting to only returning details for the first 10 results when there is no pagination clause helps prevent performance issues from huge result sets being pushed across the network.
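So whichever provider you’re running against, it’s worth being explicit about pagination. The queryable supports the standard LINQ pattern for this; a sketch, reusing the hypothetical index and predicate from earlier:

```csharp
const int pageSize = 20;
int pageNumber = 0; // zero-based page to display

using (IProviderSearchContext context = index.CreateSearchContext())
{
    var query = context
        .GetQueryable<SearchResultItem>()
        .Filter(predicate)
        .Skip(pageNumber * pageSize)
        .Take(pageSize);

    var results = query.GetResults();

    // Hits holds (at most) one page of results; TotalSearchResults
    // still reports the overall match count for the query
    var pageOfHits = results.Hits;
    var totalMatches = results.TotalSearchResults;
}
```

With that in place, both providers return at most a page of hits, and the total comes from the property designed to report it.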

So take care people – Barbara Liskov might not approve, but sometimes you need to be wary about swapping out providers. There can be justifications for why behaviour isn’t always exactly the same, and those variations can lead to subtle bugs if you’re not paying attention…

And reading the documentation so you understand the right way to use the objects in question helps too 😉

Be careful when you secure your HTTPS ciphers

One of the big things in IT security in recent times has been the successful attacks black-hats have launched against the infrastructure of cryptography. As we all come to rely on encrypted communications more and more, the vulnerabilities in old ciphers have become more of a problem for us developers and administrators. Vulnerabilities like DROWN and POODLE are just two examples of a trend which means we all now have to worry about how our crypto is configured before we allow the internet to see a server.

But whenever you tie down security more tightly you risk causing problems when software relies on the thing you’ve just disabled…

I spent some time recently investigating why certain aspects of the Coveo for Sitecore search framework were broken on a client’s server, and the answer ended up being directly related to crypto security. Here’s what happened: Continue reading

Why can’t I bookmark my facets?

The other week I was commenting on shooting myself in the foot with the configuration of Coveo’s UI for Sitecore. Another issue that came up during that bit of project work was that in their default state, the facet components didn’t respond to data in the URL. Having done a bit of digging, however, one of my colleagues found an answer to this, which I figured I should write down in case anyone else is stuck on the same challenge… Continue reading

The case of the missing Coveo facet picker

Quick one this week. Mostly to try and save my own blushes, because the issue here was completely my fault. For the first time on a particular project I was trying to do some Coveo development work. I had created a page based on the default MVC templates they provide for search, but when I tried to add a Facet in Content Editor, I found myself staring at this:

Missing Picker

No picker for the field to facet on – so no way to make the Facet component work… Continue reading