A screencap of search results and autocomplete firing at OYP

Original YP

Overview

The Original YP Network is a suite of business directory advertising products. My purpose there was to maintain and improve the network's websites and their user experience.

I was the sole member of the front-end web team, and with limited resources, we determined that users' search journey was the most critical area to focus on first. In my first months there, I identified several high-impact areas for improvement, pitched solutions to address them, then designed, developed, and tested their implementations.

Product Improvement Journey


OPTIMIZATION

Making Results Dynamic

BACKGROUND

OYP offers enhanced listings to local businesses across the United States. By signing up, customers receive custom graphics, ads, and detail pages driving users to their own sales funnels. They're also boosted within any search results they would organically appear in.

As I became familiar with our sites' features and back-end, I took inventory of our strengths and where we fell short. What I found was a vast wealth of well-maintained, rich business data, but few opportunities for users to tap into it successfully. Then-current methods, like an unstyled autocomplete, often failed to return expected or helpful results, or failed to work altogether.


COMPETITIVE ANALYSIS

I also tested our core user journeys on competing directory services, looking for both concepts to consider and missteps to avoid. After exploring direct competitors like Angi, other yellow pages companies, and Google, I landed on two key takeaways to shape our product strategy:

  1. Direct competitors had bloated search journeys
  2. We would never beat search engines on convenience

Priorities and Solutions

With such a massive scope of potential improvements, and facing internal and external limiting factors, I chose to focus on our reason to be: business searching. We needed to lean into the strength of our business data, and into our mission of "Search made simple". I pitched and prioritized lean, high-impact ideas that could stand on their own and integrate seamlessly into both our existing user experience and codebase.

With stakeholder input, the high-level goals we set were:

ON-BRAND AUTOCOMPLETE UI

The first project I started on was putting out some fires with our autocomplete. Searches had to match a specific business classification, so strong, helpful autocomplete matches were necessary for users to get results at all. As it stood, autocomplete provided few matches, surfaced too many low-quality ones, and looked rudimentary and out of place.

I ran a quick design audit of our network's seven sites and set up a rough brand style guide for each. Using them for reference, I styled all dropdowns to look native to their respective pages. I also increased their text size and padding for readability, and increased the dropdowns' height and suggestion counts to increase the chances of showing a relevant match with less typing.

partial section of a style guide in Figma

WEIGHTING AUTOCOMPLETE MATCHES BY RELEVANCY

Many autocomplete suggestions weren't often searched for, yet would appear above or outright bury our core classifications. Results like "Topsoil" might occasionally be helpful, but shouldn't be competing with "Attorneys".

Wanting to further reduce the characters users had to type before seeing their desired suggestion, I spoke with internal experts on our data and strategy, and received a holistically ordered list of key classifications. To avoid interfering with our standalone autocomplete logic, I increased the number of matches it returned, sorted those matches against a JSON object built from the provided list, then truncated them back to the original desired count.
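
A minimal sketch of that over-fetch, re-sort, and truncate step, assuming a stable sort; the `priorityRank` object and its contents are illustrative stand-ins for the provided classification list, not our actual data:

```typescript
// Illustrative sketch of the over-fetch, re-sort, truncate approach.
// `priorityRank` stands in for the JSON object built from the internally
// provided list of key classifications; a lower rank means higher priority.
const priorityRank: Record<string, number> = {
  "Attorneys": 0,
  "Plumbers": 1,
  "Dentists": 2,
  // ...remaining key classifications from the provided list
};

const DISPLAY_COUNT = 8; // suggestions actually shown in the dropdown

function prioritizeMatches(overFetchedMatches: string[]): string[] {
  return overFetchedMatches
    .slice() // copy so the original autocomplete output is untouched
    .sort((a, b) => {
      // Unlisted classifications sort after every listed one; ties keep
      // the autocomplete's original relative order (Array.sort is stable).
      const rankA = priorityRank[a] ?? Number.MAX_SAFE_INTEGER;
      const rankB = priorityRank[b] ?? Number.MAX_SAFE_INTEGER;
      return rankA - rankB;
    })
    .slice(0, DISPLAY_COUNT); // truncate back to the original count
}
```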

I made other smaller changes to help align users' queries with our data, such as rewriting the autocomplete logic to match any portion of a classification rather than only its start. I also actively sought input from other perspectives and researched best practices, which led to many subtle improvements, such as ensuring we accommodated erroneous spacing or punctuation in input fields.
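
As an example of those matching changes, here's a hedged sketch of prefix-to-substring matching with input normalization; `normalize` and `matchesClassification` are illustrative names, not our actual implementation:

```typescript
// Sketch of matching a query against any portion of a classification,
// normalizing away erroneous spacing and punctuation first.
function normalize(text: string): string {
  return text
    .toLowerCase()
    .replace(/[^\p{L}\p{N}\s]/gu, "") // strip punctuation
    .replace(/\s+/g, " ") // collapse repeated whitespace
    .trim();
}

function matchesClassification(query: string, classification: string): boolean {
  // Substring match instead of prefix-only: "surance" still finds "Insurance".
  return normalize(classification).includes(normalize(query));
}

// " attorneys, " now matches "Attorneys" despite the stray space and comma.
```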

PERSONALIZED ON-CLICK AUTOSUGGESTIONS

Because we were a categorical directory, searches that couldn't be matched to a classification or business name were unviable. I analyzed our traffic logs for search page requests to better understand the scale and context of this issue. I found that while a significant portion of searches were dead-end, most of them didn't have to be.

There were two common patterns for no-results searches:

  1. The user didn't type or select an existing classification
  2. The user performed a valid search, but we didn't have any results in their provided location

To help address the first case, I built a list of top classifications to appear the moment users click or tap into the search field. We could reliably determine the user's state, personalize the suggestions to it, and offer a chance to navigate to results without typing at all. It was also an opportunity to organically show how searches are syntactically structured, helping users formulate their own queries.
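
A rough sketch of how those on-focus suggestions could be keyed by state; the `topClassificationsByState` map and its contents are hypothetical placeholders:

```typescript
// Hypothetical shape for the on-click suggestions: a curated list of top
// classifications keyed by state, shown as soon as the field gains focus.
const topClassificationsByState: Record<string, string[]> = {
  FL: ["Attorneys", "Plumbers", "Air Conditioning Contractors"],
  NY: ["Attorneys", "Dentists", "Movers"],
  // ...one entry per state we could reliably resolve
};

const DEFAULT_SUGGESTIONS = ["Attorneys", "Plumbers", "Dentists"];

function onSearchFocus(userState: string | null): string[] {
  // Fall back to a generic list when the user's state can't be determined.
  if (userState && topClassificationsByState[userState]) {
    return topClassificationsByState[userState];
  }
  return DEFAULT_SUGGESTIONS;
}
```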

Optimizing Our Search Algorithm

While researching and testing the usability of searching, I became increasingly familiar with the shortcomings of our search results. Pages frequently showed no results when they didn't have to, and with the user's process of searching now in better shape, I shifted focus to this broader issue.

DYNAMIC SEARCH LOCATION SPECIFICITY

By far the leading issue that led to showing no results was the rigidity of our search logic.

If I were to search for plumbers in St. Petersburg, population 261,000, our server would find and display many matches for business records that are both plumbers and located in St. Petersburg. Great! However, the same search in neighboring St. Petersburg Beach, population 9,000, wouldn't yield a single match. Likewise, a very specialized classification might have few or no results anywhere in the St. Petersburg metro.

Since we had the exact location of every business record, I programmed our search algorithm to check the results count and, if it was below some threshold n, perform the search again, aggregating results across all locations within the surrounding region. On the rare occasion that couldn't find n matches either, we would show all matches in-state.
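
In essence, the fallback just widens the location scope until the threshold is met. Below is a simplified TypeScript sketch of the idea; the in-memory `searchBusinesses` helper stands in for our actual server-side query, and `MIN_RESULTS` is an illustrative value for n:

```typescript
interface Business {
  name: string;
  classification: string;
  city: string;
  region: string;
  state: string;
}

type Scope = "city" | "region" | "state";

// Stand-in for the real data layer: filters an in-memory list the way our
// back-end filtered business records by classification and location scope.
async function searchBusinesses(
  records: Business[],
  classification: string,
  scope: Scope,
  locationValue: string,
): Promise<Business[]> {
  return records.filter(
    (b) => b.classification === classification && b[scope] === locationValue,
  );
}

const MIN_RESULTS = 10; // illustrative value for n

async function searchWithFallback(
  records: Business[],
  classification: string,
  location: { city: string; region: string; state: string },
): Promise<{ scope: Scope; results: Business[] }> {
  // Widen from city to region to state until at least n matches are found.
  const scopes: Scope[] = ["city", "region", "state"];
  let results: Business[] = [];
  let scope: Scope = "state";
  for (scope of scopes) {
    results = await searchBusinesses(records, classification, scope, location[scope]);
    if (results.length >= MIN_RESULTS) break;
  }
  // If even statewide fell short of n, we still show everything the state has.
  return { scope, results };
}
```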

There are drawbacks to running search this way, and better methods exist, but all of our key user stories were either unchanged or immediately improved by this relatively quick fix. In the vast majority of real-world use cases impacted by the change, people went from seeing few or no results within their exact town for various professional services to getting a large quantity of commutable results within their metro.

Measuring Results

Over time we implemented several internal and external methods to collect quantitative data on how our changes performed. I reinvigorated our Google Analytics and Search Console presences, built custom internal SQL tools to auto-generate usage statistics, and later helped implement heatmaps and session recordings.

I regret not setting up these more advanced systems before making significant changes. Being able to measure more nuanced KPIs, such as error rates or the average time or clicks to reach a detail page, would have helped us make more informed decisions when strategizing product direction. That said, we were still able to draw insight from how searching had transformed for users.

Some measurable successes:

Evaluating Shortcomings

While the changes I made to enrich searching at OYP were successful and well-received, there were opportunity costs to setting such a specific focus. There was also a lot of learning as we went, for both myself and the company, which was incredibly rewarding but led to several missteps while innovating in such a lean environment.

I should have involved more user feedback. A lot of the problems were obvious, but that doesn't mean my solutions were ideal. I never performed more than guerrilla user testing and input gathering, because I assumed the resources to do so would never be greenlit. That was 100% a me problem; I should have advocated for it even if the answer might have been no.

There's still a lot of room for improvement as well. I was unable to tackle visually modernizing our sites, or individualizing each of them to the point where every one had a specialized niche. Where I did focus, I was able to make many beneficial changes, but I too often neglected to polish things before moving on. Knowing what I do now, I would have taken the time to weigh both the cost and the value added of each individual step within my projects.