How to rank track Google's Hotel Search - DIY SERP Research

A lot of SEO work begins by digging into rankings.

We pay a lot of money to services that track rankings, and for good reason: it's really, really useful.

Sometimes, however, we need to do it ourselves. I'm going to share one possible way to do that on Google's Hotel Search.

Specifically, let's talk about tracking meta search engines.

When we think rank tracking, we immediately think of the default Google SERPs.

But I'm not here to re-invent the wheel. If you need to track standard SERPs, there are plenty of fine tools out there that you should take a look at.

Google does keep creating more and more meta search engines:

  • Google Hotels
  • Google Jobs
  • Google Shopping
  • Google Scholar
  • Google Flights

And what about all those useful things you can do with ranking data? Things like:

  • Spotting patterns
  • Checking if changes have an effect
  • Uncovering new trends
  • Validating theories about how Google works
  • Spending lots of money in a short period of time etc. etc.

All of these are just as useful and interesting to do on meta search engines; you just need to be able to rank track them...

{{< rawhtml >}} Drag me into your bookmark bar {{< /rawhtml >}}

If you can't see your bookmark bar, you can enable it (in Chrome at least) with the shortcut Ctrl + Shift + B.

How can I use this?

Excellent question.

  1. Install the following Chrome extension - Disable Content Security Policy.
  2. Turn it on by clicking on the extension icon (the icon will become coloured).
  3. Add the bookmarklet to your bookmarks bar.
  4. Make a search on Google's hotel meta search engine.
  5. Click the bookmarklet
  6. A big pop-up will appear.
  7. Click "Run scrape"
  8. Repeat steps 4 to 7 as many times as you'd like.
  9. Click "Download scrape store as CSV".
  10. Turn off the Disable Content Security Policy extension.
  11. Tada. Then clear the store, otherwise it will all be saved for next time.
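As an aside, the bookmarklet itself is just a `javascript:` URL that injects a script tag into the current page. A minimal sketch of how such a bookmark is typically packaged (the script URL here is hypothetical, not the real one):

```javascript
// A bookmarklet is a bookmark whose URL is a javascript: URI.
// Clicking it runs the code in the context of the current page,
// which is what lets us scrape from inside the browser session.
var scriptUrl = "https://example.com/hotel-scraper.js"; // hypothetical host

var bookmarklet =
  "javascript:(function(){" +
  "var s=document.createElement('script');" +
  "s.src='" + scriptUrl + "';" +
  "document.body.appendChild(s);" + // inject the scraper into the page
  "})();";

console.log(bookmarklet);
```

This is also why the CSP extension matters: the page's Content-Security-Policy can refuse that injected script tag.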

But I need this at scale

If you need rank tracking data at scale, by far the biggest problem is preventing yourself from being blocked by Google. That single barrier is one of the main reasons we pay so much money for these services.

The reason this is designed as a bookmarklet is that you can capture the information yourself and thus circumvent all those issues, but it does leave something to be desired when scaling up.

Option 1 - People

The easiest option by far is to send this bookmarklet to a group of people, have them perform the scrape every day, and then send you the results.

The bookmarklet stores the results even as the browser closes and reopens, so you can easily track multiple days if you wish.
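That persistence is just the usual append-to-localStorage pattern: each run reads the existing store, appends the new rows, and writes it back. A rough sketch of the idea (the function name is mine, and a plain object stands in for `window.localStorage` so it runs anywhere):

```javascript
// localStorage survives the browser closing, so each day's scrape
// accumulates in the store until you download and clear it.
// `storage` is a plain object standing in for window.localStorage.
var storage = {};

function appendToStore(key, rows) {
  // Read whatever is already there (or start empty), append, write back.
  var existing = JSON.parse(storage[key] || "[]");
  storage[key] = JSON.stringify(existing.concat(rows));
  return JSON.parse(storage[key]).length;
}

appendToStore("hotels_storage", [{ hotel: "Hotel A", rank: 1 }]);
var total = appendToStore("hotels_storage", [{ hotel: "Hotel B", rank: 2 }]);
console.log(total); // rows accumulated across "sessions"
```

This is also why step 11 above tells you to clear the store; nothing ever expires on its own.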

Option 2 - Headless browser in AWS Lambda or a Google Cloud Function

Yaar, there be technical mumbo jumbo ahead (even worse, it be un-tested). If you're not that technical, stick to Option 1.

  • Have a function which executes and runs a headless browser (which ignores content security policy).
  • Load the functions from the bookmarklet's script and then run:
(function($, undefined) {
	// Key under which the scraped results sit in localStorage.
	var artoo_key = "hotels_storage";
	extract_and_store_in_local_storage(artoo_key);
	// This will attempt to save the CSV locally, so be ready for that...
	download_key_to_csv(artoo_key);
}.call(this, artoo.$));
  • That should inject Artoo (which will in turn inject jQuery) and then execute the scraping.
  • Have some sort of pipelining to repeat this as desired, with suitable gaps to prevent being recognised.
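The steps above could be sketched with Puppeteer (my assumption for the headless browser; any one that can ignore Content-Security-Policy would do). This is untested; `scraper.js` is a hypothetical local copy of the bookmarklet's own script, loaded the same way as artoo:

```javascript
// Untested sketch of Option 2. page.setBypassCSP stands in for the
// Disable Content Security Policy extension used in the manual workflow.
const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  // Must be called before navigation for the bypass to take effect.
  await page.setBypassCSP(true);
  await page.goto("https://www.google.com/travel/hotels", {
    waitUntil: "networkidle2",
  });
  // Inject artoo.js (which in turn injects jQuery), then our own functions.
  await page.addScriptTag({
    url: "https://medialab.github.io/artoo/public/dist/artoo-latest.min.js",
  });
  await page.addScriptTag({ path: "scraper.js" }); // hypothetical local file
  const rows = await page.evaluate(() => {
    extract_and_store_in_local_storage("hotels_storage");
    return JSON.parse(localStorage.getItem("hotels_storage"));
  });
  await browser.close();
  console.log(rows.length + " rows scraped");
})();
```

Wrapping that in a Lambda or Cloud Function handler, plus a scheduler with randomised gaps between runs, would cover the pipelining bullet.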

As you may have noticed from the vagueness of these steps, I haven't actually had to do this yet.

I was able to get by with Option 1 and was a little tight on time. But in theory that seems ok... what could possibly go wrong?

How is this built?

This is a scraping bookmarklet built on top of artoo.js. Crucial components are:

  • Artoo.js - It provides a bunch of helper functionality for scraping, as well as creating a shadow DOM to put the UI box into.
  • Disable Content Security Policy - By default, most sites send this header as good general security practice; however, it prevents us from injecting jQuery, so we need to turn these headers off.
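For illustration, a hypothetical response header that would block the injection looks like this; with `script-src 'self'`, scripts from any other origin (including an injected jQuery or artoo.js) are refused, which is why the extension has to strip it:

```
Content-Security-Policy: script-src 'self'; object-src 'none'
```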

I built it as a bookmarklet because I wanted it to be easy to share and use amongst the team. It's fairly generic and so should be pretty easy to change to a different meta search engine should you require.

I originally used nested JSON, storing the page-level factors only once per set of rankings. In practice, however, we often ended up immediately flattening the JSON in analysis anyway, and the nesting added an extra barrier for people using it, so I removed it.
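The trade-off can be sketched like this (field names are hypothetical): page-level factors get repeated on every ranking row, which makes the CSV export trivial and spares users a post-processing step.

```javascript
// Nested shape: page-level factors stored once per set of rankings.
var nested = {
  query: "hotels in london",
  scraped_at: "2019-01-01",
  rankings: [
    { hotel: "Hotel A", rank: 1 },
    { hotel: "Hotel B", rank: 2 },
  ],
};

// Flattened shape: one row per ranking, page-level fields copied onto
// each row -- redundant, but it maps straight onto a CSV.
var flat = nested.rankings.map(function (row) {
  return {
    query: nested.query,
    scraped_at: nested.scraped_at,
    hotel: row.hotel,
    rank: row.rank,
  };
});

console.log(flat.length); // one row per ranking
```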

I want to change this and improve it!

Be my guest! GitHub here.

Enjoy

Comments & questions welcome. Feel free to fork and add to it; I am not a good JS developer by any stretch of the imagination.

By Dominic Woodman. This bio is mostly here because it looked good in mock-ups.
