Back in 2013, eBay published the results of a study conducted by their Research Labs into the effectiveness of their paid search campaigns. The paper attempted to isolate the effect of their Google AdWords campaigns on the sales volumes of products sold on eBay and concluded that paid search was an unnecessary expense.
In 2014, Google released the Panda 4.0 update to their search algorithm and eBay lost an estimated 80% of its organic search traffic almost overnight. eBay, as we know, continued to thrive, largely because they get a lot of direct traffic, but they were hurt and it’s taken them some time to recover.
You may wonder why I’m bringing this up so many years after it happened. Well, the effects of this little exchange are still being felt today: they led to eBay’s structured data initiative and a world of confusion for many sellers. It’s also one of the reasons Listabl exists.
For the uninitiated, it’s probably worth explaining a little of the context. When you enter a query on Google, the search results will typically display a mixture of paid-for ads and free results. Ads are paid for on a Cost Per Click (CPC) basis and have been Google’s main source of revenue for some time, but advertisers have long debated the value of bidding on terms they would rank for anyway in the natural search results. You can see this if you search for IBM: the top result is an ad, for which Google will charge IBM when someone clicks on it, while the second result is the free “organic” result. Both take users to the same page.

Google search results for IBM

eBay’s share of search rankings (dates are US format). Source: Moz.com
Back to eBay’s paper. eBay’s researchers used a combination of geo-targeting (showing ads only to people in specific locations) and comparison with other search engines (Yahoo and Microsoft, to be precise) to give them a baseline against which they could measure results. Their conclusion was that products sold almost exactly the same number of units when their Google ads were removed.
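To make the logic of that experiment concrete, here’s a toy version of the test/control comparison in Python. All the numbers are invented and the paper’s actual analysis was far more sophisticated; this just shows the difference-in-differences idea of measuring regions where ads were pulled against a baseline of regions where they kept running.

```python
# Toy version of the geo test/control logic; all numbers are invented.
ads_on = {"before": 1000, "after": 1010}   # regions where ads kept running
ads_off = {"before": 1000, "after": 1009}  # regions where ads were pulled

# Difference-in-differences: the change in the ads-off regions minus the
# change in the ads-on baseline regions isolates the effect of the ads.
effect = ((ads_off["after"] - ads_off["before"])
          - (ads_on["after"] - ads_on["before"]))

print(f"Estimated sales effect of removing ads: {effect}")  # effectively zero
```

A result close to zero is what eBay reported: sales barely moved when the ads disappeared.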
eBay was the second biggest spender on AdWords at the time and Harvard Business Review predicted that Amazon, Walmart and Walgreens would soon follow suit and ditch AdWords. One can only imagine the conversation at Google.
In fairness, Google had already been rolling out changes to their algorithm for some years, and the result is undeniably a better user experience, but the release that followed this paper had a devastating effect on eBay’s traffic. Ostensibly, it was designed to deal with duplicate content: this was the heyday of comparison shopping engines (CSEs) and Google’s results pages were awash with duplicate listings for the same thing, just from different companies.
Long story short, Panda 4.0 had the desired effect. It killed the CSE listings, but it also killed a lot of marketplace listings, and it absolutely slaughtered eBay, who had already shut down most of their paid search campaigns and had to scramble to get them back up and running to recover at least some of the lost traffic.
eBay actually had a bigger problem than they had first realised. If a business sold a product through its own website and also on eBay, then unless the content had been materially changed before listing on eBay, Google now recognised the two pages as duplicates and would only show one of them: the one that ranked highest. And since eBay’s listings tended to be short-lived (a large proportion were auctions), Google barely had time to find and index them before the item sold and the page expired. Suffice to say, when it came to duplicate content, a lot of eBay pages got removed.
Amazon didn’t have this problem. Because sellers share a common product listing page, those pages are, effectively, permanent. Sellers may come and go, but the page remains. And since Amazon is a more popular domain than… well, almost everyone, its page would rank higher than the original seller’s, meaning Amazon’s wins and the others are the ones to get removed (something for all sellers to bear in mind).
Essentially, what I’m talking about here is a structured data problem.
Structured data generally refers to data that is organised in an agreed way: if you’re accepting data from multiple sources, it will all be structured the same way. It has some technical implications, too. Structured data is commonly stored in a relational database, and users communicate with relational databases using SQL (Structured Query Language).
Unstructured data is often stored in non-relational databases, also known as NoSQL databases. Unlike relational databases, there is no single query language shared by NoSQL databases.
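To make the distinction concrete, here’s a minimal sketch in Python using the standard-library sqlite3 module (the product values are made up for illustration). The relational table enforces an agreed shape on every row, while the document-style records can each take their own shape, leaving every consumer to cope with the differences.

```python
import json
import sqlite3

# Structured: every row must fit the agreed schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (gtin TEXT PRIMARY KEY, title TEXT, colour TEXT)")
db.execute("INSERT INTO products VALUES (?, ?, ?)",
           ("0012345678905", "Leather boot", "Burgundy"))

# Every consumer can rely on the same columns being present.
for row in db.execute("SELECT gtin, title, colour FROM products"):
    print(row)

# Unstructured (document-style): each record can have its own shape,
# with missing or extra fields from one record to the next.
listings = [
    {"title": "Leather boot", "colour": "Burgundy"},
    {"title": "42in TV", "screen_size": "42in", "notes": "boxed, unused"},
]
print(json.dumps(listings, indent=2))
```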
eBay’s data is, essentially, unstructured, and the Structured Data Initiative is their attempt to structure it. They want their own Buy Box.

eBay’s aspects are intended to improve the buyer experience.

The Schema.org vocabulary is recognised by Google and Microsoft
Merging offers from sellers onto a single listing page is the goal, and it makes all kinds of sense: it’s a better user experience because your search results pages aren’t cluttered up with loads of offers for the same thing; and, once merged, you can show the buyer the offer you think they would like most.
In March 2015, they announced that they would require GTINs… on some items. But after four years, they’ve barely touched the sides. Their cataloguing process is clunky and inefficient compared with Amazon’s, and they knew it would take them forever to get around all of their 1.3 billion unstructured listings.
So, in 2019, they announced a new focus, this time on Aspects (or Item Specifics), and in the same year eBay’s Research division announced a competition in which some of the top universities in the US would use machine learning to work out which items are identical. Instead of asking sellers to list against a catalogue, they are now just asking them to supply the data points and they’ll take care of the rest.
It’s an ambitious programme and only something that a company with pots of cash would entertain. It may well lead to something, but it’s got a long way to go if it’s going to clean up the current issues and, even when it’s implemented, it’s only the catalogue step that goes away: they’ll still need the core information from sellers in the first place.
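To illustrate why this is hard, here’s a minimal sketch of the naive approach: grouping listings by a deduplication key built from a GTIN where one exists, or from normalised aspects otherwise. The records and field names are hypothetical. Exact-key matching like this only catches the easy cases, which is precisely why eBay is throwing machine learning at the rest.

```python
from collections import defaultdict

# Hypothetical listing records; real matching is a machine-learning problem.
listings = [
    {"id": 1, "gtin": "0012345678905",
     "aspects": {"Brand": "Acme", "Colour": "Red", "Size": "9"}},
    {"id": 2, "gtin": None,
     "aspects": {"brand": "ACME", "colour": "red", "size": "9"}},
    {"id": 3, "gtin": "0012345678905",
     "aspects": {"Brand": "Acme", "Colour": "Red", "Size": "9"}},
]

def merge_key(listing):
    """Prefer a GTIN; otherwise fall back to normalised aspects."""
    if listing["gtin"]:
        return ("gtin", listing["gtin"])
    aspects = {k.strip().lower(): str(v).strip().lower()
               for k, v in listing["aspects"].items()}
    return ("aspects", tuple(sorted(aspects.items())))

groups = defaultdict(list)
for listing in listings:
    groups[merge_key(listing)].append(listing["id"])

# Listings 1 and 3 merge on GTIN; listing 2, the same product without a
# GTIN, ends up in its own group — the gap the ML programme has to close.
print(dict(groups))
```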
For me, eBay’s struggles with structured data illustrate just how important it is to anyone who sells online, and how important the ability to translate that data efficiently is. But eBay is just one marketplace and each channel behaves differently: Amazon is still investing in their barcode system, having recently(ish) integrated with GS1 to check the validity of barcodes; La Redoute want titles of no more than 40 characters; Zalando have more stringent requirements than anyone.
Meanwhile, in websiteland, another structured data programme exists (two, in fact), designed to make it easier for search engines to understand what a page is about.
The first of these is JSON-LD, “JSON for Linked Data” (which Google prefers); the other is Microdata. Both are ways of marking pages up with the Schema.org vocabulary, and both are attempts to standardise product data, this time so that websites can be better understood by Google and Microsoft.
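As an illustration, here’s a sketch in Python that builds a Schema.org Product object and wraps it in the script tag that the JSON-LD approach uses. The product values (name, GTIN, price) are invented; @context, @type, gtin13, offers and the other property names come from the Schema.org vocabulary.

```python
import json

# A Schema.org "Product" built as a plain dict; values are illustrative.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Leather boot",
    "gtin13": "0012345678905",
    "brand": {"@type": "Brand", "name": "Acme"},
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock",
    },
}

# Embedded in the page's HTML inside a script tag, where crawlers look:
snippet = ('<script type="application/ld+json">'
           + json.dumps(product_jsonld)
           + "</script>")
print(snippet)
```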
Then there are the affiliate networks, the retargeting platforms and the comparison sites: they all want the same data, just in different ways. If it’s a shoe, it might be Red or Burgundy or Rouge depending on where you’re sending it; if it’s a TV, it might be 42in or 107cm.
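Here’s a minimal sketch of that per-channel translation in Python. The channel names and colour mappings are hypothetical, standing in for whatever each platform’s feed specification demands; the 40-character title cap echoes the La Redoute requirement mentioned above.

```python
# One master record; channel names, colour maps and limits are hypothetical.
master = {
    "title": "Classic Burgundy Leather Chelsea Boot with Elasticated Sides",
    "colour": "Burgundy",
}

COLOUR_MAP = {
    "channel_a": {"Burgundy": "Red"},    # a channel with coarse colour buckets
    "channel_b": {"Burgundy": "Rouge"},  # a French-language feed
}

def translate(record, channel, title_limit=None):
    """Render the master record in one channel's expected format."""
    out = dict(record)
    out["colour"] = COLOUR_MAP.get(channel, {}).get(record["colour"],
                                                    record["colour"])
    if title_limit:
        out["title"] = record["title"][:title_limit]
    return out

# e.g. a feed that, like La Redoute, caps titles at 40 characters:
print(translate(master, "channel_b", title_limit=40))
```

The point isn’t the mapping itself, it’s that one clean master record can be rendered into as many channel formats as you need.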
Your website, too, needs consistently formatted data. The only difference is that you control the format.
As a seller, you want access to all of these channels, and then you want to be able to manage them all. That’s the tricky part. How do you do it effectively without spending a fortune in either time or money?
For me, that is the question. Because, if you can get that right, you open up a whole world of possibilities.

Breakdown of a product page by Comalytics