Bill Slawski is an SEO legend.
He’s been doing SEO consulting professionally since 1996 (yes, that’s before Google was founded) and the community knows him as the guy who loves diving deep into search engine patents and whitepapers.
If you really want to understand how search engines work, his blog SEO by the Sea is one of the most comprehensive resources on this topic.
In this interview, he’s sharing some of his wisdom with us…
You’ve been in the SEO industry for 24 years. Is there something that can still surprise you in the SEO world?
I am sometimes surprised when searchers change how they search for something because what they call it has changed, or because what they find most interesting about a topic has changed.
Some things aren’t surprising, such as knowing that people would be interested in watching a diamond getting crushed by a hydraulic press, or wanting to see where around Washington DC people chose to get engaged.
Your audiences are the most important part of your SEO campaigns; learn about them, and their interests, and pay attention to them. Social listening can pay off.
What is the biggest change you’ve noticed in SEO since you started?
The biggest change (and one that is slowly rolling out) is the movement towards indexing real-world objects.
In 2012, when Google’s Knowledge Graph rolled out, search began transforming from returning results based on matching strings of text in queries against strings of text found in documents to returning results based on the real-world things those strings refer to.
Learning about entities, and optimizing pages for entities, related entities, entity attributes, and entity classifications, as well as for the relationships between entities, between entities and attributes, and between entities and classifications, means that a knowledge graph has become at least as important to SEO as a link graph.
Google is indexing images based on recognizing what objects are in them, instead of only what text might be associated with them (it is likely still looking at both at this point). Google has also gotten better at indexing the images and audio in videos, and is no longer limited to the text associated with those videos.
Many SEOs complain about Google taking up more and more space in the SERPs, leaving less space for organic results. What’s your take on this?
Google is augmenting SERPs with universal results and with knowledge-based results such as knowledge panels, featured snippets, structured snippets, related entities, “people also ask” questions, and “people also search for” related queries.
These universal results and knowledge-based results are all organic results, and can be optimized for. They can provide more information to searchers about particular clients and topics.
They offer more opportunities for people to learn about particular businesses they might find online, which can strengthen the share of searches for those brands in Google Search.
You often stress the importance of learning how Google works directly from the source. But does an average webmaster really need to read Google research publications or patents to learn SEO?
Someone who drives a car doesn’t need to understand how a transmission works, and someone who cooks doesn’t need to be trained at a Culinary Institute. But if they want to fix a car or cook a fancy meal, having some knowledge can be helpful.
Google provides some sources of information, such as its guidelines and blog posts, that an average webmaster may find worth following, since they are making a living on the Web.
When someone wants to get more involved in making changes to a site, working with someone to get their pages to rank better, or taking on the responsibility of improving rankings on their own, they might want to develop more expertise, and that can be helped by some more in-depth learning.
“Your audiences are the most important part of your SEO campaigns; learn about them, and their interests, and pay attention to them.”
If a webmaster is going to rely upon someone else to help with rankings and making changes to their site, the people they work with should develop expertise, which can be from a combination of experience and education.
Patents and whitepapers can help with the development of that expertise. They can point out possibilities, build an awareness of how a search engine might be approaching different aspects of search, and provide some ideas that can be helpful in performing SEO.
What official resources do you consider the “bare minimum” every website owner should be familiar with?
I would recommend that website owners read the Google Webmaster Central Blog and The Keyword blog to keep up with changes taking place at Google.
There are a number of Google support pages and Google developer pages that are probably worth looking over as well.
Also, spending some time developing a familiarity with Google Search Console and Google Analytics gives site owners an awareness of how their sites are doing and some control over their sites’ success.
Google algorithm updates are a big topic in the SEO community. What would you recommend to someone who’s been hit by an algo update negatively?
There are a number of reasons why a site might find itself losing traffic.
These can be because of:
- changes with competitors (they may start doing SEO, or doing it better)
- changes with searchers (they may change how they search, or what they are interested in)
- changes to a site (problems with a server, or a need to update content, or improve user experience)
- an update by a search engine
- a penalty
I would recommend that most site owners read the blog post by Amit Singhal from Google, “More guidance on building high-quality sites,” because it asks some very important questions about site quality that site owners want to be able to answer.
It is likely that any site going through a core update at Google has to meet certain quality thresholds to continue to rank well.
You’re also kind of an SEO myth-buster. What do you think is the most harmful SEO myth out there?
The Web is filled with information, and it is also filled with misinformation. The most harmful myth is one that you fall for, and that causes harm to a site because it results in lost opportunities.
There are ways to avoid such harm. Those involve exercising critical thinking and testing things out before relying upon them.
You often say that LSI keywords are nonsense. Can you please explain why, in the simplest way possible?
Latent Semantic Indexing is a method of indexing content in small, static databases that was developed by inventors at Bell Labs a couple of years before the Web was launched.
It was built for enterprise databases that weren’t changed very frequently. It needs to be run every time content is added to the database it has indexed.
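To make that concrete, here is a minimal, purely illustrative sketch of the technique, using a tiny hypothetical corpus: LSI builds a term-document matrix, factors it with a truncated singular value decomposition (SVD), and compares documents in the reduced “latent” space. The corpus and variable names below are assumptions for illustration, not from any real system.

```python
# A toy illustration of Latent Semantic Indexing (LSI).
import numpy as np

docs = [
    "search engine ranking",
    "search engine optimization",
    "cooking pasta recipes",
]

# Build the vocabulary and the term-document count matrix.
vocab = sorted({word for doc in docs for word in doc.split()})
A = np.array(
    [[doc.split().count(term) for doc in docs] for term in vocab],
    dtype=float,
)

# Truncated SVD: keep only k latent "concept" dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim vector per document

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Documents about similar topics land close together in latent space.
print(cosine(doc_vectors[0], doc_vectors[1]))  # high: both about search
print(cosine(doc_vectors[0], doc_vectors[2]))  # near zero: unrelated

# Crucially: adding a new document changes A, so the SVD must be
# recomputed from scratch -- workable for a small, static database,
# impractical for something the size of the Web.
```

As the final comment notes, the need to refactor the whole matrix on every addition is exactly why the technique fit static enterprise databases rather than the Web.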
LSI keywords do not use LSI, and are not keywords.
— Bill Slawski ⚓ 🇺🇦 (@bill_slawski) March 31, 2019
“LSI keywords” is often used as a slang term for words added to the content of a page that supposedly help that page rank higher for whatever term or phrase the page may be optimized for.
The practice isn’t based on LSI, and even if it were, LSI wouldn’t necessarily help with a corpus the size of the Web, or one that changes as quickly.
Social signals – are they a ranking factor? Do they impact SEO?
There is no indication that social signals are used by any search engine as ranking factors.
There may be a correlation between pages with many positive social signals and pages that rank well, but causation between those social signals and rankings has never been shown.
Google did mention, in a patent about news stories, that topics for “top stories” may be selected based upon increased popularity of query terms, or upon a topic trending highly on social sites. That is potentially how a topic involved in top stories might be selected at Google, but it isn’t how stories based on those topics might be ranked.
It is possible that pages that are shared more frequently may be chosen by site owners and page editors as ones they may want to link to, so there may be indirect SEO value to sharing and liking pages on social networks.
There’s still a lively discussion about how to interpret the bounce rate. What’s your take? Good thing? Bad thing? Should you try to lower it or not?
There can be a good amount of value in having effective calls to action on pages, persuading people to click through to another page on a site from the one they first found through search. If that CTA leads to a conversion, then there is a lot of value in improving (that is, lowering) the bounce rate on that page.
If a page provides a positive experience and fulfills a searcher’s informational or situational needs, then it could lead to return visits, and possibly referrals, links, and shares.
A visit to a page from someone in search of a phone number or an address may lead to a conversion offline and could be considered a bounce, but in that case, it isn’t harmful.
Another popular myth is keyword density. Is there any case when you should care about it?
There was a myth that a word needed to appear on a page a certain number of times relative to all of the words on that page (at a certain density), and that this density would vary based upon the industry or niche a site was in.
There has never been any actual proof of such a keyword density existing, or of it varying based upon industry or niche.
However, pages are often ranked based on a combination of query-independent scores, such as PageRank, and query-dependent scores, such as the presence of a query term on a page (or in anchor text pointing to that page).
So having a term or phrase actually appear on a page can be used as part of an information retrieval score to rank that page.
“The most harmful myth is the one that you fall for, and that causes harm to a site, because it results in lost opportunities.”
This usage of a term or phrase is often referred to as “term frequency.” An information retrieval score may be based upon that term frequency combined with an inverse document frequency, which reflects how often the term or phrase appears across documents in the corpus the page is indexed in.
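To make the arithmetic concrete, here is the textbook TF-IDF formulation as a minimal sketch; the corpus below is hypothetical, and this is a simplified stand-in for the idea, not Google’s actual scoring:

```python
# Textbook TF-IDF: a term's weight rises with how often it appears in a
# document and falls with how common it is across the corpus.
import math

corpus = [
    "seo guide for beginners",
    "seo patents and whitepapers",
    "chocolate cake recipe",
]

def tf_idf(term, doc, corpus):
    words = doc.split()
    tf = words.count(term) / len(words)          # term frequency in this document
    df = sum(term in d.split() for d in corpus)  # documents containing the term
    idf = math.log(len(corpus) / df)             # rarer terms weigh more
    return tf * idf                              # assumes the term occurs somewhere (df > 0)

print(tf_idf("seo", corpus[0], corpus))      # ~0.10: "seo" is in 2 of 3 docs
print(tf_idf("recipe", corpus[2], corpus))   # ~0.37: "recipe" is rarer, so it weighs more
```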
There are many “SEO gurus” but not all of them are reliable. How do you determine which SEO advice to trust?
- Is there enough supporting evidence to believe in the advice?
- Are there sound arguments instead of logical fallacies or unproven statements?
- Are there wild conclusions based on things such as “gut instincts”?
Who would you recommend following in the SEO industry?
- Eric Enge – @stonetemple
- Lily Ray – @lilyraynyc
- Jason Barnard – @jasonmbarnard
- Hamlet Batista – @hamletbatista
- Paul Shapiro – @fighto
- Patrick Stox – @patrickstox
- Alexis Sanders – @AlexisKSanders
- Dawn Anderson – @dawnieando
- Brian Gorman – @briangormanGFD
- Roger Montti – @martinibuster
- Ammon Johns – @Ammon_Johns
- David Harry – @theGypsy
- Doc Sheldon – @DocSheldon
- Terry van Horne – @terryvanhorne
- Kim Krause Berg – @kim_cre8pc
- AJ Kohn – @ajkohn
What blog post made you “wow” recently?
Hamlet Batista’s How Natural Language Generation Changes the SEO Game
If there’s one piece of advice you could give to an SEO beginner, what would it be?
Understand Schema, because it is one of the fastest-growing aspects of SEO at this time.
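As a starting point, here is a minimal, hypothetical sketch of what Schema.org structured data looks like, generated as JSON-LD with Python’s json module; the property values are placeholders, and schema.org documents the full vocabulary:

```python
# Building a Schema.org "Article" description as JSON-LD.
# All values here are hypothetical placeholders.
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How Search Engines Index Entities",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2020-06-01",
}

# The output would typically be embedded in a page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(article, indent=2))
```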
Finally, let’s play a game. Never have I ever:
- Published an article I’d consider garbage now: I published a post about the most popular and least popular TLDs, using the site: search operator to count how many pages were published under each. I’d consider it garbage because Google’s site: search only provides a rough estimate of how many pages match, and I didn’t know that at the time I wrote it.
- Thought about quitting my SEO career: I have never thought about quitting my SEO career because there is so much to learn and to teach.
- Bought a backlink: I have never bought a backlink because people will link for free if you give them something worth linking to.
“I have never bought a backlink because people will link for free if you give them something worth linking to.” – Bill Slawski
Bill Slawski @bill_slawski
Bill Slawski grew up on the Jersey shore, went to the University of Delaware and Widener University School of Law. He worked at the Superior Court of Delaware, first as a legal administrator, and then as a technical administrator.
He learned how to build websites, built one for a couple of friends, and promoted that site until he came across SEO. He has been doing SEO ever since.
He started blogging in 2005, and has written over 1,500 posts about search-related patents and whitepapers, not because he likes patents, but because they are one of the best sources of information about search engines.
He’s the Director of SEO Research at Go Fish Digital and the founder of SEO by the Sea.