Bill Slawski is an SEO legend.
He’s been doing SEO consulting professionally since 1996 (yes, that’s before Google was founded) and the community knows him as the guy who loves diving deep into search engine patents and whitepapers.
If you really want to understand how search engines work, his blog SEO by the Sea is one of the most comprehensive resources on this topic.
In this interview, he’s sharing some of his wisdom with us…
I am surprised sometimes when searchers change how they search for something based on a change in what they might call it, or when what they find most interesting about a topic changes.
Some things aren’t surprising such as knowing that people would be interested in watching a diamond getting crushed by a hydraulic press, or wanting to see where around Washington DC people chose to become engaged.
Your audiences are the most important part of your SEO campaigns; learn about them, and their interests, and pay attention to them. Social listening can pay off.
The biggest change (and one that is slowly rolling out) is the movement towards indexing real-world objects.
In 2012, when Google’s knowledge base rolled out, search began transforming from matching strings of text in queries against strings of text found in documents toward understanding the real-world things those strings refer to.
Learning about entities, and optimizing pages for entities, related entities, entity attributes, and entity classifications, as well as for the relationships between entities, between entities and attributes, and between entities and classifications, means that a knowledge graph has become at least as important to SEO as a link graph.
Google is indexing images based on recognizing what objects appear in them, instead of only the text associated with those images (it is likely still looking at both at this point). Google has also gotten better at indexing the images and audio within videos, and is no longer limited to the text associated with those videos.
Google is augmenting SERPs with universal results and with knowledge-based results such as knowledge graphs, featured snippets, structured snippets, related entities, “people also ask” questions, and “people also search for” related queries.
These universal results and knowledge-based results are all organic results, and can be optimized for. They can provide more information to searchers about particular clients and topics.
They offer more opportunities for searchers to learn about particular businesses that they might find online and can strengthen awareness of a particular brand, or product or business.
Someone who drives a car doesn’t need to understand how a transmission works, and someone who cooks doesn’t need to be trained at a Culinary Institute. But if they want to fix a car or cook a fancy meal, having some knowledge can be helpful.
There are some sources of information that Google provides, such as its guidelines and blog posts, that an average webmaster may find worth following, since that webmaster is making a living on the Web.
When someone wants to get more involved in making changes to a site or working with someone to get their pages to rank better, or taking on the responsibility of improving rankings on their own, they might want to develop more expertise, and that may be helped by some more in-depth learning.
If a webmaster is going to rely upon someone else to help with rankings and making changes to their site, the people they work with should develop expertise, which can be from a combination of experience and education.
Patents and whitepapers can help with the development of that expertise. They can help point out possibilities and an awareness of how a search engine might be approaching different aspects of search, and provide some ideas that can be helpful in performing SEO.
There are a number of Google support pages, and Google developer pages that are probably worth looking over as well.
Also, spending some time developing a familiarity with Google Search Console and Google Analytics allows site owners to see how their sites are doing and gives them some control over the success of their sites.
There are a number of reasons why a site might find itself losing traffic.
I would recommend that most site owners read the blog post by Amit Singhal of Google, “More guidance on building high-quality sites,” because it asks some very important questions about site quality that site owners should be able to answer.
It is likely that any site going through a core update at Google has to meet certain quality thresholds to continue to rank well.
The Web is filled with information, and it is also filled with misinformation. The most harmful myth is one that you fall for, and that causes harm to a site because it results in lost opportunities.
There are ways to avoid such harm. Those involve exercising critical thinking and testing things out before relying upon them.
Latent Semantic Indexing is a method of indexing content in small, static databases that was developed by inventors at Bell Labs a couple of years before the Web was launched.
It was built for enterprise databases that weren’t changed very frequently, and it needs to be re-run every time content is added to the database it has indexed.
“LSI Keywords” is often used as a slang term by a number of people to stand for adding words to content on a page that supposedly help that page rank higher for whatever term or phrase that page may be optimized for.
It isn’t based on LSI, and even if it were, LSI wouldn’t necessarily help with a corpus the size of the Web, or one that changes as quickly.
There is no indication that social signals are used by any search engine as ranking factors.
There may be a correlation between pages with many positive social signals, but there has never been shown to be any actual causation between those social signals and rankings.
Google did mention, in a patent about how news stories are selected as topics for “top stories,” that it may do so based upon increased popularity of query terms, or upon a topic trending highly on social sites. That is potentially how a topic appearing in top stories might be selected by Google, but it isn’t how stories on those topics might be ranked.
It is possible that pages that are shared more frequently may be chosen by site owners and page editors as ones they may want to link to, so there may be indirect SEO value to sharing and liking pages on social networks.
There can be a good amount of value in having effective calls to action on pages that persuade people to click through to another page on a site from one they first find through search. If that CTA leads to a conversion, then there is a lot of value to improving the bounce rate on that page.
If a page provides a positive experience and fulfills a searcher’s informational or situational needs, then it could lead to return visits, and possibly referrals, links, and shares.
A visit to a page from someone in search of a phone number or an address may lead to a conversion offline and could be considered a bounce, but in that case, it isn’t harmful.
There was a myth that a word needed to be on a page a certain number of times compared to all of the words found on a page (at a certain density) and that density would vary based upon the industry or niche that a site was in.
There has never been any actual proof of such a keyword density existing, or of it varying based upon industry or niche…
However, pages are often ranked based on a combination of query-independent scores such as PageRank and query-dependent scores such as the presence of a query term appearing on a page (or in anchor text pointing to a page).
So having a term or phrase actually appearing upon a page can be used as part of an information retrieval score to rank a page.
This usage of that term or phrase is often referred to as “term frequency.” The Information Retrieval score may be based upon that term frequency and an inverse document frequency based upon how often that term or phrase might appear in documents from the corpus that the page is indexed in.
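The term frequency / inverse document frequency idea described above can be sketched in a few lines of Python. This is a simplified illustration of the classic TF-IDF formula, not Google's actual scoring; the tiny corpus and the `tf_idf` function are invented for the example:

```python
import math
from collections import Counter

def tf_idf(term, doc, corpus):
    """Score one term in one document against a small corpus.

    tf  = how often the term appears in the document
    idf = log(N / number of documents containing the term)
    """
    tf = Counter(doc.lower().split())[term]
    df = sum(1 for d in corpus if term in d.lower().split())
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

corpus = [
    "seo by the sea covers search patents",
    "patents describe how search engines rank pages",
    "a page about cooking pasta at home",
]

# "patents" appears in 2 of 3 documents, so each occurrence earns
# a modest idf; "cooking" appears in only 1, so it scores higher.
print(tf_idf("patents", corpus[0], corpus))
print(tf_idf("cooking", corpus[2], corpus))
```

The intuition matches the interview: simply repeating a word (raising its term frequency) earns diminishing value, because the score also depends on how rare the word is across the indexed corpus.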
Hamlet Batista’s How Natural Language Generation Changes the SEO Game
Understand Schema, because it is one of the fastest-growing aspects of SEO at this time.
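Schema markup is most commonly published as JSON-LD embedded in a page. As a minimal sketch, the snippet below builds a schema.org `Organization` object with Python's standard `json` module; the name and URLs are placeholders, not values from the interview:

```python
import json

# A minimal schema.org "Organization" object. The name and URLs
# are hypothetical placeholders for illustration only.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Consulting",
    "url": "https://www.example.com",
    "sameAs": ["https://twitter.com/example"],
}

# On a real page this would be embedded inside a
# <script type="application/ld+json"> ... </script> tag.
json_ld = json.dumps(org, indent=2)
print(json_ld)
```

Validating markup like this against Google's Rich Results Test before publishing is a common sanity check.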
Bill Slawski @bill_slawski
Bill Slawski grew up on the Jersey shore, went to the University of Delaware and Widener University School of Law. He worked at the Superior Court of Delaware, first as a legal administrator, and then as a technical administrator.
He learned how to build websites, and built one for a couple of friends, and promoted that site until he came across SEO. He then started doing SEO.
He started blogging in 2005, and has written over 1,500 posts about search-related patents and whitepapers, not because he likes patents, but because they are one of the best sources of information about search engines.
This post was last modified on September 7, 2021 11:07 am