All On-Page SEO Factors 2023
On-page SEO (search engine optimization) factors can have a major impact on your page's ability to rank if they are properly optimized.
Since SEO is one of the most important online marketing disciplines and should be practiced by every company in Switzerland, it is worth understanding in detail how a website gets ranked.
Good on-page optimization makes your website easier for users to find and earns it a higher position in Google's search results.
Compared to off-site SEO, on-page SEO is much more difficult to implement because it involves many technical aspects such as:
- Meta Title (Title Tag)
- Meta Description
- Page speed / website loading time
- Page structure optimization
- Core Web Vitals Customization & Improvement
- Content creation & continuous optimization
- Duplicate Content Removal
- And many more elements
Off-page SEO refers to backlinks, quotes from other websites, reviews, domain authority, etc.
With every small on-page optimization, you come one step closer to the desired rankings in Google search results.
The biggest on-page factors (also called on-site factors) that influence search engine rankings are:
Positive On-Page SEO Factors
OnPage SEO Factor #1: Keyword in URL
Keywords and phrases that are in the URL of the page, outside of the domain name, help to increase the relevance of the content for a specific search term.
This effect is weaker when the URL is longer, or when the keyword has already been used in several different URLs within the same website.
Source(s): Patent US 8489560 B1, Matt Cutts
On-Page SEO Factor #2: Keyword Order in a URL
The order in which keywords are displayed in a URL is important.
It has been argued that keywords appearing earlier in a URL carry more weight.
Matt Cutts has at least confirmed that the weight of a keyword decreases "after about five" words.
Source(s): Matt Cutts
On-Page SEO Factor #3: Keyword in the title tag
Title tags define the title of a document or page on your website.
They are frequently displayed both in the SERP and as snippets for social sharing.
A title tag should not be longer than 60-70 characters, depending on the width of the characters used.
As with the URL, theoretically keywords that appear closer to the beginning will receive more weight.
From an SEO perspective, including a keyword in the title tag is a MUST.
Source(s): US 20070022110 A1
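As a quick illustration, a short script can flag titles that are too long or that bury the keyword. This is a rough sketch only; the 60-character limit and the "first half of the title" heuristic are assumptions for the example, not Google's actual rules:

```python
def check_title_tag(title: str, keyword: str, max_len: int = 60) -> list[str]:
    """Flag common title-tag issues (illustrative heuristics only)."""
    warnings = []
    if len(title) > max_len:
        warnings.append(f"title is {len(title)} characters and may be truncated")
    pos = title.lower().find(keyword.lower())
    if pos == -1:
        warnings.append("keyword missing from title")
    elif pos > len(title) // 2:
        warnings.append("keyword appears late in the title")
    return warnings

print(check_title_tag("On-Page SEO Factors: The Complete 2023 Checklist", "SEO"))  # -> []
```

An empty list means the title passes both checks in this toy model.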
On-page SEO factor #4: Keyword density of the page
Keyword density is the percentage of a text's words accounted for by a given keyword.
SEOs once shaped all content so that a single keyword or phrase made up 5.5%-6% of the text. In the early 2000s, this was very effective.
Google has since improved its content analysis with other methods, making these tactics hardly relevant in 2015.
And keyword density, although referenced in Google patents, is almost certainly just a simplified concept within TF-IDF, which we will discuss next.
Source(s): Patent US 20040083127 A1
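The calculation itself is trivial, which is part of why the tactic was so easy to abuse. A naive single-word sketch, assuming simple whitespace tokenization:

```python
def keyword_density(text: str, keyword: str) -> float:
    """Percentage of words in `text` that are exactly `keyword` (naive, single-word)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return 100.0 * words.count(keyword.lower()) / len(words)

sample = "seo tips for seo beginners who want to learn seo fast from scratch today"
print(round(keyword_density(sample, "seo"), 1))  # -> 21.4
```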
On-Page SEO Factor #5: TF-IDF of a page
TF-IDF weighs the density of keywords on a page against what is "normal", rather than just looking for a flat, raw percentage.
This makes it possible to ignore words like "the" in the calculation, and establishes how often a knowledgeable writer would naturally mention a term like "Google Ranking Factors" in a single document on that topic.
Source(s): Dan Gillick and Dave Orr, Patent US 7996379 B1
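The idea can be sketched in a few lines. This is a toy version of the classic TF-IDF formula, not Google's actual implementation, using a made-up three-document corpus:

```python
import math
from collections import Counter

def tf_idf(term: str, doc: list[str], corpus: list[list[str]]) -> float:
    """Classic TF-IDF: how often `term` occurs in `doc`, discounted by how
    common it is across the whole corpus."""
    tf = Counter(doc)[term] / len(doc)
    docs_with_term = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / (1 + docs_with_term))
    return tf * idf

corpus = [
    "google ranking factors".split(),
    "the cat sat on the mat".split(),
    "the dog and the cat".split(),
]
# A stop word like "the" appears in most documents, so its weight collapses
# to zero, while "ranking" keeps a positive score in the document that uses it.
print(tf_idf("the", corpus[1], corpus), tf_idf("ranking", corpus[0], corpus))
```

This is exactly the behavior described above: flat density of a common word counts for nothing, while topical terms retain weight.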
OnPage SEO factor #6: Key phrases in heading tags
Keywords in heading tags carry significant weight in determining the relevant topic of a page.
An H1 tag carries the most weight, H2 has less, and so on.
This tag also improves accessibility for screen readers, and clear, descriptive headings reduce bounce rates according to various studies.
Source(s): In The Plex, Penn State
On-Page SEO Factor #7: Words with striking formatting
Bold, italic, underlined, or larger fonts carry more weight in determining the relevant topic of a page, but less than words in a heading.
This is confirmed by Matt Cutts, SEOs, and a patent which states: "Matches in text with larger font size or bold or italic text can be weighted more heavily than matches in normal text."
Source(s): Matt Cutts, Patent US 8818982 B1
On-Page SEO Factor #8: Keywords in close proximity
The proximity of the words to each other implies association.
For anyone who has ever mastered the English language, this comes as no surprise.
A section about your SEO work in Chicago will therefore contribute more to ranking for "Chicago SEO" than two paragraphs, one about SEO and one about Chicago.
Source(s): Patents: US 20020143758 A1, US 20080313202 A1
OnPage SEO Factor #9: Keyword in ALT text
The ALT attribute of an image describes the image for search engines and for user agents that cannot display it.
This creates relevance, especially for image search, and simultaneously improves accessibility.
Source(s): Matt Cutts
On-Page SEO Factor #10: Exact Keyword Match
Although Google often returns search results that contain only part of a search term as it appears on your page (or in some cases none of it), a patent states that a higher IR (Information Retrieval) score is given for an exact match.
In particular, it is stated that "a document that matches all terms of the search query may receive a higher score than a document that matches one of the terms".
Source(s): Patent US8818982 B1
On-Page SEO Factor #11: Partial Search Phrase Match
A Google patent establishes that an exact match to a search term on a page is treated as significant for the query results, a process known as "Information Retrieval" (IR).
This implies that you may still rank for certain search queries even when a page contains the search term in a form that differs from what was entered into Google.
This is further confirmed by a simple Google search.
Source(s): Patent US8818982 B1
On-Page SEO Factor #12: Keyword Placement
There is a natural trend in how we write English: earlier is usually more important.
This applies to sentences, paragraphs, pages, and HTML tags. Google seems to apply this across the board, giving more weight to content that appears earlier and becomes more visible.
This is at least one function of the page layout algorithm, which prioritizes what is displayed at the top of your website.
Source(s): Matt Cutts
OnPage SEO factor #13: Keyword stemming
Keyword stemming is the practice of taking the root or "stem" of a word and finding other words that have that stem in common.
Avoiding stemming, for example to hit a keyword density target, makes for poor readability and has negative effects.
This was introduced in 2003 with the Florida Update.
Source(s): Matt Cutts
OnPage SEO Factor #14: Internal Link Anchor Text
The anchor text of a link tells the user where that link leads. This is an important component of navigation within your website.
If not misused, it helps determine the relevance of the linked content, unlike vague alternatives such as "click here".
Source(s): Google's SEO Starter Guide
On-Page SEO Factor #15: The domain is the keyword
Also known as Exact Match Domain or EMD.
A powerful ranking bonus is assigned when a keyword exactly matches a domain and a search query meets Google's definition of a "commercial query".
This was designed so that brands would rank by their own names, but it was frequently exploited and consequently made less powerful under various circumstances.
Source(s): Patent EP 1661018 A2, US 8046350 B1
OnPage SEO Factor #16: The domain contains the keyword
A ranking bonus is assigned when a keyword or phrase is present in a domain name.
The weighting appears to be less significant than when the domain name exactly matches a given search query, but more important than a keyword appearing later in the URL.
Source(s): Patent EP 1661018 A2
On-Page SEO Factor #17: Keyword Density Across Domain
Krishna Bharat identified a problem with PageRank when he introduced Hilltop: “A web page that is generally authoritative may contain a page that matches a particular query, but is not an authority on the subject of the query.”
Hilltop improved search by examining the relevance of entire web pages labeled "experts".
Since TF-IDF determines relevance at the page level, we assume that Hilltop defines an "expert" domain using the same tools.
Source(s): Krishna Bharat, Patent US 7996379 B1
On-Page SEO Factor #18: TF-IDF across Domain
If you say "keyword density" instead of "term frequency", many SEO specialists get angry, even though they are perfect synonyms.
What is important regarding "keyword density" factors is, in turn, the second half of TF-IDF: Inverse Document Frequency.
Using TF-IDF, Google filters out filler words and dynamically evaluates what keyword density is natural for a given topic.
Comparative indicators of "how much is natural" have seemingly decreased over time.
Source(s): Dan Gillick and Dave Orr, Patent US 7996379 B1
On-Page SEO Factor #19: Distribution of Page Authority
As a rule, pages that are linked from every page of a website are assigned high authority, pages linked from those receive less, and so on.
A similar effect is often seen on pages linked from the homepage, as this is the most-linked page on most websites.
Creating a site architecture to maximize this factor is generally referred to as PageRank sculpting.
Source(s): Patent US 6285999 B1
OnPage SEO factor #20: Old domain
This is somewhat confusing, as a brand new domain name might also receive a temporary boost.
Older domains receive a little more trust, an effect that Matt Cutts points out is fairly insignificant (while acknowledging it exists).
Speculatively, this could be a reward for websites that have had the chance to prove they are not short-term black-hat projects.
Source(s): Matt Cutts
OnPage SEO factor #21: New domain
New domains may receive a temporary boost in rankings.
A patent discussing methods for detecting fresh content states that "the date that a domain with which a document is registered may be used as an indication of the inception date of the document".
However, according to Matt Cutts, the actual impact this has on one's own rankings is relatively small.
Speculatively, this could mean that a brand-new website or a timely niche site offers just enough opportunity to get started.
Source(s): Patent US 7346839 B2, Matt Cutts
On-Page SEO Factor #22: Hyphenated URL words
The ideal method for separating keywords in a URL is to use a hyphen.
Underscores can work, but are not as reliable, since search engines have historically treated them as joining characters because of their use in programming identifiers.
Running words together in a URL likely prevents them from being recognized as separate keywords, forfeiting the keyword-in-URL bonus.
Apart from these scenarios, using a hyphen does not in itself produce a higher rank.
Source(s): Matt Cutts
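A typical slug helper follows exactly this advice: lowercase the words and join them with hyphens. A minimal sketch; real slug libraries also handle accents and transliteration:

```python
import re

def slugify(title: str) -> str:
    """Build a hyphen-separated URL slug from a page title."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # any run of non-alphanumerics -> one hyphen
    return slug.strip("-")

print(slugify("All On-Page SEO Factors 2023!"))  # -> all-on-page-seo-factors-2023
```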
OnPage SEO Factor #23: Keywords earlier in the document
An SEO theory that emerged in the early 2000s as the "first third" rule.
It holds that our language, in sentences, titles, paragraphs, or even entire web pages, is generally ordered by importance.
Although Northcutt's word-order experiments have not been confirmed by Google, they often suggest that this is a factor.
Source(s): Speculation
On-Page SEO Factor #24: Long domain registration time
In this patent, Google directly states that longer domain registration terms predict the legitimacy of a domain.
Source(s): Patent US 7346839 B2
On-Page SEO Factor #25: Use of HTTPS (SSL)
SSL was officially announced as a new positive ranking factor in 2014, regardless of whether the site processed user input.
Gary Illyes downplayed the importance of SSL in 2015, calling it a tiebreaker.
However, in an algorithm based on the numerical evaluation of billions of websites, tiebreakers have often made the difference in competitors' search queries.
Source(s): Google, Gary Illyes
OnPage SEO Factor #26: Schema.org
With the advent of Schema.org, a joint project between Google, Yahoo!, Bing and Yandex to understand logical data entities via keywords, we are moving further away from the traditional search for “10 blue links”.
Currently, the use of structured data can improve ranking in a variety of scenarios.
There are also theories that schema.org markup can improve traditional search rankings through a ranking method known as entity salience.
Source(s): Schema.org, Matt Cutts
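A common use of structured data is a JSON-LD block describing a business. This sketch assembles one; the business details are hypothetical, purely for illustration:

```python
import json

# Hypothetical organization, used purely for illustration.
organization = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example SEO Agency",
    "url": "https://www.example.ch",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Zurich",
        "addressCountry": "CH",
    },
}

# This JSON payload would be embedded in the page inside a
# <script type="application/ld+json"> element.
print(json.dumps(organization, indent=2))
```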
OnPage SEO Factor #27: New Content
The full name of this factor is technically "fresh content, if the query deserves freshness".
The term "query deserves freshness" refers to search queries that would benefit from more up-to-date content.
This does not apply to every query, but it does apply to a whole range of information, especially for informational purposes.
These SEO benefits are just another reason why brand publishers are generally very successful.
Source(s): Matt Cutts
On-Page SEO Factor #28: Age of Content
A Google patent states: "For some queries, older documents may be more favorable than newer ones."
It goes on to describe a scenario in which a result set can be re-sorted according to the average age of the documents retrieved, before it is displayed.
Source(s): Patent US 8549014 B2
On-Page SEO Factor #29: Quality Outbound Links
Google rewards authoritative outbound links to "good websites".
To quote the source: "Parts of our system promote links to good websites."
Source(s): Matt Cutts
On-Page SEO Factor #30: Relevant outbound links
Given that Google analyzes your incoming links for authority, relevance, and context, it seems logical that outgoing links should be both relevant and authoritative.
This would likely refer to the Hilltop algorithm, simply the reverse of the way that is generally accepted for incoming links.
Source(s): Moz
On-Page SEO Factor #31: Good spelling and grammar
This is a Bing ranking factor. Amit Singhal explained: "These are the types of questions we ask" regarding spelling/grammar in Google's definition of quality content.
Matt Cutts said "no" in 2011, but rankings correlate with it anyway.
Our agency determined that the first Panda update had led to this playing a major role.
Nevertheless, many content-related factors are clearly influenced by spelling/grammar.
Source(s): Amit Singhal
OnPage SEO Factor #32: Reading Level
We know that Google analyzes the reading level of content because they created such a search filter for the results page (now removed).
We also know that content mills, which Google does not particularly like, tend to score as very easy to read, while academic writing scores as very advanced.
What we don't yet have is a concrete source or study that directly relates reading levels to rankings.
Source(s): Correlation Study, speculation
On-Page SEO Factor #33: Lots of Content
Abundant content matters not only for inline image and video search, but is also considered part of "high-quality, unique content".
With Panda 2.5, video seemed to be the decisive factor.
Northcutt's work has also shown a positive correlation.
However, there is currently no official, public source that identifies this factor.
Source(s): SEL on Panda 2.5
On-Page SEO Factor #34: Subdirectories
Categorical information architecture has long been a topic of discussion in SEO, as it seems Google analyzes the topic coverage for entire websites.
The exact ranking impact is unclear, but Google now points to structured data and will at least display breadcrumbs on the results page, thereby ranking more of your pages.
Source(s): Google Developers
OnPage SEO factor #35: Meta keywords
Some SEOs claim that the meta keywords tag was never important for SEO.
That is a myth. The idea that Google still uses meta keywords for ranking in 2015 is also a myth.
Both facts were confirmed in the same way: by placing a made-up word with no competition in a meta keywords tag, getting that page indexed, and searching for that word.
However, note that Google is not the only search engine, and countless other search engines and site search systems could theoretically still make use of this tag.
Source(s): Matt Cutts, Experiment Page
On-Page SEO Factor #36: Mobile Friendliness
Mobile-friendly websites receive a significant ranking advantage.
At the moment, the effects of this ranking appear to be limited to users searching on mobile devices.
This found its way into mainstream SEO conversation and became more stringent during the Mobilegeddon update in 2015, although experts had been speculating about this topic for almost a decade.
Source(s): Various Studies
On-Page SEO Factor #37: Meta Description
A good meta description serves as a search ad.
Considering how many AdWords agencies rely almost entirely on A/B testing of ad copy, the marketing value here cannot be overstated.
Although keywords in meta descriptions were once widely regarded as a direct ranking factor, Matt Cutts stated in 2009 that this is not the case.
Source(s): Matt Cutts
OnPage SEO factor #38: Google Analytics
Many have suggested that Google Analytics is, or could become, a Google ranking factor.
All current evidence, as well as very clear statements from Matt Cutts, indicates that any ranking benefit from Google Analytics, now or in the future, is an absolute myth.
However, it is an amazingly powerful tool in the hands of the right marketer.
Source(s): Matt Cutts
OnPage SEO Factor #39: Google Webmaster Tools
As with Google Analytics, there are no confirmed ranking advantages to using Google Webmaster Tools.
Webmaster Tools is still useful for diagnosing problems with other ranking factors on this page, especially manual penalties and certain crawl errors.
Source(s): Speculation
OnPage SEO factor #40: ccTLD in the national ranking
It is assumed that TLDs for country codes such as .ch and .de bring a ranking bonus for searches from the same country, which is particularly useful for internationalization.
They should perform far better compared to a ccTLD from another country.
Source(s): Speculation
OnPage SEO Factor #41: XML Sitemaps
Sitemaps can be useful, but they are not required to get more pages of your website included in the Google index.
The idea that an XML sitemap will improve Google's ranking is a myth.
This comes directly from Google and is confirmed by various studies.
Source(s): Susan Moskwa & Trevor Foucher
On-Page SEO Factor #42: The Analysis of Words and Phrases
Over time, Google appears to be doing more to analyze ideas and logical entities rather than the literal words and phrases on a page.
It analyzes what we mean rather than matching exact search queries against the page.
This process, put simply, makes it possible to search for "how to cook meat" and receive results for steak recipes in which the word "meat" may never be mentioned directly.
Source(s): Dan Gillick & Dave Orr
On-Page SEO Factor #43: Phrasing and Context
Since keyword density is practically no longer a factor, a basic understanding of sentence-based indexing will show that if you write thoroughly and in detail about content, you have a far better chance of ranking than if you write generic content with random keywords.
A clear part of a Google patent describes this as "identifying related phrases and clusters of related phrases".
Source(s): Patent US 7536408 B2
On-Page SEO Factor #44: Web servers located near users
Google works differently for many local queries, supplementing traditional results with results from Google Maps and potentially modified organic listings.
The same applies to national and international research.
If you host your website at least loosely near your users, for example within the same country, you will likely get better rankings.
Source(s): Matt Cutts
On-Page SEO Factor #45: Use of rel="canonical"
The tag rel="canonical" suggests the ideal URL for a page.
This can prevent downgrades and penalties for duplicate content if multiple URLs can lead to the same content.
Our experience has shown that this is only a suggestion from Google and is often ignored.
According to Google, it doesn't directly improve the ranking. Nevertheless, it's a very good idea.
Source(s): Google
OnPage SEO Factor #46: Use of rel="author"
The use of rel="author" was once a widespread piece of SEO advice and was regarded as a positive ranking factor, but Google dropped it along with the entire practice known as authorship.
The idea that rel="author" is useful for any reason is now considered a myth.
Source(s): John Mueller
OnPage SEO Factor #47: Use of rel="publisher"
Just like rel="author", the use of rel="publisher" was once a widespread SEO tip and was also accepted as a positive ranking factor.
Just like with rel="author", Google's use of rel="publisher" went away along with an entire practice known as authorship.
Source(s): John Mueller
OnPage SEO Factor #48: URL uses "www" subdomain
A widespread misconception, propagated by SEO bloggers, suggests that a website may rank better if its URLs begin with "www".
This stems from the common practice of resolving all pages of a website under "www".
The real reason for doing that is simply to avoid two URLs serving the same content, which would create a duplicate-content problem, a negative ranking factor.
Source(s): Speculation
On-Page SEO Factor #49: Dedicated IP Address
Web server IP addresses can be helpful for geographic targeting of specific demographics.
A shared IP address can be a negative ranking factor if it sits in the middle of a significant webspam operation, and IP addresses are used by the Hilltop algorithm to determine whether two websites have different owners.
The notion that a dedicated IP address by itself offers a direct ranking advantage has been repeatedly debunked.
Source(s): Matt Cutts
On-Page SEO Factor #50: Subdomain Usage
Subdomains (name.yourwebsite.com) are often treated as separate websites by Google, compared to subfolders (yourwebsite.com/name/), which are not.
This has obvious implications for many other factors on this page.
Matt Cutts called subfolders and subdomains "roughly equivalent" in 2012, confirming that this separation happens less often now, but still does happen.
Panda recovery reports from after 2012, such as HubPages' migration from subfolders to subdomains, prove that this can still be an important factor.
Source(s): Matt Cutts
On-Page SEO Factor #51: Number of Subdomains
The number of subdomains on a domain seems to be the most important factor in determining whether subdomains are treated as separate websites (as is the nature of free web hosting services and hybrid hosting/social sites like HubPages) or only as parts of one common website.
Presumably, once thousands of subdomains are involved, they are unlikely to all belong to a single thematic website, and each is probably independent.
Source(s): Speculation
On-Page SEO Factor #52: Keywords in HTML Comments
This is an early SEO theory that is very easily debunked through a ten-second experiment and a little patience.
In the example given, we insert an extremely uncontested word into our source code and then link it prominently so that it is indexed.
If that word then appears in the search results, we have proof that Google associates the page with it. In this case, it does not.
Source(s): Experiment Page
OnPage SEO Factor #53: Keywords in CSS / JS comments
Another variation of an early SEO theory, which is very easily exposed by a ten-second experiment and a little patience.
In the example given, we insert an extremely uncontested word into our source code and then link it prominently so that it is indexed.
If that word then appears in the search results, we have proof that Google associates the page with it. In this case, it does not.
Source(s): Experiment Page
On-Page SEO Factor #54: Keywords in CLASSES, NAMES, and IDs
Once again, we can debunk theories about whether words in an odd position influence search engines by placing a non-competitive sentence there and waiting.
There is no need to ponder what Google tells us or what is written in a patent.
And here too we can confirm that this factor is a myth, at least at the time of writing.
Source(s): Experiment Page
OnPage SEO Factor #55: Privacy Policy Usage
In 2012, a single account published on WebmasterWorld sparked a wider discussion: does having a privacy policy influence rankings?
For what it is worth, 30% of Search Engine Roundtable readers agreed that it fits well with the philosophies Google has stated.
This is still very theoretical.
Source(s): SER Discussion
On-Page SEO Factor #56: Verifiable Address
A physical address is theorized as a legitimizing feature in standard search rankings.
This is loosely supported by the notion that Google evaluates citations for local SEO (also known as Google Maps SEO) as joint mentions of name, address, and phone number (sometimes abbreviated "NAP"). Google's quality raters are also instructed to look for "highly satisfactory contact information".
Source(s): Search Engine Land
On-Page SEO Factor #57: Verifiable Phone Number
A phone number is theorized as a legitimizing feature in standard search rankings.
This is loosely supported by the notion that Google evaluates citations for local SEO (also known as Google Maps SEO) as joint mentions of name, address, and phone number (sometimes abbreviated "NAP").
Google's quality raters are also instructed to look for "very satisfactory contact information".
Source(s): Search Engine Land
On-Page SEO Factor #58: Accessible Contact Page
Theorized as a sign of legitimacy. It appears that this may originate from, or at least be best supported by, a document called the Google Quality Rater Guidelines.
In this document, Google asks quality control auditors to look for "very satisfactory contact information".
Source(s): Search Engine Land
On-Page SEO Factor #59: Low Code-to-Content Ratio
This SEO theory seemed to gain traction in 2011, suggesting that more content and less code is beneficial. Here's what we know:
- Speed is a confirmed factor.
- Google's own PageSpeed Insights tool even suggests a reduction in payload of 5 KB.
- Certain subtle coding errors can cause demotions and penalties. At the very least (and more likely), this is an indirect correlation.
Source(s): SitePoint post, SEOChat tool
OnPage SEO Factor #60: Meta Sources Tag
The meta source tag was created in 2010 for Google News to better attribute sources.
There are two forms:
- Syndication source (when you are syndicating a third party's content), and
- Original source (when you are the source).
In situations where content is syndicated, this can theoretically help avoid duplicate-content penalties.
If you are the original source, this tag is superseded by rel="canonical" anyway.
Source(s): Eric Weigle
On-Page SEO Factor #61: More Content per Page
SerpIQ conducted an interesting correlation study comparing content length with top rankings, which significantly favored content of 2,000 to 2,500 words.
It is unclear whether this is an indirect function of other factors, e.g., that these pages are more favored and therefore attract more links/shares, or that they become increasingly popular through ranking for more and longer variations of the search query.
Source(s): SerpIQ
OnPage SEO Factor #62: Meta Geo Tag
Unlike IP addresses and ccTLDs, Matt Cutts states that Google "hardly looks at this tag", although he suggested considering it if you are on a gTLD site (e.g., ".com") and trying to target a single country.
Although it turns out to be almost useless, it has been suggested that Google at least reads it, making it a very, very weak internationalization signal.
Source(s): Matt Cutts
OnPage SEO Factor #63: Display keywords earlier in the title
More than a decade of studies and correlation analyses suggest that titles that usually begin with a keyword (but not always) rank better than titles that end with a keyword.
It is easy to test and usually confirms itself: keywords placed at the beginning of the title rank better.
But our chosen source suggests otherwise.
Thumbtack.com conducted a study in which the order of the title words changed traffic by 20% to 30%.
Their best-performing titles did not begin with a keyword, but were modified (as Google sometimes does) to suit how they appear in Google's results.
Source(s): Thumbtack Study
OnPage SEO Factor #64: Display keywords earlier in headings
Headings are another place where the order of words really matters.
Here too, the "first third" rule is often invoked, which suggests that words appearing earlier carry more weight.
Source(s): Speculation
On-Page SEO Factor #65: Number of Comments
We know from countless sources, and even from certain Webmaster Tools messages, that Google can separate and analyze user-generated content differently.
One theory suggests that Google may be looking at large amounts of comments on content to assess content quality.
Currently, however, there is no clear evidence for this factor beyond anecdotal "when I Google it" observations.
Speculatively, it would also be one of the easiest factors to game.
Source(s): Speculation
OnPage SEO Factor #66: Positive Feeling in the Comments
Theoretically, Google considers the opinions expressed in blog comments to determine the quality of the content.
There is a patent and confirmation from Google that they evaluate the sentiment expressed in product reviews for an entire website.
According to Amit Singhal, however, they cannot apply this to content because "if we demote websites that contain negative comments against them, you may not be able to find information about many elected officials."
Source(s): Amit Singhal, Patent US 7987188 B2
OnPage SEO Factor #67: Use of rel="hreflang"
There is no clear evidence (that we know of) that the hreflang attribute itself directly improves rankings.
However, it appears advantageous to send clear signals about the different regional and language variations of a page.
Often, several such signals together are beneficial.
Source(s): Google
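In practice, each regional or language variant declares all the others (and itself) with alternate link tags. A minimal generator, with a hypothetical locale-to-URL mapping purely for illustration:

```python
def hreflang_links(url_by_locale: dict[str, str]) -> str:
    """Render the <link rel="alternate" hreflang=...> tags for a set of variants."""
    return "\n".join(
        f'<link rel="alternate" hreflang="{locale}" href="{url}" />'
        for locale, url in url_by_locale.items()
    )

# Hypothetical site with German and French Swiss variants plus a fallback.
variants = {
    "de-ch": "https://www.example.ch/de/",
    "fr-ch": "https://www.example.ch/fr/",
    "x-default": "https://www.example.ch/",
}
print(hreflang_links(variants))
```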
Negative on-page factors
Negative ranking factors are factors that, when present, can damage your existing rankings.
These factors can be divided into three categories: Accessibility, devaluation, and penalties.
Accessibility issues are obstacles for Googlebot that can prevent your website from being properly crawled or analyzed.
A downgrade is an indicator of a lower quality website and can prevent your website from progressing further.
A penalty is far more serious and can have a devastating impact on your long-term performance at Google.
On-page factors, in turn, are those factors that are under your direct control as part of the direct management of your website.
Negative On-Page Factor #1: Keyword Density
Keyword stuffing penalties arise from the abuse of a once extremely effective tactic: pushing keyword density to excessive levels.
Our own experiments have shown that penalties can occur even at a density of 6%, although TF-IDF (discussed earlier) is likely involved, and the threshold is sensitive to topics, word types, and context.
Source(s): Matt Cutts, Remix
Negative On-Page Factor #2: Keyword Dilution
This factor follows from simple logic: if a higher keyword density or TF-IDF score is positive up to a point, then spreading content across too many terms lowers the frequency/density of each.
Since Google has improved its understanding of natural language, this can also be described as: writing content that wanders without a clear theme.
The same basic concept is at play in both cases.
Source(s): Matt Cutts
Negative On-Page Factor #3: Keyword-dense title tag
Aside from the entire page, keyword stuffing penalties appear to be possible within the title tag.
An ideal title tag should definitely contain fewer than 60-70 characters and hopefully still offer enough value to appear as a good search ad in Google results.
At an absolute minimum, using the same keyword five times in the same title tag offers no advantage.
Source(s): Matt Cutts
Negative On-Page Factor #4: Keyword-dense headings
Headings such as H1, H2, H3, etc. can give additional weight to certain words.
Those who try to abuse this positive ranking factor by cramming as many keywords as possible into these tags will find that it does not work, even if the tags themselves are no longer than usual.
Keyword stuffing penalties appear to be possible simply as a function of the total space within these tags.
Source(s): Matt Cutts
Negative OnPage Factor #5: Overuse of headings
If you want a concrete answer as to whether a given SEO penalty exists, try pushing a positive ranking factor far beyond any reasonable value.
One easily verifiable penalty involves placing your entire website within an H1 tag. Too lazy to test it yourself?
Matt Cutts gives a not-so-subtle hint in this source about too much text in an H1 tag.
Source(s): Matt Cutts
Negative On-Page Factor #6: URL Keyword Repetition
There appear to be no penalties for using a word multiple times in a URL.
However, the added value from repeating keywords in a URL also appears to be essentially zero.
This can be easily verified by inserting a word five times instead of just once into a URL.
Source(s): Speculation
Negative On-Page Factor #7: Excessively long URLs
Matt Cutts notes that after about five words, the added value behind words in a URL diminishes.
It is theoretically plausible, and fairly reproducible, that the same applies at Google, although this is not directly confirmed.
Although it behaves somewhat differently, Bing has gone so far as to confirm that keyword-stuffed URLs are penalized in its search engine.
Source(s): Matt Cutts
Negative On-Page Factor #8: Keyword-dense ALT tags
Since the ALT tag text is generally not directly visible on the page, ALT tag keyword stuffing has frequently been abused.
Using a few descriptive words is good and actually ideal, but doing more than that can lead to penalties.
Source(s): Matt Cutts
Negative OnPage Factor #9: Long internal link anchors
At the very least, a really long internal anchor text brings no additional value; at worst, it invites a devaluation.
Under extreme circumstances, it appears possible to draw keyword stuffing webspam penalties from an excessively long anchor text.
Source(s): Speculation
Negative OnPage Factor #10: High Link-to-Text Ratio
It is a reasonable theory that a page consisting almost entirely of links, with no substance of its own, is the hallmark of a low-quality website.
This aligns with Google's broader stance on content quality and against ranking pages that look too much like search engine results pages, but it is not currently supported by a published study.
Source(s): Speculation
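To estimate how link-heavy a page is, you can compare the characters of visible text inside anchors to the total. A standard-library Python sketch (the idea of judging a page by this ratio is ours; any cutoff you pick is a judgment call, not a known Google value):

```python
from html.parser import HTMLParser

class _LinkTextCounter(HTMLParser):
    """Counts visible text characters inside and outside <a> tags."""
    def __init__(self):
        super().__init__()
        self.depth = 0        # nesting level of open <a> tags
        self.link_chars = 0
        self.total_chars = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        n = len(data.strip())
        self.total_chars += n
        if self.depth:
            self.link_chars += n

def link_to_text_ratio(html):
    """Fraction of visible text that sits inside links (0.0 to 1.0)."""
    counter = _LinkTextCounter()
    counter.feed(html)
    return counter.link_chars / counter.total_chars if counter.total_chars else 0.0
```

A page returning a ratio near 1.0 is almost all links and resembles the low-substance page described above.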
Negative OnPage Factor #11: Writing too much "list style"
Matt Cutts has suggested that any writing style that simply lists a large number of keywords will also fit the description of keyword stuffing.
Example: Listing too many things, words, phrases, ideas, terms, keywords, etc., is not a natural form of writing. Too much of it leads to negative evaluations and possibly penalties.
Source(s): Matt Cutts
Negative On-Page Factor #12: JavaScript-hidden content
Google recommends against inserting text into JavaScript because it cannot be read by search engines.
However, this does not mean that Google does not crawl JavaScript.
In extreme cases, where JavaScript is used to show search engines different text than users see on the page, it may still be possible to receive a penalty for deception.
Source(s): Google
Negative On-Page Factor #13: CSS-hidden content
This is one of the first and best-documented on-page SEO penalties: intentionally hiding text or links from users, especially to load the page with keywords intended only for Google, can result in a severe penalty.
A certain degree of flexibility is allowed under legitimate circumstances, e.g., when using tabs or tooltips.
Source(s): Google
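For illustration, this is the classic pattern that draws the penalty described above, contrasted with a generally tolerated tab pattern (the markup is our own, hypothetical):

```html
<!-- Penalty bait: keyword text no user can ever see -->
<div style="display:none">cheap widgets best widgets buy widgets now</div>

<!-- Generally tolerated: content hidden until the user opens a tab -->
<div class="tab-panel" hidden>Shipping details shown on click</div>
```

The difference is intent: the second block becomes visible through normal interaction, while the first exists only for crawlers.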
Negative OnPage Factor #14: Foreground matches background
Another common problem that leads to penalties occurs when the foreground color of certain content matches the background color.
Google may use its page layout algorithm to render a page as a user would actually see it and thus prevent false positives.
In our experience, this can still happen accidentally in a handful of scenarios.
Source(s): Google
Negative OnPage Factor #15: Single-pixel image links
Once a common webspam tactic for disguising hidden links, Google has made clear that it treats "really small links" as hidden links.
This can be done with a 1 × 1 pixel image or simply very small text.
If you try to deceive Google using these methods, there's a good chance Google will eventually catch you.
Source(s): Google
Negative OnPage Factor #16: Empty Link Anchors
Hidden links, which are often implemented differently than hidden text, for example through empty anchor text, can likewise be treated as deception.
This is a dangerous area and another widespread webspam tactic, so check your code.
Source(s): Google
Negative OnPage Factor #17: Copyright Infringement
Publishing content in a manner that violates the Digital Millennium Copyright Act (DMCA) or similar codes outside the USA can result in severe penalties.
Google attempts to automatically analyze unattributed sources and unlicensed content.
However, users can go so far as to report potential violations in order to enable manual action to be taken.
Source(s): Google
Negative OnPage Factor #18: Doorway Pages
Doorway pages (also called gateway pages) are masses of pages created to serve as landing pages for search engines while offering no value to the user.
An example would be creating a near-identical product page for every city name in America, a practice called spamdexing: spamming Google's index with pages.
Source(s): Google
Negative OnPage Factor #19: Excessive use of bold, italics, or other emphasis
If you place all the text on your website within bold tags, you haven't cracked a code that makes the entire site rank, even though emphasized text often receives additional weight compared to the rest of the page.
This type of activity fits Google's frequently broad description of "spammy activity", and we have verified such penalties in our own non-public studies for clients.
Source(s): Matt Cutts
Negative OnPage Factor #20: Broken Internal Links
Broken internal links make it more difficult for a search engine to index and for users to navigate.
It's a telltale sign of a bad website.
Make sure your internal links are never broken.
Source(s): Patent US 20080097977 A1, Google via SEL
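A broken-link audit starts by collecting every internal URL from a page; you can then request each one and flag non-200 responses. A standard-library Python sketch of the collection step (example.com is a placeholder host):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class _LinkCollector(HTMLParser):
    """Gathers href values from all <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def internal_links(html, base_url):
    """Absolute URLs of links pointing at the same host as base_url."""
    host = urlparse(base_url).netloc
    collector = _LinkCollector()
    collector.feed(html)
    return [
        urljoin(base_url, href)
        for href in collector.hrefs
        if urlparse(urljoin(base_url, href)).netloc == host
    ]
```

Each returned URL would then be fetched (e.g. with urllib.request) and anything outside the 2xx/3xx range reported as a broken internal link.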
Negative OnPage Factor #21: Redirected Internal Links
The PageRank algorithm suffers the usual degradation when navigating through redirects.
This is a simple trap to fall into, especially when considering links to "www" or "non-www" parts of a page, or addresses with/without a trailing slash.
Source(s): Patent US 6285999 B1, Matt Cutts via SER
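One way to avoid the www/non-www and trailing-slash traps is a single server-level 301 to the canonical host. A sketch assuming Apache with mod_rewrite enabled (example.com is a placeholder; adapt to your host):

```apache
# Redirect the bare domain to the canonical www host in one hop,
# so internal links never pass PageRank through a redirect chain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

After adding a rule like this, update your internal links to point directly at the canonical URLs so that no redirect hop is needed at all.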
Negative OnPage Factor #22: Text in Images
Google has come a long way in image analysis.
Overall, however, it is highly unlikely that text you present in rich media will be searchable in Google.
There is no direct devaluation or punishment for inserting text into an image.
It simply prevents your website from having a chance to rank for those words.
Source(s): Matt Cutts
Negative OnPage Factor #23: Text in Video
Just like with images, Google cannot reliably access the words you use in videos.
When publishing videos, it is to your advantage to always publish a transcript so that the content of your video is fully searchable.
This applies regardless of the rich media format, including HTML5, Flash, Silverlight and others.
Source(s): Matt Cutts
Negative OnPage Factor #24: Frames / Iframes
In the past, search engines were unable to fully crawl content within frames.
Although they have overcome this weakness to some extent, frames are still a stumbling block for search engine crawlers.
Google attempts to associate framed content with a single page.
However, there is no guarantee that they will be processed correctly.
Source(s): Google
Negative OnPage Factor #25: Dynamic Content
Dynamic content can present a number of challenges for search engine crawlers, which they need to understand and assess.
It is assumed that applying noindex to such content, and minimizing how much of it is accessible to Google, leads to a more positive overall user experience and is likely to receive preferential treatment in rankings.
Source(s): Matt Cutts
Negative OnPage Factor #26: Thin Content
Although it has always been better to write detailed content that thoroughly covers a topic, the introduction of Navneet Panda's "Panda" algorithm created a situation where content offering essentially no unique value is severely penalized in Google.
An industry-recognized case study on the user profile pages of Dani Horowitz's forum site DaniWeb is an excellent example of the most fundamental effects of Panda.
Source(s): DaniWeb Study
Negative OnPage Factor #27: Thin content across the entire domain
Google has long strived to understand the quality and unique value of your content.
With the introduction of the Panda algorithm, this became a problem that was evaluated domain-wide rather than page-by-page.
Therefore, it is now usually advantageous to improve the average quality of the content search engines see, while applying 'noindex' to pages that tend to be repetitive and uninteresting, such as blog tag pages and forum user profiles.
Source(s): Google
Negative OnPage Factor #28: Too many ads
Pages with too many ads, especially those outside the usual scope, lead to a poor user experience and are treated as such.
Google appears to be basing this on an actual screenshot of the page.
This is a function of the page layout algorithm, also known as Top Heavy Update.
Source(s): Google
Negative OnPage Factor #29: Use of pop-ups
Matt Cutts of Google answered this question with a no in 2010, but John Mueller of Google answered with a yes in 2014.
The safer assumption today is that pop-ups hurt your search ranking.
Source(s): Matt Cutts, John Mueller
Negative OnPage Factor #30: Duplicate Content (Third-Party)
Duplicate content displayed on another website can lead to a significant devaluation, even if it does not violate copyright guidelines and a source is properly cited.
This aligns with an ongoing theme: content that is truly unique and special in the context of the entire web will achieve better results.
Negative OnPage Factor #31: Duplicate Content (internal)
Similar to content copied from another source, any content snippet that is duplicated within a page or even the entire page can suffer a loss of value.
This is a very common problem and can range from too many indexed tag pages to WWW versus non-WWW versions of web pages, to variables appended to URLs.
Source(s): Google
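A common remedy for internal duplicates is to declare one canonical URL in the head of every variant (the URL is a placeholder):

```html
<!-- All duplicate variants (www/non-www, URL parameters, etc.)
     consolidate their signals onto this one URL -->
<link rel="canonical" href="https://www.example.com/page/">
```

This does not remove the duplicates, but it tells Google which version should rank and receive the consolidated value.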
Negative OnPage Factor #32: Linking to a penalized page
This was introduced as the "Bad Neighborhood" algorithm.
To quote Matt Cutts: "Google trusts websites less if they link to spam websites or bad neighbors."
It's that simple. Google has suggested using the rel="nofollow" attribute when you need to link to such a site.
To quote Matt again: "If you use nofollow, you will be separated from this neighborhood."
Source(s): MC: Bad Neighbors, MC: Nofollow
Negative On-Page Factor #33: Slow Websites
Slow websites are not rated as highly as fast ones.
This factor is implemented taking the target group into account.
Therefore, consider the geography, devices, and connection speed of your target audience.
Google has repeatedly suggested "under two seconds" and says they are aiming for under 500 ms.
Source(s): Google
Negative OnPage Factor #34: Page NoIndex
If a page contains the "robots" meta tag with the value "noindex", it will never be indexed by Google.
If that tag lands on a page unintentionally, it's a bad thing.
Removing pages that are not beneficial to Google users can also be a good thing, as it can improve the average experience for visitors coming from Google.
Source(s): Logic
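The tag in question looks like this; placed in the head of a page, it keeps that page out of Google's index entirely:

```html
<meta name="robots" content="noindex">
```

It is worth auditing your templates for it before launch: an accidental site-wide noindex is one of the most destructive mistakes available.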
Negative OnPage Factor #35: Internal NoFollow
This can occur in two ways: if a page contains the "robots" meta tag with the value "nofollow", it is equivalent to adding the rel="nofollow" attribute to every link on the page.
Alternatively, the attribute can be added to individual links.
In either case, it means "I don't trust this", "don't crawl any further" and "don't give this PageRank".
Source(s): Matt Cutts
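The two forms look like this (the URL is a placeholder):

```html
<!-- Page-level: every link on this page is treated as nofollow -->
<meta name="robots" content="nofollow">

<!-- Link-level: only this one link passes no trust or PageRank -->
<a href="https://example.com/untrusted/" rel="nofollow">untrusted page</a>
```

For internal links, neither form is normally wanted; reserve rel="nofollow" for external links you cannot vouch for.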
Negative OnPage Factor #36: Block robots
If the robots.txt file in your site's root directory contains a "User-agent" line matching "*" or "Googlebot" followed by "Disallow: /", your site will not be crawled.
This alone does not remove your website from the index.
However, it prevents updates with new content and blocks positive ranking factors related to age and freshness.
Source(s): Google
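For reference, this is the directive that blocks all crawling by Googlebot:

```
# Harmful: blocks Googlebot from the entire site
User-agent: Googlebot
Disallow: /
```

Replacing "Disallow: /" with a specific path, for example "Disallow: /search-results/", limits the block to that section, which is usually what was actually intended.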
Negative On-Page Factor #37: Poor Domain Reputation
Domain names tend to gain a good reputation with Google over time.
Even if a domain changes ownership and you now operate a completely different website, you may still face webspam penalties caused by the bad behavior of previous owners.
Source(s): Matt Cutts
Negative OnPage Factor #38: IP address, bad neighborhood
While Matt Cutts has sought to debunk the long-standing practice of "SEO web hosting" on dedicated IP addresses as providing no real benefit, this does not contradict the fact that Google has, in rare cases, penalized entire server IP ranges.
Source(s): Matt Cutts
Negative OnPage Factor #39: Meta or JS Redirects
A classic SEO penalty that is no longer very common: Google recommends against using timed meta refreshes and/or JavaScript redirects.
These confuse users, increase bounce rates, and are problematic for the same reasons as cloaking.
Instead, use a 301 redirect (if permanent) or a 302 redirect (if temporary) at the server level.
Source(s): Google
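For contrast, this is the pattern to avoid (the URL is a placeholder):

```html
<!-- Avoid: timed meta refresh acting as a redirect -->
<meta http-equiv="refresh" content="5; url=https://www.example.com/new-page/">
```

On Apache, for example, the server-level equivalent is a single line: `Redirect 301 /old-page/ https://www.example.com/new-page/`.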
Negative OnPage Factor #40: Text in JavaScript
While Google continues to improve its crawling of JavaScript, there is still the possibility that Google may have problems crawling content printed with JavaScript, and concerns that Googlebot may not fully understand the context of when and for whom it is printed.
Even if printing text with JavaScript carries no direct disadvantage, it is an unnecessary risk and therefore a negative factor.
Source(s): Matt Cutts
Negative OnPage Factor #41: Poor Uptime
Google cannot (re)index your website if it is unreachable.
Logic would also dictate that an unreliable website leads to a poor user experience on Google.
It is unlikely that a single outage will destroy your rankings, but it is important to achieve reasonable uptime.
A day or two of downtime should be fine; anything more than that will cause problems.
Source(s): Matt Cutts
Negative OnPage Factor #42: Private Whois
While it is often pointed out that Google cannot always access Whois data from every registrar, Matt Cutts clarified at PubCon 2006 that this data is still being reviewed and that private Whois in combination with other negative signals can lead to a penalty.
Source(s): Matt Cutts
Negative OnPage Factor #43: Incorrect Whois
Similar to the situation with private whois data, it was made clear that representatives from Google are aware of this common trick and treat it as a problem.
If for no other reason than that it violates ICANN guidelines, and could allow a domain hijacker to take your domain in a legal dispute without giving you any say in the matter, do not use false information to register a domain.
Source(s): Matt Cutts
Negative OnPage Factor #44: Penalized Registrant
If you agree with the view that private and false Whois records are bad, and consider that Matt Cutts has discussed this as a signal for identifying webspam, it is reasonable to assume that a domain owner can be flagged and penalized on numerous websites.
This is unconfirmed and purely speculative.
Source(s): Speculation
Negative OnPage Factor #45: ccTLD in global ranking
ccTLDs are country-specific domain suffixes like .uk and .ca. They are the opposite of global gTLDs. These are helpful for international SEO but can also be problematic when trying to rank outside of those countries.
One exception to this rule is that a small number of ccTLDs are frequently used for other purposes, such as .co, and have been labeled by Google as "gccTLDs".
Source(s): Google
Negative OnPage Factor #46: Too many internal links
Matt Cutts once stated that there was a fixed limit of 100 links per page, which was later retracted to say, "keep it at a reasonable number".
This was because Google once downloaded no more than 100 KB of a single page.
That's no longer true, but since every link divides the PageRank you distribute, the guideline remains useful even though Google's crawling has changed.
Source(s): Matt Cutts (video)
Negative OnPage Factor #47: Too many external links
As a simple function of the PageRank algorithm, it is possible to drain PageRank away from your domain.
However, note that the negative factor here is "too many" external links.
Linking to a reasonable number of external websites is a positive ranking factor, as confirmed by Mr. Cutts in the same source article on this factor.
Source(s): Matt Cutts
Negative OnPage Factor #48: Invalid HTML / CSS
Matt Cutts said no to this. Nevertheless, our experience has consistently shown yes.
Code probably doesn't need to be perfect, and this can have an indirect effect.
However, the negative effects of bad code are supported by logic when you consider other code-related factors.
Faulty code can cause countless, possibly invisible problems, including the use of tags, page layout, and cloaking.
Source(s): Matt Cutts
Negative On-Page Factor #49: Outgoing Affiliate Links
In the past, Google has taken a hard line against affiliate websites that offer no added value.
It's in the guidelines.
There is a lot of SEO paranoia about hiding affiliate links behind a 301 redirect in a directory blocked by robots.txt, even though Google can still see the HTTP headers without crawling the target.
Several affiliate marketers have published sound case studies on penalties for using too many affiliate links, so we consider this factor likely.
Source(s): Affiliate Marketer's Study
Negative OnPage Factor #50: Parked Domain
A parked domain is a domain that does not yet have a real website on it.
Often they sit unused at a domain registrar, displaying nothing but machine-generated advertising.
Such a page also meets so few other ranking criteria that it probably wouldn't be very successful on Google anyway.
Parked domains used to rank occasionally, but Google has repeatedly made it clear that they should not be ranked at all.
Source(s): Google
Negative On-Page Factor #51: Search Engine Results Pages (SERPs)
In general, Google wants users to land on content, not on other pages that look like listings of potential content, such as the Search Engine Results Page (SERP) from which such a user just came.
If a page is too similar to a search results page because it consists only of a series of further links, it is likely that it will not rank very well.
This may also apply to blog tag/category pages that are little more than lists of links.
Source(s): Matt Cutts
Negative OnPage Factor #52: Automatically Generated Content
Machine-generated content based on user search queries is "absolutely penalized" by Google and is considered a violation of Google's Webmaster Guidelines.
A number of methods that qualify as such are listed in the guidelines.
Machine-generated meta tags appear to be an exception to this rule.
Source(s): Matt Cutts, Webmaster Guidelines
Negative OnPage Factor #53: Too many footer links
It was made clear that links in the footer of a website do not carry the same weight as links in an editorial context.
It is also true that when Google first spoke about taking action against paid links, the practice of spamming website footers with dozens of paid external links was widespread, and therefore too many external footer links can draw this type of penalty.
Source(s): Matt Cutts
Negative OnPage Factor #54: Infected Site
Many website owners would be surprised to learn that most compromised web servers show no visible signs of having been hacked.
Often, the attacker even goes so far as to close your security gaps to protect his newly acquired property without you ever noticing.
This manifests itself in malicious activities carried out in your name, such as the distribution of viruses/malware and other exploits, which Google takes very seriously.
Source(s): Webmaster Guidelines
Negative OnPage Factor #55: Phishing Activity
If Google has any reason to mistake your website for a phishing scheme (e.g., one that replicates another site's login page to steal information), prepare for a world of hurt.
Google usually only uses a general description of "illegal activities" and "things that could harm our users".
However, in this interview, Matt explicitly mentions the anti-phishing filter.
Source(s): Matt Cutts
Negative OnPage Factor #56: Outdated Content
There is a Google patent for outdated content, which can be identified in various ways.
One such method for defining outdated content is essentially just to be old.
It is unclear whether this factor affects the ranking for all search queries or only when a specific search query is associated with something Google calls Query Deserves Freshness (QDF).
Source(s): Patent US 20080097977 A1
Negative OnPage Factor #57: Orphaned Pages
Orphaned pages, i.e., pages of your website that are difficult or impossible to find using your internal linking architecture, can be treated as doorway pages and act as a webspam signal.
At the very least, such pages probably do not benefit from the internal PageRank and therefore have far less authority.
Source(s): Google Webmaster Central
Negative OnPage Factor #58: Sexually Explicit Content
While Google does index and return X-rated content, it is filtered out when the "SafeSearch" feature is enabled, which is Google's default state.
It is therefore important to consider that unmoderated user-generated content, or a one-off page that accidentally crosses a certain line, will be blocked by the SafeSearch filter.
Source(s): Google SafeSearch
Negative OnPage Factor #59: Selling Links
Matt Cutts presents a case study in which a domain's toolbar PageRank dropped from seven to three as a direct result of outgoing paid links.
Selling links that pass PageRank is a violation of Google's Webmaster Guidelines and can result in penalties for both the on-page and off-page aspects of a website.
Source(s): Matt Cutts
Negative OnPage Factor #60: Number of Subdomains
The number of subdomains on a domain seems to be the most important factor in determining whether those subdomains are treated as separate sites.
The use of an extremely large number of subdomains can theoretically lead to Google treating one website as many websites, or many websites as one website.
Source(s): Speculation
Negative OnPage Factor #61: HTTP Status Code 4XX / 5XX on page
If your web server returns pretty much anything other than status code 200 (OK) or a 301/302 redirect, the corresponding content could not be served.
Please note that this can happen even if you can view the intended content in your browser.
In cases where content is actually missing, Google has clarified that a 404 error is acceptable and actually expected.
Source(s): Speculation
Negative OnPage Factor #62: Domain-wide Error Page Rate
The possibility for users to land on pages that return 4XX and 5XX HTTP errors is probably a sure sign of a website of overall low quality.
We suspect this is a problem, in addition to pages that are not indexed due to such an HTTP header and pages that contain broken outgoing links.
Source(s): Speculation
Negative OnPage Factor #63: Code error on page
If a page is full of errors generated by PHP, Java, or another server-side language, it likely meets Google's definitions of a poor user experience and a low-quality website.
In any case, error messages in the page text are likely to negatively affect Google's overall analysis of the text on the page.
Source(s): Speculation
Negative OnPage Factor #64: Soft Error Pages
Google has repeatedly advised against using "soft 404" pages or other soft error pages.
These are essentially error pages that continue to return the HTTP code 200 in the document headers.
This is obviously difficult for Google to process.
Even if your users see an error page, Google may (at least) treat it as a low-quality page on your website, which significantly impacts the overall quality of your domain's content.
Source(s): Google
Negative OnPage Factor #65: Outbound Links
At a certain level, there is something called "PageRank leakage": you only have so many "points" to distribute, and points that leave your site cannot immediately return.
But Matt Cutts has confirmed that there are other mechanisms that specifically reward genuinely relevant and authoritative outbound links.
Websites are meant to be gateways, not dead ends.
Source(s): Matt Cutts, Nicole V. Beard
Negative OnPage Factor #66: HTTP expires from header
Setting "expires" headers on your web server can control browser caching and improve performance.
Unfortunately, depending on how they are used, they can also cause problems with search indexing by telling search engines that content may no longer be up-to-date for a longer period of time.
In the worst case, they can instruct Googlebot to stay away for longer than desired, since its analysis attempts to emulate a real user experience.
Source(s): Moz Discussion
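A common compromise is to cache static assets aggressively while leaving HTML effectively uncached. A sketch for Apache's mod_expires (the types and lifetimes are illustrative choices, not a recommendation from Google):

```apache
ExpiresActive On
# Static assets can be cached for a long time
ExpiresByType image/png "access plus 1 month"
ExpiresByType text/css  "access plus 1 week"
# HTML stays fresh so crawlers and browsers re-fetch it promptly
ExpiresByType text/html "access plus 0 seconds"
```

This keeps the performance benefit for assets without telling crawlers that your actual content is stale for weeks.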
Negative OnPage Factor #67: Sitemap Priority
Many theorize that the "priority" attribute assigned to individual pages in an XML sitemap affects crawling and ranking.
Similar to other signals you send to Google via Webmaster Tools, it's unlikely that some pages will actually rank higher just because you requested it.
This is mainly useful for demoting less important content.
Source(s): Sitemaps.org
Negative OnPage Factor #68: Sitemap ChangeFrequency
The variable ChangeFreq in an XML sitemap is intended to indicate how often the content changes.
It is suspected that Google may not recrawl the content faster than you would expect a change to occur.
It is unclear whether Google actually honors this attribute.
If it does, the effect would apparently be similar to adjusting the crawl rate in Google Webmaster Tools.
Source(s): Sitemaps.org
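Both attributes live on each url entry of an XML sitemap; here a key page is promoted and a tag archive demoted (the URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://www.example.com/tags/archive/</loc>
    <changefreq>monthly</changefreq>
    <priority>0.2</priority>
  </url>
</urlset>
```

As noted above, treat these values as hints for demoting unimportant content rather than a lever for boosting rankings.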
Negative OnPage Factor #69: Foreign Language Non-Isolation
If you write in a language your target audience doesn't speak, almost no positive on-page factors can have their intended effect. Matt Cutts acknowledges that improperly isolated foreign-language content can be a stumbling block for both search engines and users.
For positive ranking factors to work, Google must be able to associate a language with the content on a page as well as with the sections of a website.
Source(s): Matt Cutts
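One established way to isolate languages for Google is the hreflang annotation in the head of each language version, alongside a lang attribute on the html element (the URLs are placeholders):

```html
<link rel="alternate" hreflang="de-CH" href="https://www.example.com/de/">
<link rel="alternate" hreflang="en" href="https://www.example.com/en/">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">
```

Each version should list all alternates, including itself, so the annotations reciprocate across the set.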
Negative OnPage Factor #70: Automatically translated text
Using Babelfish or Google Translate to quickly "internationalize" a website is a surprisingly common practice that Matt Cutts explicitly states violates the webmaster guidelines.
For those fluent in Googlespeak, this usually means "it's not just a devaluation, it's a penalty, and probably a pretty bad one."
In a Google Webmaster video, Matt categorizes machine translations as "automatically generated content".
Source(s): Matt Cutts
Negative OnPage Factor #71: Missing robots.txt
Since 2015, Google Webmaster Tools recommends that website owners add a robots.txt file to their website if one is missing.
This has led many to believe that a missing robots.txt file is bad for rankings.
We find this odd, since John Mueller from Google Search recommends removing robots.txt entirely if Googlebot is fully welcome.
We chalk this myth up to miscommunication between departments.
Source(s): John Mueller via SER
Negative OnPage Factor #72: Everything is nofollow
In an impressively inconclusive video, Matt Cutts says Google would like to see sites like Wikipedia hand-select some links to not be "nofollow", without ever specifying the value of doing so.
The apparent ranking success of websites with 100% "nofollow" on their outgoing links, such as Wikipedia, seems to indicate that no significant damage is done.
If anything, they may lose the positive value attributed to good outbound links.
Source(s): Matt Cutts
Negative OnPage Factor #73: Weak SSL Ciphers
SSL encryption is confirmed as a positive factor.
This suggests that Google wants to reward higher security for its users.
Is it therefore possible that Google also rewards the quality of security?
It would be incredibly easy for Google to test SSL cipher strength, even easier than its current, confirmed malware checks. However, there is currently no evidence for this, only logic.
Source(s): Speculation
Negative OnPage Factor #74: X-Robots-Tag-HTTP header
The most common methods for blocking search engine crawlers are found in your HTML code or in a separate robots.txt file.
However, blocking is also possible at the server level via the X-Robots-Tag HTTP header. Used correctly, this can be useful for keeping thin content out of the index.
Used unintentionally, the consequences are negative, and in our experience the obscurity of this approach makes accidental misuse more common.
Source(s): Google Developers
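On Apache, for example, the header can be attached per file type; this snippet keeps PDFs out of the index and is exactly the sort of rule that gets forgotten (the pattern is illustrative and requires mod_headers):

```apache
# Sends the robots directive in the HTTP response for matching files
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```

Because nothing appears in the HTML, checking the response headers (e.g. with your browser's network panel) is the only way to spot such a rule.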
Off-site SEO factors
The list of off-page factors can be found here.
If you need help optimizing off-page or on-page factors, feel free to contact one of our SEO experts.
