A noindex tag is an on-page instruction that tells search engines not to include the page in their search results; it is one of the standard methods of blocking indexing on a website. The most common ways to "noindex" a page are to add a robots meta tag to the head section of the HTML or to send the directive in the HTTP response headers. For search engines to see this instruction, the page must not already be blocked (disallowed) in a robots.txt file.
If the page is blocked via your robots.txt file, Google will never see the noindex tag and the page could still appear in the search results.
To tell search engines that your page should not be indexed, add the following robots meta tag to the head area of the page:

<meta name="robots" content="noindex">
Alternatively, the noindex tag can be used in an X-Robots tag in the HTTP header:
X-Robots-Tag: noindex
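Both placements (the robots meta tag in the HTML head and the X-Robots-Tag response header) can be checked with a short script. Here is a minimal Python sketch using only the standard library; the is_noindexed helper and the sample markup are illustrative, not part of any official API:

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the content values of <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if tag == "meta" and attr.get("name", "").lower() == "robots":
            self.directives.append(attr.get("content", "").lower())


def is_noindexed(html, headers):
    """True if the page carries a noindex directive in either placement."""
    # Placement 1: the X-Robots-Tag response header (works for any file type).
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    # Placement 2: a robots meta tag in the HTML head.
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)


page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(is_noindexed(page, {}))  # True: meta tag found
print(is_noindexed("<html><head></head></html>", {"X-Robots-Tag": "noindex"}))  # True: header found
print(is_noindexed("<html><head></head></html>", {}))  # False: indexable
```

A real crawler would fetch the URL and pass the actual response body and headers to the same function.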
If you have decided to mark some pages with noindex, here are some best practices.
Pages blocked in the robots.txt file can still end up indexed: if other sites link to them, search engines may index the URL without ever crawling its content. And since Google has to recrawl a page to read a newly added noindex tag, make sure the page remains accessible to the crawler.
Please note that pages with the noindex tag will not pass on link equity to other pages in the long term.
John Mueller from Google explained that if a page stays noindexed for a long time, Google eventually removes it from the index completely and stops crawling the links on it. Even if the page is set to "noindex, follow", Google will treat it the same as "noindex, nofollow" in the long run. What "long run" means here, however, is not clear-cut and depends on several factors.
Using the noindex tag is not the best way to handle duplicate content on your website.
To consolidate duplicate pages on your website, use canonical tags. Proper canonicalization instructs search engines to index only the main (canonical) version of the page.
However, the link signals from all non-canonical versions of a page are consolidated, giving the canonical version a boost.
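As an illustration, a duplicate URL points to its main version with a link element in the HTML head; the URLs below are placeholders:

```html
<!-- Placed on the duplicate, e.g. https://example.com/shoes?sort=price -->
<link rel="canonical" href="https://example.com/shoes">
```

Unlike noindex, this keeps the canonical URL in the index while consolidating the signals of its duplicates.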
Monitoring your website's SEO health can protect you from traffic losses related to indexability issues. For example, pages or entire sections of a website might be accidentally marked as noindex.
You can use Ahrefs' site audit tool to keep an eye on the SEO health of your website.
In terms of search engine optimization, the noindex meta tag offers a clean way to keep duplicate content out of the index. Since Google and other search engines can devalue pages with duplicate content, controlling what gets indexed is important.
Adding "follow" to the tag retains the option to follow all links on the non-indexed page.
Many content management systems (CMS) automatically create a large number of archive pages that can quickly end up in the index. In extreme cases, such a flood of thin, indexable pages can be treated as spam. The noindex directive can be used to avoid such risks.
Noindex can also be useful when relaunching a website or launching a new version of a page. Everyone involved in the project can test the functionality of the new page "live" without certain areas being indexed by a search engine.
It is important to remove the noindex directive from the source code once the website goes live. Only then can Googlebot or Bingbot index the page, and only indexable URLs can achieve a ranking.
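Forgetting this step is a common mistake, so a pre-launch check can help. Below is a minimal sketch that assumes the site is exported as static HTML files; the leftover_noindex helper and its regex are illustrative and only catch meta tags whose name attribute precedes content:

```python
import pathlib
import re

# Matches <meta name="robots" ... content="...noindex...">. A simplification:
# it assumes the name attribute comes before content, as most CMSs emit it.
NOINDEX_RE = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]*content=["\'][^"\']*noindex',
    re.IGNORECASE,
)


def leftover_noindex(root):
    """Return the HTML files under root that still carry a noindex meta tag."""
    return sorted(
        str(p)
        for p in pathlib.Path(root).rglob("*.html")
        if NOINDEX_RE.search(p.read_text(errors="ignore"))
    )
```

Running this against the export directory before launch lists every page that would silently stay out of the index.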
Use the noindex tag on pages that should stay out of Google's index. It's crucial to keep less important pages out of the index, as Google doesn't have the resources to crawl and index every page on the web. At the same time, identify your valuable pages that should be indexed and prioritize their optimization. Let's look at which types of pages should carry the noindex tag.
Still, pages that shouldn't be indexed often end up in the index anyway. Why does that happen?
Noindex means that a page should not be indexed: a page with this directive will be crawled, but not added to the index. For the directive to be read at all, make sure your robots.txt file does not block the page from being crawled.
Pages linked from other websites can be indexed even if they are excluded in the robots.txt file. If this happens, only the anchor text and URL will be displayed in search engine results.
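You can verify crawlability against your robots.txt rules before relying on a noindex tag. Here is a small sketch using Python's standard-library robotparser; the rules and URLs are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules; replace with the contents of your own robots.txt.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = RobotFileParser()
rp.parse(rules)

# The crawler never reaches this page, so a noindex tag on it is invisible:
print(rp.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False
# This page is crawlable, so its noindex tag can be read and honored:
print(rp.can_fetch("Googlebot", "https://example.com/blog/post.html"))  # True
```

If can_fetch returns False for a page you are trying to noindex, the directive will never take effect.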
(Screenshot omitted: how such URL-only results appear on the SERP. Image source: Webmasters Stack Exchange.) This robots.txt blocking issue can be resolved in several ways.
Let's say you've created a new website or new content and blocked it via a disallow rule in your robots.txt file, or you've recently signed up for Google Search Console and are now seeing "blocked by robots.txt" notifications. Keep in mind that robots.txt controls crawling, not indexing; Google stopped supporting a noindex rule inside robots.txt back in 2019. There are several ways to fix the "blocked by robots.txt" problem, depending on where the blocking rule comes from.
It's also possible that Google Search Console (GSC) is sending you these notifications even if you don't have a robots.txt file. Content management systems (CMS) like WordPress may already have a robots.txt file created, and plugins can also create robots.txt files.
Overwriting such a virtual robots.txt file with a physical one of your own can lead to conflicting rules and confusing reports in GSC.
There are several tools you can use to implement noindex tags on your website:
Google Search Console does not insert noindex tags for you, but it lets you monitor how your pages are indexed, inspect individual URLs, and request the temporary removal of URLs from Google's search results. The noindex tag itself is placed in your website's HTML code or HTTP headers.
If your website was created with WordPress, you can use the Yoast SEO plugin to add noindex tags to specific pages or sections of your website.
Your website's robots.txt file controls crawling, not indexing. A line such as «Disallow: /» blocks crawlers from your entire site; it does not noindex anything, and a blocked page can still appear in search results if other sites link to it. Use robots.txt to manage crawling and the noindex tag to manage indexing.
Another way to set a noindex directive is via the HTTP response header, by sending «X-Robots-Tag: noindex» for the page you want to keep out of the index. This also works for non-HTML resources such as PDFs, where a meta tag cannot be placed.
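On the server side, this header is typically attached in the web server configuration. A hypothetical nginx sketch (the /internal/ path is a placeholder; other servers such as Apache use their own syntax):

```nginx
# Send a noindex header for every URL under /internal/
location /internal/ {
    add_header X-Robots-Tag "noindex";
}
```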
Regardless of the tool you choose, note that a noindex tag only takes effect once the page has been recrawled; until then, the page can still appear in search results. The tag also does not stop search engines from crawling the page; it only keeps the page out of the index.
© 2012-2025, MIK Group GmbH