Why Is My Website Not Showing in Google Search Results?

It can take a week or so for Google to index a page. If your page does not appear in the index, check back in a week to see whether it has been indexed. If you made significant changes to your site recently, check back in a few days to see whether the page is still missing.

In this post, I am going to walk through the issues that can stop your website from showing in Google search.

Let's go through these issues one by one and see what each of them is and how we can fix it.

Given below are the seven issues that most commonly cause indexing problems.

  • Alternate page with proper canonical tag (Blogger)
  • Blocked by robots.txt
  • Duplicate without user-selected canonical
  • Page with redirect (Google Search Console)
  • Discovered but currently not indexed
  • Crawled but currently not indexed
  • Redirect error (Google Search Console)

Now let's look at each of these issues in turn.

Alternate page with proper canonical tag (Blogger)

Google Search Console displays the “Alternate page with proper canonical tag” message when two versions of a page point to the same canonical URL. Google simply excludes the duplicate version and indexes the main version of the page.

To check the canonical status of your pages, go to Coverage and select “Alternate page with proper canonical tag”. Review the pages listed there and decide whether they should be canonicalized. If you find a page that should be indexed in its own right, update its canonical link to point to the page itself. Then move on to the next step.

If your site is a Blogger site, a canonical tag is applied automatically to the ?m=1 (mobile) version of each page so that the desktop page is correctly indexed (the desktop URL is treated as the primary version, and the ?m=1 URL as its mobile alternate).

In that case, there is nothing to "correct", because this behavior is normal.
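
For illustration, here is roughly what the head of a hypothetical Blogger mobile page contains (both URLs are placeholders): the ?m=1 page declares the desktop URL as its canonical, so only the desktop version gets indexed.

<!-- on https://yourblog.blogspot.com/2023/01/sample-post.html?m=1 -->
<link rel="canonical" href="https://yourblog.blogspot.com/2023/01/sample-post.html">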

Blocked by robots.txt

Google discovered links to URLs that your robots.txt file blocks. To fix this, you must examine those URLs, decide whether you want them indexed or not, and then make the appropriate changes to your robots.txt file. Let's go over the steps you must follow to fix this error.

  • Export the list of URLs from Google Search Console.

The URLs that Google Search Console marks as "Indexed, though blocked by robots.txt" can be exported as a list.

  • Examine the URLs and decide if you want to have them indexed or not.

Check which URLs you want search engines to access and index, and which ones you don't.

  • Edit the robots.txt file. To do that, log in to your Blogger site.

You will land in the Blogger dashboard.

  • In the Blogger admin menu, go to Settings.

  • In the Settings screen, click Custom robots.txt.

Update your robots.txt file so that Google is allowed to access the URLs you want indexed and blocked from the URLs you don't want indexed (a sample robots.txt is shown after these steps).

  • Visit Google Search Console once more, then select Validate fix.

Open the Index Coverage report, navigate to the issue's page, and click the Validate Fix button there. This asks Google to check your URLs against your robots.txt once more.
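
For reference, here is roughly what a default Blogger robots.txt looks like; treat it as a sketch and substitute your own blog's address. The Disallow: /search line keeps label and search-result pages out of the crawl while leaving the posts themselves crawlable.

User-agent: *
Disallow: /search
Allow: /

Sitemap: https://yourblog.blogspot.com/sitemap.xml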

Duplicate without user-selected canonical

Duplicate without user-selected canonical is a common index coverage problem that stops a web page from being indexed. It occurs when two or more web pages contain identical content and no user-selected canonical tag.

In this case, Google assigns a canonical URL to the related pages itself and indexes the page it considers to be the original. This can prevent the page you actually wanted indexed from appearing in search results.

How to fix the Duplicate without user-selected canonical issue on a website?

First, inspect the affected URL and verify whether Google treats the page as the canonical version or as a duplicate. Any canonical page that is being ignored because of this coverage problem needs to be fixed.

  • Step 1: Modify the original page's canonical tag

A canonical tag is an HTML element placed in the page's <head> section. It tells search engine crawlers which version of a page is the primary one.

If an original page that should be indexed is excluded under "Duplicate without user-selected canonical", update that page's canonical tag so that it points to the page itself.

The canonical syntax is as follows:

<link rel="canonical" href="https://yourdomain.com/original-page">

Once the canonical tag has been updated, the duplicate page that Google had designated as canonical must be removed from the Google index using the following steps.

  • Step 2: Use a 410 and remove the duplicate page from the XML sitemap

After updating the canonical tag on the original page, remove the duplicate pages by returning the 410 (Gone, meaning the content is permanently removed) HTTP response code; a server-level sketch follows below.

You can also use the Removals tool in Google Search Console.

After removing the duplicates via GSC or the 410 status code, remove their URLs from the XML sitemap as well.

If a page is listed in an XML sitemap, crawlers will read and index it. Therefore, remove every duplicate page from the sitemap.xml file and keep only the canonical version.
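
Blogger itself gives you no way to return a 410, but on a self-hosted Apache server a duplicate page can be marked as gone with a single mod_alias rule. A minimal sketch, using a made-up path:

# .htaccess: return 410 (Gone) for a removed duplicate page
Redirect gone /duplicate-page.html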

  • Step 3: Redirect to the source page

In cases where Google has chosen the HTTP version of a page as canonical rather than the HTTPS version, redirect traffic from the HTTP version to the HTTPS version (a sketch of the redirect follows below).

By doing this, you help Google and other search engines identify HTTPS web pages as the authoritative version.

Additionally, delete the HTTP version from the XML sitemap so that it is not included in indexing.
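
On an Apache server, for example, the HTTP-to-HTTPS redirect is a standard mod_rewrite block like this sketch (Blogger and most managed hosts handle this automatically):

# .htaccess: send all HTTP traffic to the HTTPS version with a 301
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]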

  • Step 4: Now click on the VALIDATE FIX button

Once you have completed all of the steps above, use the Validate Fix option in the Google Search Console dashboard.

This asks Googlebot to re-check the issue and clear the "Duplicate without user-selected canonical" status from the affected pages. Validation typically takes up to 28 days from the date of the request.

Page with redirect (Google Search Console)

The Google Search Console Index Coverage report details how Google indexed a specific website. Most of the time, a "Page with redirect" entry does not need to be fixed or validated, because redirects are a normal part of operating a website.

How to fix page with redirect errors in Google Search Console?

For the vast majority of websites, Coverage > Page with redirect entries do not influence Google results. Still, since there are countless ways a website can be set up, it is worth checking the following:

  • Make sure the relevant URLs are listed in the XML sitemaps you submit to Google.
  • Ensure that all redirection rules are correctly configured (including both the www and non-www versions of the domain); a quick way to trace a redirect chain is sketched after this list.
  • Make sure there are no broken links on the website.
  • Make sure the right URL structure is used for all internal links.
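
If you want to see exactly where a URL ends up, a small script like this sketch (using Python's requests library; the URL is a placeholder) prints every hop in the redirect chain:

import requests

# Placeholder URL: replace with the page reported in Search Console
url = "https://example.com/old-page"

resp = requests.get(url, allow_redirects=True, timeout=10)

# Each entry in resp.history is one redirect hop
for hop in resp.history:
    print(hop.status_code, hop.url)
print(resp.status_code, resp.url)  # final destination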

Why Do Pages with Redirect Problems Continue to Appear in Index Coverage Reports After Fixing?

Even after you validate the fix for the page-with-redirect issues and apply all of the recommended website maintenance settings, Search Console may still display URLs with redirect problems. There are two common explanations for that.

  • Google keeps observing that your website's old URLs redirect to new URLs because of some setup issue, so even though you have validated the fix and checked everything, the report still lists them.
  • A large number of automated bots and scrapers crawl the web. Many of these bots steal content from websites and republish it on their own spammy sites, and in doing so they often mangle the URL patterns of the site they scraped. Google can pick up these broken links and display them as Page with redirect entries in the Index Coverage report.

Discovered but currently not indexed

This message signifies that Google has discovered the page but has not yet crawled it. Despite being aware that a page exists, Google may choose not to crawl it for a number of reasons.

One reason can be technical: Google may have tried to crawl the URL at a moment when the website was overloaded, and postponed the crawl.

Why is Google taking so long to crawl my website pages?

Google crawls websites at periodic intervals, which explains why it might take longer for your pages to be crawled.

Several factors affect how frequently Google crawls a page, including:

  • How relevant Google considers the website;
  • How frequently new content is published on the website;
  • The speed of the website and its servers;
  • How many URLs there are to crawl;
  • Whether the site has errors that waste crawl budget.

Depending on these signals, Google will gradually adjust how frequently it crawls the pages on your website.

How to fix ‘Discovered – currently not indexed’

Manually request that Google crawl the page.

If you published the page some time ago and Google still hasn't crawled it, it's time to ask Google manually to crawl it.

Follow these steps to ask Google to index a page:

  • Open Google Search Console's URL Inspection tool (either at the top or in the sidebar);
  • Enter the URL you want Google to crawl;
  • Press Enter (or Return) and wait for the URL report;
  • If you want Google to add this URL to its crawl queue, click "Request Indexing".

You only need to do this once. Pressing "Request Indexing" repeatedly won't make Google crawl the page any more quickly.

Crawled but currently not indexed

What is ‘Crawled – currently not indexed’ in Google Search Console?

According to the "Crawled – currently not indexed" report, the page is eligible to appear in Google's index, but Google has chosen not to include it. Google gives no more specific reason, but this status often affects pages that Google considers low quality.

How to Fix Crawled – Currently Not Indexed Status

There is no need to manually request re-indexing for pages with this status: Google Search Console Help states that Google will eventually reconsider the page.

Nevertheless, here are some actions you can take to make it more likely that Google indexes these pages the next time it crawls your website.

1. Improve Content

A page's content can be improved by adding information that is helpful to visitors as well as by expanding thin pages with more substantial text. Consider the role a given page plays in a user's experience of your website.

This applies, of course, to pages you want users to view. If a page is not useful to users, such as archive and feed pages, it is completely acceptable to leave it alone. To save crawl budget, you might even want to stop such pages from being crawled, as in the sketch below.
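
On Blogger, for example, the post feeds live under /feeds/, so a custom robots.txt rule like this sketch would keep them from consuming crawl budget (verify the path on your own site before using it):

User-agent: *
Disallow: /feeds/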

2. Increase internal links

More internal links pointing to a page resolve two issues at once: Google will crawl the page more frequently, and Google will assign it more value. If you have related content on your website, such as blog posts, add a few internal links from them to the pages Google has crawled but not yet indexed, as in the example below.
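
An internal link is just an ordinary anchor in the body of a related post; the URL here is a placeholder for one of your own unindexed pages:

<a href="https://yourdomain.com/crawled-but-not-indexed-post">descriptive anchor text</a>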

3. Reduce Click-Depth

Click depth is the number of clicks a user must make to arrive at a particular page. If a person has to click many times before reaching the targeted page, the user experience suffers, and Google may regard the page as unimportant. One or two clicks would be a fair number for the most crucial pages; I wouldn't go beyond four clicks, because that is already too deep.

Redirect error (Google Search Console)

If you encounter a "redirect error", it signifies that Googlebot was unable to reach the page because it redirected to a broken or nonexistent page.

Redirect error in Google Search Console: why this error occurs

Generally, Google uses the two types of crawler given below:

  • Googlebot smartphone
  • Googlebot desktop

If your website is hosted on Blogger, its mobile URLs end with ?m=1. When you submit a URL for indexing without ?m=1 and Googlebot smartphone crawls it first, a redirect is reported. However, if Googlebot desktop has already crawled it, there won't be a problem.

On Blogger, a URL ending in ?m=1 tends to be indexed quickly because it is the mobile version, and nowadays most visitors view websites on their smartphones. As a result, Googlebot smartphone usually crawls a site first and indexes that version of the page.

The original URL (without ?m=1) is the desktop version. When Googlebot desktop crawls your website, that URL is indexed as well, although Googlebot desktop crawls much less often than the mobile Googlebot. You can observe the mobile redirect yourself with the sketch below.
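
A minimal sketch using Python's requests library (the Blogspot URL is a placeholder): it fetches the desktop URL with a phone user agent, the way Googlebot smartphone would, and prints whatever redirect the server issues.

import requests

# Placeholder Blogger post URL: replace with one of your own
url = "https://yourblog.blogspot.com/2023/01/sample-post.html"

# Pretend to be a phone, as Googlebot smartphone does
mobile_headers = {
    "User-Agent": "Mozilla/5.0 (Linux; Android 12; Pixel 6) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/120.0 Mobile Safari/537.36"
}

resp = requests.get(url, headers=mobile_headers, allow_redirects=False, timeout=10)
print(resp.status_code, resp.headers.get("Location"))  # e.g. a redirect to ...?m=1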

Frequently Asked Questions (FAQ)

  • Why are my pages not getting indexed?
Technical problems, poor on-site content quality, or low overall domain authority can all contribute to Google not indexing your website or specific pages.
  • How do I fix Google indexing errors?
The method for fixing indexing errors is given below:
  1. Inspect the affected URL.
  2. Review the details of the error.
  3. Run a live test on the URL.
  4. Correct the mistake and click REQUEST INDEXING.
  5. Return and select VALIDATE FIX.
  • How long does it take Google to index a site?
Note that you can only request indexing for URLs that you control. Crawling can take anywhere from a few days to a few weeks to start. Be patient and monitor progress using the URL Inspection tool or the Index Status report.
  • How often will Google crawl my site?
Google typically crawls a website somewhere between every four and every thirty days, depending on how frequently it is updated. Since Googlebot tends to look for new content first, sites that are updated more often tend to get crawled more quickly.
