Yao Yining

How To Ensure Baidu Crawls And Indexes Your Site Correctly


Sometimes, you might not want search engines to crawl or index some pages of your website – for example, if they contain confidential documents, are only meant for registered users, or are out of date.

Normally, you can achieve this with approaches such as meta robots tags, robots.txt, or 301 redirects. Unfortunately, different search engines follow different rules – and this is especially the case with Baidu. If you want to introduce your site to Baidu, you have to consider what Baidu supports. In this blog post I will summarise what Baidu supports compared to Google.

Canonical Tag


You should use a canonical tag when you have two very similar or duplicate pages and want to keep both of them, but don’t want to be penalised for duplicate content issues. Adding a canonical tag in the source code will tell the search engines which page is the correct version to index.

Both Baidu and Google support canonical tags, but neither complies with them 100 percent of the time. It is recommended to use canonical tags only on duplicate or very similar pages: if Baidu finds that your site uses canonical tags incorrectly, it will simply ignore all canonical tags on your site.
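As an illustration, suppose a page is reachable both at a clean URL and with a tracking parameter (both URLs here are made up). The duplicate version would carry a canonical tag in its `<head>` pointing at the preferred version:

```html
<!-- On the duplicate page http://www.yourdomain.com/page?ref=promo,
     telling search engines that the clean URL is the version to index: -->
<link rel="canonical" href="http://www.yourdomain.com/page" />
```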

Meta Robots


By adding meta robots tags to the source code, you can tell robots how to handle certain pages on your site, such as disallowing search engines from following certain links, indexing a page, or showing snippets in the search engine results pages. At the moment, Baidu only supports “nofollow” and “noarchive”; I believe Baidu does not yet support the rest. This means that even if you implement “noindex” on a page, Baidu will still index it. Google, on the other hand, supports all of the above.
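A minimal sketch of a meta robots tag using only the two values Baidu is said to support:

```html
<!-- Ask crawlers not to follow links on this page and not to keep a
     cached copy. Baidu honours "nofollow" and "noarchive"; a "noindex"
     value would reportedly be ignored by Baidu, so the page could still
     be indexed. -->
<meta name="robots" content="nofollow, noarchive">
```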

301 Redirect


301 redirects are used for permanent URL redirection. When you have a page that you no longer want to use, you can 301-redirect it to a new page. Both Baidu and Google support 301 redirects.
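A 301 redirect is normally configured on the web server rather than in the page itself. A minimal nginx sketch, assuming hypothetical old and new paths (not a complete server configuration):

```nginx
server {
    listen 80;
    server_name www.yourdomain.com;

    # Permanently redirect a retired page to its replacement.
    location = /old-page.html {
        return 301 /new-page.html;
    }
}
```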

Robots.txt


Robots.txt is a text file placed in the root of the website. It is used when you want to exclude some content on your site from the search engines. Both Baidu and Google support robots.txt.

Below is the list of Baidu spiders. You can disallow some of them by adding the corresponding directives to robots.txt. For example, if you want Baidu to crawl only the video files in the folder www.yourdomain.com/video/, you can add this to the robots.txt:

User-agent: Baiduspider
Disallow: /

User-agent: Baiduspider-video
Allow: /video/
Disallow: /
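If you want to sanity-check rules like these before deploying them, Python's standard `urllib.robotparser` can evaluate them. A quick sketch with a made-up domain; the Baiduspider-video group is listed first, and carries an explicit `Disallow: /`, for the reasons noted in the comments:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical domain. The more specific Baiduspider-video group is
# listed first because urllib.robotparser obeys the *first* group whose
# user-agent matches, while real crawlers are expected to pick the most
# specific matching group. The explicit "Disallow: /" keeps the video
# spider out of everything except /video/.
rules = """\
User-agent: Baiduspider-video
Allow: /video/
Disallow: /

User-agent: Baiduspider
Disallow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# The video spider may fetch files under /video/ ...
print(rp.can_fetch("Baiduspider-video", "http://www.yourdomain.com/video/clip.mp4"))  # True
print(rp.can_fetch("Baiduspider-video", "http://www.yourdomain.com/about.html"))      # False
# ... while the general spider is blocked everywhere.
print(rp.can_fetch("Baiduspider", "http://www.yourdomain.com/video/clip.mp4"))        # False
```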

Baidu spider user agents

HTTP and HTTPS


I also want to mention Hypertext Transfer Protocol Secure (HTTPS). Unlike Google, Baidu doesn’t index HTTPS pages. To get the content indexed, Baidu advises webmasters to create a corresponding HTTP page with the same content as the HTTPS page and to redirect Baidu spiders to the HTTP page. Baidu will then be able to index that HTTP page.
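Purely as a sketch of the workaround described above (not a recommendation, and assuming nginx with a hypothetical domain), the redirect for Baidu's spiders could look like this:

```nginx
server {
    listen 443 ssl;
    server_name www.yourdomain.com;
    # ssl_certificate / ssl_certificate_key directives omitted here.

    # Send Baidu's spiders to the HTTP version of the same URL so it
    # can be crawled and indexed; normal visitors stay on HTTPS.
    if ($http_user_agent ~* Baiduspider) {
        return 301 http://www.yourdomain.com$request_uri;
    }
}
```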

Below is how Baidu cached Alipay: note that Baidu cached the HTTP page.


Cached page of Alipay by Baidu

But visitors will be redirected to the HTTPS page:


Visitors will see the HTTPS page.

Personally, I don’t think this is a very good method. Since Google indexes HTTPS pages, this method might result in a duplicate content issue with Google. So if you have content that needs to be indexed by both Google and Baidu, I suggest using only an HTTP page.


Yao Yining

General Manager at Webcertain
As a General Manager, Yao leads Webcertain's Beijing-based team in China. He has more than 7 years' experience in online marketing, focusing especially on SEO and SEM. Prior to heading up the Beijing office, Yao was based in the UK.

9 Responses to How To Ensure Baidu Crawls And Indexes Your Site Correctly

  1. Véronique Duong says:

    Hello all, it seems that Baidu can’t recognize the rel=”canonical” for the moment. Instead of this attribute, we could use rel=”index”. Thanks! Veronique

  2. Sean Brunner says:

    Hi Yao,

    Is this article still up to date? I have heard that as of May 25th Baidu actually prefers https over http but have found little information to confirm.

    Appreciate any updates,

    Sean

    • Yao Yining says:

      Hi Sean,
      Thanks for your reply. Yes, Baidu officially announced on 25th May that they can index HTTPS pages, and they prefer HTTPS pages if both HTTP and HTTPS versions exist.
      Thanks for your comments; the article has been updated accordingly. :)
      Best,
      Yao

  3. Joseph Barrington-Lew says:

    Could someone please help get my website (above) crawled by Baidu

    Thank you

    Joseph

  4. Awais Irshad says:

    Thank you very much. I have added sitemap and it started showing the crawls and indexed pages. :)

  5. Devesh Verma says:

    Hi Yao

    Thanks for the above information. I agree Baidu does not seem to index HTTPS sites; however, only sites with extremely high traffic should even consider doing this, as it seems to hurt smaller sites. I believe the main reason behind this is that it is not common for Chinese sites to use HTTPS, since most payments are made via mobile QR codes such as AliPay, so payment details are never really provided over websites.

    Hope this adds value to your post!

  6. Pingback: Der SEO-Wochenrückblick KW 45 - SEO Trainee | Ab hier geht´s nach oben

