SEO — Search Engine Optimization — is a process that affects the online visibility of a website in a web search engine’s results (Wikipedia). So, why would you want to improve the SEO of your website?

Let’s say that you own a website for your business or personal blog, and you would like to increase its exposure as a marketing strategy (by helping people find your website faster and more easily). You can achieve this by editing some of your website’s content in ways that help the most popular search engines, such as Google or Bing, increase its visibility.

Take into account that search engines use SEO signals not only to determine a website’s visibility but also to rank its relevance in the search results. This ranking depends on several factors, such as the page’s title and the keywords it includes.

How does Google’s search engine find a page? According to Google’s documentation, identical searches can return different results for different users. Why? Because Google’s algorithm considers some of the following:

  • Your personal search history.
  • The type of device used for the search (tablet, phone, laptop, etc.).
  • Whether you are logged in to a Google Account or using an incognito window while searching.
  • Your geographic location and time zone.
  • The type of browser you are using.
  • The type of search you are doing.

Keep in mind that optimizing the SEO of your website is an iterative process which requires maintenance and monitoring. After some years of adding and improving the SEO of a couple of websites, I came up with my own checklist.

This list should not be considered a complete and fully detailed set of suggestions to follow, but rather a starting point for improving the SEO of your own website or web pages.

1. Choose a proper Domain Name

Choosing the domain name that you will assign to your website is one of the most difficult tasks (comparable to choosing variable names in your code :P). It should be clear, short, descriptive, easy to remember, meaningful, and it must be available!

Ideally, the domain name you choose should contain at least one of the keywords that show up in most of the pages of your website. In case you are having a hard time picking a domain name, keep in mind that there are free tools (like Panabee or Namestation) that will help you with ideas.

Avoid buying low-quality TLDs (Top-Level Domains) like .biz, because those are usually associated with spammy websites. To maximize the direct traffic to your website, try purchasing domain names with the .com or .net TLDs.

2. Optimize page URIs

Pick a custom URI with semantic meaning for each page of your website. This means URIs that are easy to read and understand, as well as easy to search for. It also helps to define a hierarchical structure for your URIs. For instance, if you are building an e-commerce website, the page that corresponds to each item that you are selling can be structured like /:brand/:item.

Avoid using numbers and abbreviations in your URIs; use words instead. You should also avoid embedding the content type as an extension in your URIs (e.g. /tables.html): the HTTP/1.1 standard provides the Content-Type header for this purpose.

Finally, avoid having several URIs that point to the same page (including ones that do not share the same subdomain). If you have no option, make use of canonical URIs (discussed below).

3. Define the title for each page

Your website should have a distinctive title tag for each one of its pages. It should be unique and short, and it should concisely describe the content of the page. Avoid including redundant information such as “Page 1”, as well as information that does not show up in the page. Some search engines, like Google, render the title that you define as part of the search result.
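For instance, a product page could define a title along these lines (the wording and site name are hypothetical):

<title>Adidas Men Shoes on Sale | My Store</title>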

4. Define keywords for each page

Identify a list of keywords that best match the content of each page, and include them in a meta tag inside the head tag of the document. Meta tags represent meta-data that will not be rendered in your page, so you should not worry about any UX impact.

There are several online tools (free and paid) that help you choose the best keywords to identify the page content. Whether you use a tool or think of the keywords on your own, try to choose specific words that best fit the page’s content.

For instance, if your page is about shoes that are for sale, do not simply stick to “shoes sale”, but add more context that will allow your website to compete with other popular ones. For example: “west coast California adidas shoes sale”.
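As a minimal sketch, the keywords above would be declared inside the head tag like this:

<meta name="keywords" content="west coast California adidas shoes sale" />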

5. Define page description

Try to come up with a unique description for each one of the pages of your website and include a meta tag inside the head tag of the document with that wording. Avoid repeating descriptions across different pages of your website.

Although description tags do not affect the search result ranking generated by search engines, they provide search engines with a summary of what the page content is about. Some search engines, such as Google, render this description as part of the search result as well.
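A minimal sketch of the description tag, with hypothetical wording:

<meta name="description" content="Shop adidas men shoes on sale, shipped from California." />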

6. Consolidate duplicate URIs — Canonical URIs

There are certain scenarios in which the same page can be addressed by different URIs. Such is the case with some e-commerce web applications that sell the same product under multiple URIs. For instance, a page showing a pair of Adidas men’s shoes may be addressable through the brand or through the category:

  • /shoes/adidas/550
  • /man/shoes/550

How can you suggest to search engines that the above two URIs are the same? You can make use of canonical URIs, which tell search engines that certain similar URIs refer to the same page.

Add a link tag inside the head tag of the document with the attribute rel="canonical" and the href attribute set to the unique URI that identifies the page.
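For instance, taking the two URIs above and choosing the brand-based URI as the canonical one, both pages would include:

<link rel="canonical" href="https://www.website.com/shoes/adidas/550" />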

7. Optimize image names and alternative text

Whenever you include images in your pages by making use of the img tag, keep in mind that you are adding visual content which can include (non-visible) alternative text that identifies this image. I encourage you to make use of the alt attribute and include some words that describe the image.

<img src="adidas-men-shoes.jpg" alt="adidas men shoes blue" />

This alternative text is displayed in place of the image in case the image file cannot be loaded. It is also used by search engines to index the image properly.

But this attribute is not only useful for SEO purposes: it also aligns with the principle of web accessibility. People with vision disabilities benefit from this alternative text through screen readers that read the text out loud.

Try customizing the names and the alternative text of the images using some of the keywords included in the page’s meta keywords. Avoid names such as image1.jpg or pic.png.

8. Use a sitemap.xml

Sitemap files are known as URI inclusion protocols: they suggest to search engines which URIs in your website should be crawled. A sitemap consists of an XML file, usually named sitemap.xml, which lists all the URIs in your website and should be placed at the root of the application:

https://www.website.com/sitemap.xml
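A minimal sketch of such a file, listing a single page (the lastmod and changefreq values are optional and hypothetical):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.website.com/shoes/adidas/550</loc>
    <lastmod>2019-06-01</lastmod>
    <changefreq>monthly</changefreq>
  </url>
</urlset>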

You can also create a separate sitemap that will list the images used in your website.

There are many free and paid tools out there that can auto-generate these sitemap files for you (I find ScreamingFrog to be one of the best). If you are developing your website or web application using a web framework, you may find free tools that auto-generate the sitemaps at deployment time. In the Ruby on Rails world, I used to make use of the sitemap_generator gem.

9. Use a robots.txt

In contrast to sitemap.xml, the robots.txt file is known as a URI exclusion protocol: it tells search engines which parts of the site should not be crawled.

Its usage is simple. Say a crawler (or robot) wants to visit a website URI such as https://www.website.com/home. Before it does so, it first checks for https://www.website.com/robots.txt and finds the following content in that file:

User-agent: *
Disallow: /

The “User-agent: *” line means that this section applies to all crawlers. The “Disallow: /” line tells the robot that it should not visit any page on the site. Although the usage of the * character may look like a regular expression, it is no such thing. For a deeper understanding of how to define the robots.txt file, refer to the robots exclusion protocol documentation.
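As a less restrictive sketch, the following file blocks only a hypothetical /admin/ section and points crawlers to the sitemap:

User-agent: *
Disallow: /admin/
Sitemap: https://www.website.com/sitemap.xml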

Like the sitemap.xml, the robots.txt file should be placed at the root of your project:

https://www.website.com/robots.txt

10. Make use of structured data

Structured data is meta-data that search engines use to understand the content rendered by your website. Say your website is an e-commerce application. It may seem clear to you that your website is all about selling products, but search engines may not be able to easily interpret what type of products your website is trying to sell.

At the implementation level, structured data is extra meta-data added to the document which provides crawlers with additional information about what the page content is all about. Search engines usually use this information to rank the search result higher.

Google makes use of structured data to enable special search result features and enhancements. For instance, recipes or restaurant pages may be eligible to appear in a graphical search result, which adds a lot of value to your website’s ranking.

You will generally find structured data expressed in two different formats:

On one hand, Schema.org encourages the addition of micro-data to your HTML code: that is, the inclusion of special attributes in the HTML markup that carry a specific meaning and purpose.
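As a minimal sketch, a hypothetical product page could annotate its markup like this:

<div itemscope itemtype="https://schema.org/Product">
  <span itemprop="name">Adidas men shoes</span>
  <span itemprop="brand">Adidas</span>
  <span itemprop="description">Blue running shoes on sale.</span>
</div>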

On the other hand, JSON-LD encourages the usage of the JSON text format to transfer linked data. The JSON-LD format was developed by the World Wide Web Consortium (W3C), and one of its great benefits is that its data is not interleaved with the HTML code. Instead, you include this meta-data in a script tag in the document (in the head or at the end of the body).
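The same hypothetical product expressed as JSON-LD would look like this:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Adidas men shoes",
  "brand": "Adidas",
  "description": "Blue running shoes on sale."
}
</script>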

11. Other social media tools

Twitter Cards are a way of enriching your tweets: when your website’s URI gets included in a tweet, a card section will appear attached to the tweet.

To enable this, you just need to add a few lines of meta-data markup to your website. It’s quite simple to implement:

  1. Choose a card type (there are four types of cards available).
  2. Add the meta tags to the page (a sample is shown after this list).
  3. Validate that the card is being correctly generated by testing your website with the Twitter Card validator tool.
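As a sketch, the meta tags for a basic summary card would look like this (the title, description and image are hypothetical):

<meta name="twitter:card" content="summary" />
<meta name="twitter:title" content="Adidas men shoes on sale" />
<meta name="twitter:description" content="Blue running shoes, shipped from California." />
<meta name="twitter:image" content="https://www.website.com/adidas-men-shoes.jpg" />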

In addition to Twitter Cards, there are other meta-data tags that can be used to enrich your website’s presence on social networks. For instance, the Open Graph protocol enables websites to become rich objects in a social graph. It is used by Facebook to allow web pages to have the same functionality as any other object on Facebook.

Each page in your website can be transformed into a graph object by adding some basic meta-data tags to the page’s content. This allows your website to become closely integrated with Facebook: several Facebook plugins, such as Like buttons, Activity feeds, and Login buttons, can be integrated into it.
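As a sketch, the basic Open Graph tags for a hypothetical page would look like this (note that Open Graph uses the property attribute instead of name):

<meta property="og:title" content="Adidas men shoes on sale" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://www.website.com/shoes/adidas/550" />
<meta property="og:image" content="https://www.website.com/adidas-men-shoes.jpg" />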

Conclusion

Improving the SEO of your website is a must if you are looking to increase its visibility and exposure. The checklist above provides only a summary of some of the items that you need to take care of when working on SEO.

There are many useful online tools that can help you measure your website’s SEO and provide you with feedback and suggestions to improve it. Google Analytics is a great tool to measure your website’s traffic, while Google Search Console lets you submit your website’s sitemap.xml file.

It is also important that you account for SEO in your project estimations. Bear in mind that optimizing the SEO is an iterative process which requires maintenance, testing and monitoring.



