What is technical SEO?

Technical search engine optimization (technical SEO)

Anyone who has ever dealt with topics related to search engine marketing will quickly have noticed that everyone involved in the project has to pay attention to a multitude of aspects and interdependencies: concept developers, web designers, programmers and online editors form one strategic and operational team.

A typical example is a blog

The editor writes an article that should contain the important keywords in the right places, above all in the meta tags (title and description) and in the subheadings (headlines, levels h1 to h4). The concept developers and designers have agreed in advance how and where on the website the articles appear, and the programmer has ensured that the programming framework, which the editor cannot influence, is semantically suitable for search engines, in particular with regard to the overview page on which the articles are teased.

Technical search engine optimization is therefore not the only instrument for a successful placement "at the top" of the search results of Google & Co. Nevertheless, it is an important component of search engine optimization and can save a lot of effort in other areas.

Technical search engine optimization is therefore the focus of this article.

The programmer - often also called a web developer - creates the technical prerequisites that allow the content (text and images) to be created in an SEO-compliant manner.

A good moment to point out that the areas of responsibility cannot always be clearly separated from one another. As a rule, a website is based on a CMS (content management system), which gives the editor a larger area of responsibility than if the programmer received the texts and still had the opportunity to influence the text formatting and meta tags himself. Ultimately, Google does not care whether the website was generated via a CMS or without one, and whether it was a programmer or an editor who put the subheadings in the wrong order (e.g. a headline 2 before a headline 1, or a jump from headline 1 directly to headline 3): the verdict is "technically incorrect" either way.
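
To make the point about heading order concrete, here is a minimal HTML sketch (the heading texts are invented for illustration):

  <!-- Technically incorrect: the hierarchy jumps from h1 straight to h3 -->
  <h1>Main topic of the page</h1>
  <h3>First sub-point</h3>

  <!-- Technically correct: each level follows directly from the one above -->
  <h1>Main topic of the page</h1>
  <h2>First sub-point</h2>
  <h3>Detail within the first sub-point</h3>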

In order to get into the nitty-gritty, here is a list of examples of topics for technical search engine optimization:

  • HTML structure, CSS, JavaScript
  • Images and SEO
  • Meta tags
  • Text formatting
  • Schema.org
  • Duplicate content / overview pages
  • Page load speed / Pagespeed
  • Mobile SEO and responsive design
  • Google Search Console (previously: Google Webmaster Tools) and sitemaps

HTML structure, CSS, JavaScript

Not only browsers (Firefox, Chrome, Safari) but also search engines like Google appreciate correct, i.e. valid, source code. One more reason to pay attention to an error-free structure of HTML and CSS (stylesheets), i.e. to create layers (div tags), lists, paragraphs and image tags in the correct order, with closing tags, and so on. "Correct" is, however, not beyond discussion: with the W3C standard there has been an attempt to establish uniform web standards since 1994, but ultimately these are only recommendations that browser manufacturers follow only to a limited extent. In addition, these recommendations do not quite keep up with actual developments, e.g. with regard to the newer standards HTML5 and CSS3. In practice this means: what a W3C validator evaluates as error-free and therefore valid code is not always state of the art, and vice versa: what the W3C validator flags as incorrect can be the new HTML5 standard, or an "old standard" kept for older browsers.
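
A minimal sketch of what "correct order with closing tags" means in practice (element contents are invented):

  <div class="article">
    <p>A paragraph that is properly opened and closed.</p>
    <ul>
      <li>A list item inside a real list element</li>
      <li>Another list item</li>
    </ul>
    <img src="beispiel.jpg" alt="Short description of the image">
  </div>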

What does Google say about a W3C validation that is not entirely error-free?

The short summary: Google is not that strict. Meaning: Google wants to be able to orient itself to the (latest) standard structures, so they should be in place, but a missing closing tag is not particularly important. More important to Google than a zero-error validator result is lean HTML code, i.e. code with as little "trimming" as possible, so that the content - which is what interests Google (and the users!) most - can be reached as quickly as possible.

JavaScript / jQuery is also rather ballast for Google: it is primarily used for presentation effects (slideshows, show/hide effects, etc.), but it delays the loading of the content. Economical use of jQuery, or even doing without it entirely, is welcome from an SEO point of view, but not exactly hip. Which reminds us once again that it is not about subordinating everything to SEO, but about always keeping it in mind and then setting priorities - in this case user-hip (jQuery) versus SEO-cool (no jQuery).

Use jQuery sensibly and not as a purely decorative element

Search engines also value good usability and positive user experiences, and modern forms of animation and presentation using jQuery can contribute significantly to both. It is therefore important to use these instruments with care: where they genuinely make the user experience more positive (for example clearer and more informative) and thus increase contacts or other conversions.

Images and SEO

In addition to text, images are the most common content elements and are rated accordingly by Google. But even if an image can say more than a thousand words to the human eye, Google would rather have words - and words can be assigned to an image, too, starting with the image file name: a JPG file called "red-snail-on-green-leaf.jpg" is more meaningful than "DCS12345.jpg". Because the file name generated by the camera was never intended to describe the image motif, there is one attribute (actually even two) for exactly that purpose: the alt attribute (commonly called the alt tag), which should always be filled in sensibly and whose absence is rightly rated as an error, also by Google.
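
A minimal sketch of such an image tag (file name, texts and dimensions are invented; the second, optional attribute mentioned above is presumably the title attribute):

  <img src="/bilder/rote-schnecke-auf-gruenem-blatt.jpg"
       alt="Red snail on a green leaf"
       title="Red snail on a green leaf after the rain"
       width="800" height="533">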

An important task of the online editor or copywriter is therefore also writing the alt texts for all images that are to be used.

IPTC metadata or Exif data

And then there is the IPTC metadata or Exif data, which contains information such as the photographer, location, subject and date of the photo. Every digital camera saves this data by default (some of it, e.g. the photographer, can be preset on the camera), and it can be read out by Google, but also by CMS systems such as WordPress. Officially, this data does not yet play a role for Google; it is only a "potential ranking factor" (Matt Cutts, 2014), which is probably also due to the fact that Google knows this topic is hardly on the radar of any web designer, programmer or content manager and therefore contains hardly any usable information. But if you do have it on your radar, you might be ahead of the game on this front.

In practice it looks more like this:

  1. Someone, ideally a professional photographer, shoots a subject with a digital camera
  2. The image file (usually a JPG) is post-processed by the photographer (contrast, brightness, sharpness)
  3. This version is sent to the web designer, who processes the image again for the web (72 dpi, suitable width/height and crop)
  4. Then it goes to the web programmer or content manager, who may crop it again (because the web designer did not know the exact dimensions required)
  5. Finally it is placed on the website.

In this case, the camera's metadata, e.g. the date the photo was taken, has been carried over and can generally also be read out on the website. This is quite interesting for Google (and website visitors), but sometimes, as a website operator, you do not want to disclose the date and time a photo was taken.

Another case from practice:

The JPG file is not edited in its original form by the web designer; instead, a section of the motif is taken over as a layer in Photoshop, e.g. as part of a web design template, and a new JPG file is then generated by the programmer. In this case, the IPTC metadata from the camera is lost and replaced by data from Photoshop, which usually says nothing about the origin of the image.

Meta tags

The two most important meta tags for search engine optimization are the page title and the description (title and description), which should always be filled in sensibly.

Again, this is actually more an editorial than a technical topic, but because this content is not (directly) visible to "normal" visitors on the website and instead sits under the hood (in the source code), it is mostly perceived as technical and should not go unmentioned at this point. It is important to stay within the specified number of characters (55 for the title and 150 for the description) and to find the right wording with the corresponding terms (keywords).
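
A minimal sketch of what this looks like in the head of a page (the texts are invented examples, kept within the character limits mentioned above):

  <head>
    <title>Red snails in the garden - tips for hobby gardeners</title>
    <meta name="description" content="How to recognise red snails, which plants they prefer and how hobby gardeners can protect their beds without chemicals.">
  </head>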

For some editors who are not used to writing for a search engine, this feels too technical. Nevertheless, it is important: a human reader understands from the context of the text that follows that an article titled "Hang gliders over the Wahner Heide" is actually about the subject of "dragonflies". That is not so easy for a search engine; the machine would construct a completely different context from the title alone.

Text formatting

Especially when it comes to the structure and design of text, the technical and editorial areas of responsibility mix. As an editor you will recognise some things you are used to from your Office application (e.g. Word), but some things work slightly differently. Paragraphs, lists, tables and subheadings also exist in Word, and the formatting bar in a CMS like WordPress is very reminiscent of the one in Word. And then "WordPress" almost sounds like "Word".

Subheadings (headlines)

Unfortunately, subheadings are often neglected, because they are not as common and important in the print sector, and because there are several levels (H1, H2, H3, ...) which also have to be used in the correct hierarchical order (no H3 before an H2, see above). And because Google likes structured content, you should also make use of lists - not in the form of manually typed dashes, but as "real" lists with bullet points. If you want to emphasise a term, use bold rather than underline, because bold means "important" to Google, while underlining is merely decoration. The good news: those are the most important rules! And they are Google-friendly BECAUSE they are reader-friendly. So you are writing for your readers and - almost in passing - for better search engine optimization as well.
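
What these rules look like in the markup a CMS typically generates - a minimal sketch with invented content (using the strong tag for bold, which editors usually produce via the "B" button):

  <h2>Subheading on the second level</h2>
  <p>A paragraph in which a <strong>particularly important</strong> term is emphasised in bold.</p>
  <ul>
    <li>A "real" list item instead of a manually typed dash</li>
    <li>Another list item</li>
  </ul>
  <h3>A third-level subheading, used only below the h2 above</h3>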

Internal links

Strictly speaking, the SEO topic of "internal links" does not belong under the keyword "formatting". It is included here for the following reasons:

  • It saves time to think about which words or word combinations you want to link to other internal pages while you are writing and formatting the text.
  • In the end, “I'll do it later” often turns into “why do we (still) have no internal links at all?”

Schema.org

Schema.org item properties are a relatively new way of structuring data and have not yet fully reached the consciousness of all editors and web programmers - probably because they have no visual impact and their logic is only apparent to someone who looks into the source code, i.e. a search engine.

What is this structured data about?

It is about the structured markup of data sets, especially for people, companies and address data.

A classic example is the website footer:

The address and contact data of the company are often placed there, and it is clear which data is to be expected: company name, street, zip code, city, telephone, email, etc. That is why exactly this data can also be marked up with the corresponding tags. If "Stromberg AG" is the company name and "Hengasch" is the location, then that is unambiguous and Google does not have to wonder whether it is a mountain, a television series or a body part. Google has not yet clearly stated that schema.org has a noticeable relevance for the ranking, but it can be assumed that this is already the case - and in the medium term it can be assumed that it will play an even greater role as a ranking factor.
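
A minimal sketch of such a footer using schema.org microdata (street, zip code, telephone and email are invented placeholders):

  <footer itemscope itemtype="https://schema.org/Organization">
    <span itemprop="name">Stromberg AG</span>
    <div itemprop="address" itemscope itemtype="https://schema.org/PostalAddress">
      <span itemprop="streetAddress">Musterstrasse 1</span>,
      <span itemprop="postalCode">12345</span>
      <span itemprop="addressLocality">Hengasch</span>
    </div>
    Tel. <span itemprop="telephone">+49 2222 000000</span>,
    <span itemprop="email">info@example.de</span>
  </footer>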

Duplicate content / overview pages

It has long been known that text content that appears in identical wording in different places on the World Wide Web is rated negatively by Google and must therefore be avoided at all costs. This is entirely logical and consistent: only if this content exists in a single place can one say with certainty which version is the original. References and links to this original can come from a thousand other places - better those than a copy.

What does it look like in practice and why does duplicate content arise in the first place?

A short answer is: because copy-paste is easier than writing your own texts. But there are also more complex answers.

Sometimes content appears unintentionally several times, or a website operator is not even aware that Google sees the content more than once. For Google, "www.smart-interactive.de" and "smart-interactive.de" are two different websites. If the web server is set up in such a way that both web addresses lead to the same web presence, the entire website is duplicate content - a big point deduction for what is basically only a minor technical oversight. There are no rules as to how each hosting provider deals with this phenomenon, i.e. whether one variant is redirected to the other by default or whether you have to take care of it yourself. It is therefore important that the programmer checks this manually and adjusts it if necessary.
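
On an Apache web server this is typically solved in the .htaccess file via the rewrite module - a sketch, assuming mod_rewrite is available (the domain is the one used as an example above):

  RewriteEngine On
  # Redirect the non-www host to the www host with a permanent (301) redirect
  RewriteCond %{HTTP_HOST} ^smart-interactive\.de$ [NC]
  RewriteRule ^(.*)$ https://www.smart-interactive.de/$1 [R=301,L]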

Contents can also appear several times within a web presence.

Classic case: news blog.

In addition to the actual page with the complete news article, there is at least one place - usually several - from which this article is teased and linked, first of all the news overview page.

Here the title and a teaser text appear, usually two or three sentences, which are then also found word for word on the article page itself - strictly speaking, that is duplicate content! And if the article is teased elsewhere as well, e.g. on the home page or under another news category, even multiple times.

Fortunately, Google does not seem to be particularly strict about this. It was once discussed whether it would result in a noticeable point deduction. In practice, however, this can hardly be taken into account: who, having finally written a blog article including a proper teaser text, writes this teaser text in three further variants? Quite apart from the fact that the technical prerequisites would have to exist first. A CMS usually generates the teaser automatically from the opening text of the article, which is of course always identical. You would therefore need several input fields for the individual teasers, and we have never seen such a CMS from the usual software vendors. (We did, however, once program something like that ourselves - at a time when it was still unclear whether Google "wants it that way" or whether this special case would be rated as "bad duplicate content" after all...)

It is more important that the news article is always reachable under exactly one URL, no matter from where it is accessed. So not under "www.hallo.de/kategorie-eins/toller-artikel/" and then again under "www.hallo.de/kategorie-zwei/toller-artikel/" - either ... or. WordPress, for example, knows this and ensures that it does not happen. For self-built projects you should double-check this.

Again on the topic of news overview pages:

These represent a special case in yet another respect: their dynamism. Apart from the fact that the teased content changes constantly because new articles are constantly being added (at least that is how it should be), the number of overview pages also changes.

Although the first page (www.hallo.de/kategorie-zwei/), the second (www.hallo.de/kategorie-zwei/seite-2) and the third (www.hallo.de/kategorie-zwei/seite-3) all do exactly the same thing in principle - namely tease content without displaying it in full - it is useful for a search engine to know that it is dealing with such an overview or follow-up page, so that it does not try to rate this page according to the normal content criteria.

This is communicated via the so-called canonical tag, and via corresponding meta tags (noindex, nofollow) in the forward/back links between these pages. WordPress does not do this entirely automatically: if you have installed the Yoast plugin, which is recommended, you only have to remember to tick the appropriate box among its numerous settings. Then it fits - unless the theme developer has configured something else.
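
A sketch of what this can look like in the head of page 2 of an overview (URLs follow the example above; whether you use "noindex, nofollow" as described here or the milder "noindex, follow" is a question of strategy):

  <link rel="canonical" href="https://www.hallo.de/kategorie-zwei/">
  <meta name="robots" content="noindex, nofollow">
  <link rel="prev" href="https://www.hallo.de/kategorie-zwei/">
  <link rel="next" href="https://www.hallo.de/kategorie-zwei/seite-3">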

If you then need to redirect a page to another one - because a service offering has been dropped or a URL path contains typos - this is technically possible via the so-called rewrite module, via an .htaccess file. Important for Google here: it must be a 301 redirect. Programmers know that.
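
A minimal .htaccess sketch for such a permanent redirect (the paths are invented; again assuming an Apache server):

  RewriteEngine On
  # 301 = moved permanently; the old path is redirected to the new one
  RewriteRule ^altes-angebot/?$ /neues-angebot/ [R=301,L]

  # Alternative without the rewrite module (mod_alias):
  Redirect 301 /altes-angebot/ https://www.example.de/neues-angebot/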

Page load speed - "Pagespeed"

A wide field, with many knobs to turn, each with a different impact and a different level of effort. It deserves attention all the more because page speed has become an official ranking factor at Google. It is an overall technology-heavy topic, but as so often, other aspects play a role too. A simple web design that can be implemented with a minimum of HTML and CSS code naturally makes a page faster. If design and conception call for few and only small images - e.g. because the message of the website cannot be expressed in pictures, or because there are no (usable) pictures - the biggest speed hog is already switched off in advance. And if you can do without JavaScript / jQuery completely because, for conversion reasons, this project does not need slideshows and other smart effects, then it is hardly worth thinking about further optimizations, because the page is fast enough as it is. But on the one hand the requirements are rarely that spartan right from the start, and on the other hand there is always a tenth of a second somewhere that can be shaved off with very little effort.

Okay, we are web developers and want to tackle this - where do we start? Either with the usual suspects, some of which have already been mentioned (images, lean source code, ...), or directly with the recommendations of Google's PageSpeed tool. With it you can have a website analyzed online, receive recommendations for action and then consider how consistently you implement them, i.e. what you are willing to forgo, if in doubt, in order to get the number of open issues down to zero. Incidentally, it is not a given that failing to reach zero has a measurable impact on the ranking - the green zone is enough.

Let us rather start with the usual suspects, beginning with the images already mentioned.

The higher the demands on the images - filling the entire width of the monitor and still of high quality - the more it pays to invest effort here. First measure: go down to 72 dpi if the resolution is higher. Next: try out how far you can reduce the quality setting of a JPG; as a rule of thumb you can go down to 80% without any visible loss of quality, but with significant savings in file size. Either you leave it at this manageable amount of effort, or you keep turning the screws. Photoshop will not get you much further at this point, because the tool is geared more towards quality (for print) than towards efficiency (for the web). From here on you can try other programs that specialize in small file sizes, but you have to think about your own priorities when selecting and configuring them, e.g. whether you are willing to drop the IPTC metadata. Jpegmini is worth trying out; it promises size reductions of up to 70% without loss of quality.

Another fundamental cornerstone is the web server

For four to six EUR per month you can get a web hosting package that can basically do almost everything you need (PHP / MySQL database, htaccess, ...), but this is usually a shared hosting package whose performance depends on its neighbors and their resource consumption. It is better to invest directly in your own server variant (dedicated server), starting at around EUR 20 per month. Depending on the scope of the project, e.g. in the case of shop systems, it is worthwhile not to rely on a standard package but to talk to the provider's technical support about more individual solutions, with a correspondingly higher financial outlay.

What you as a programmer can influence in any case is the structure of the source code

In general: keep it as lean as possible, follow a few basic rules, and keep a few more things in mind. Include jQuery at the end rather than at the beginning if possible, compress jQuery and CSS files as much as possible (minify), and avoid spaces, blank lines and comments where possible. If a CMS is used (which is usually the case), things get a little more confusing, because themes, plugins (WordPress) and extensions (TYPO3) come into play that are often not geared towards loading speed. If you have just installed three plugins, the page is often only half as fast as before, because 13 additional jQuery files have been included in the header. There are also page speed plugins, some of which do a good job, but by then the resolution to use as few plugins as possible - in order to keep the potential for conflict low - has already been thrown overboard.
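
A sketch of the basic skeleton these rules lead to (the file names are invented; ".min" stands for minified files):

  <!DOCTYPE html>
  <html lang="de">
  <head>
    <meta charset="utf-8">
    <title>Example page</title>
    <!-- One minified stylesheet instead of many small ones -->
    <link rel="stylesheet" href="/css/styles.min.css">
  </head>
  <body>
    <!-- ... content first ... -->

    <!-- jQuery and other scripts at the end of the body so they do not delay the content -->
    <script src="/js/jquery.min.js"></script>
    <script src="/js/main.min.js"></script>
  </body>
  </html>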

Mobile SEO and responsive design

Since April 2015, the mobile-friendliness of a website has also been an official ranking factor at Google. Its exact impact cannot yet be precisely quantified, but it will play an increasing role - even for those website operators whose Google Analytics data confirms that less than 5% of visitors come via smartphones and that these do not belong to the specifically defined target audience. Here, too, Google provides its own analysis tool with recommendations for mobile optimization, which should not be followed slavishly but can certainly be taken to heart.

In any case, you cannot avoid the topic: either you ignore it deliberately, for which there may be (admittedly rather defiant) reasons (target group, effort), or you take it on. And of course it is much easier to design a new project mobile-friendly from the start than to retrofit an existing one for mobile. Either way, there are different approaches. The most prominent keyword in connection with mobile-friendly websites is "responsive", but that does not quite get to the heart of the matter. Responsive means that a website should fit all possible screens - including smartphone screens, but also large desktop screens. The range of devices has become unmanageable, and no end is in sight; there are countless options to consider when conceiving and designing a website, which is hardly possible in a single planning step. Far more coordination and plan-B-to-plan-X steps are necessary than "back then" (three to four years ago), which also makes it necessary to rethink the usual workflow within the project team.

One way is to make greater use of templates in which the preparatory work has already been done and all possible options have been taken into account. For responsive design, these are frameworks such as Bootstrap and Foundation, which provide a dynamic HTML and CSS scaffold. However, as already mentioned, responsive is not the same as a mobile version; there are other approaches, so it can make sense to think about a dedicated mobile version, because the content structure and design of a desktop website cannot simply be transferred 1:1 to a smartphone screen. If a CMS is in use, there are, for example, a large number of plugins for WordPress with which you can get a mobile version on the screen very quickly. Basically: the more individual your theme and template development for the desktop version, the more effort you have to put into doing the same for the mobile version. As usual, individuality means effort. Google does not really care whether it is a responsive framework or a WordPress plugin, the main thing is that the Google criteria are (largely) met.
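
Regardless of which route you take, the two most basic building blocks of a responsive page look roughly like this (the class name and the breakpoint are invented examples):

  <meta name="viewport" content="width=device-width, initial-scale=1">

  <style>
    .teaser { width: 50%; float: left; }      /* two columns on large screens */
    @media (max-width: 600px) {
      .teaser { width: 100%; float: none; }   /* one column on small screens */
    }
  </style>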

Google Search Console (previously: Google Webmaster Tools) and sitemaps


If you are not worried that the data generated and distributed via your own website could end up being used where it was not intended (Google is famously not a data protection association!), it is advisable to set up a Google account and use the services, especially the Google Search Console (previously called Google Webmaster Tools). There you can not only tell Google some details about your own website, but also receive feedback and recommendations from Google, e.g. when tracking down broken links and 404 errors. With dynamic websites whose content changes and is updated regularly, such errors are almost inevitable, and since a low error rate is a ranking factor for Google, it is worth checking in regularly and tidying up. This also includes a sitemap in the form of an XML file, which reflects the website structure and makes it easier for Google to index it. This is not to be confused with the sitemap that website visitors often find as a separate page, giving an overview of the entire website structure, which makes sense especially for extensive websites. That sitemap, however, is interesting not only for people but also for Google, because it contains internal links to all website content - so it is an SEO factor too.
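
A minimal sketch of such an XML sitemap (URLs and dates are invented; CMS plugins such as Yoast typically generate this file automatically):

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>https://www.example.de/</loc>
      <lastmod>2016-05-01</lastmod>
    </url>
    <url>
      <loc>https://www.example.de/kategorie-eins/toller-artikel/</loc>
      <lastmod>2016-04-20</lastmod>
    </url>
  </urlset>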

And if you want to know more about the behavior of your website visitors in order to gain insights for your marketing strategy, it is worth using Google Analytics. Although this does not give the ranking a direct boost, the findings can help to focus the content in a way that both visitors and Google rate positively.

For local companies, Google My Business offers another way of positioning themselves in the search results. You do not need a programmer for this, but it should be mentioned here for the sake of completeness - and because this topic is regularly handed over to the technical department anyway.

Conclusion

SEO in itself is a broad field, and technical SEO is only one part of it, but it is worth at least a look, because a few important measures can already achieve a lot. And if you want to achieve even more, you can always dig deeper into the next topic. Those who prefer to deal with image quality and size will find enough room for experiments on that front; those who prefer to tinker with the web server or the jQuery code do just that. The holistic, complete approach is of course still the best, but it is not only the web programmer who has to deal with it: as we have seen, the transitions to the other disciplines are fluid, right up to the website owner himself. And anyone with enough curiosity who brings the time to be advised on SEO - we are of course happy to help; the topic is definitely exciting.

Stephan Czysch, Benedikt Illner and Dominik Wojicek from the SEO specialists Trust Agents thought so too and wrote a book about it: "Technical SEO - with sustainable search engine optimization for success". We (smart interactive) still like to pick up current recommendations from Stephan Czysch and his colleagues; there are plenty of other topics around websites and web marketing that we can deal with in the meantime - e.g. conversion.