How We Improved Core Web Vitals & What Correlations We Found [Case Study]

Core Web Vitals have been part of Google’s ranking algorithm for almost a year. And all the while, we’ve been working hard on optimizing link-assistant.com for this ranking factor.

After months of work and all the interesting challenges we faced, we can finally see the results coming in. And we can’t help but share our major takeaways with you in our own Core Web Vitals case study.

Keep reading this Core Web Vitals case study to learn how we improved Core Web Vitals, and how that helped us recover from a sudden rank drop.

What are Core Web Vitals, in brief

Core Web Vitals (CWV) are a set of metrics that help Google assess the overall performance of web pages.

These three metrics are:

  • Largest Contentful Paint (LCP)
  • First Input Delay (FID)
  • Cumulative Layout Shift (CLS)

Each of the three Core Web Vitals represents a distinct aspect of the user experience.

Together with other web vitals, LCP, FID, and CLS are part of Google’s Page Experience signals that are used for ranking purposes.

CWV measure three aspects of a web page – loading, interactivity, and visual stability.

The first Core Web Vital is LCP (Largest Contentful Paint). The LCP metric measures the time it takes for the largest image or text block within the default viewport to become visible from the moment the page starts loading. Google's current benchmark for Largest Contentful Paint is under 2.5 s.

Google's second Core Web Vital is FID (First Input Delay). The FID metric evaluates page responsiveness and measures the time from a user's first interaction with a page (a click, tap, or keypress) to the moment the browser can start processing it. A good First Input Delay is anything less than 100 ms.

Last but not least, the CLS metric measures the visual stability of a page. If elements of a page shift around while it loads, the CLS score will be poor. A good Cumulative Layout Shift score is equal to or less than 0.1.
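
If you want to see these metrics for real visits to your pages, Google's open-source web-vitals JavaScript library can log them right in the browser console. Here's a minimal sketch (the CDN URL below is just one common way to load version 3 of the library):

<script type="module">
  // Log each Core Web Vital for the current visit
  import {onLCP, onFID, onCLS} from 'https://unpkg.com/web-vitals@3?module';
  onLCP(({value}) => console.log('LCP:', value, 'ms')); // good: under 2,500 ms
  onFID(({value}) => console.log('FID:', value, 'ms')); // good: under 100 ms
  onCLS(({value}) => console.log('CLS:', value));       // good: 0.1 or less
</script>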

Tip

Since this is a case study, I won't dive into too much theory. However, if you want to dig deeper, make sure to check this comprehensive post about all three Core Web Vitals. You can also read this one to learn more about Core Web Vitals optimization tips.

How Core Web Vitals correlate with organic impressions and clicks

Read a few Core Web Vitals case studies and you'll quickly notice a lot of contradictory information on how strongly a good CWV score correlates with organic impressions and clicks.

I've purposely put this section right at the beginning of the study to showcase how CWV optimization worked for us, and why we made it one of our top priorities for the coming year.

Tip

The CWV report in Google Search Console caps available data at 3 months. So to track our progress more conveniently than Google's tools allow, we set up an integration between Google Search Console's API and Microsoft's Power BI. This way, we managed to combine all datasets in a single dashboard and collect all the historical data without any limits.

Link-assistant.com is a relatively old website with tons of pages. Unsurprisingly, most of them used to have all kinds of page performance issues. All these issues were flying under our radar up until June 2021, when we experienced a significant drop in organic clicks.

A significant drop in clicks in June 2021 pointed us to possible issues with Core Web Vitals

The drop only worsened over the course of the following months, so finding the reasons behind it, and possible solutions, was a must for us.

One of the hypotheses our SEO team suggested was that the drop could be connected to Core Web Vitals. So we rushed to analyze our LCP, CLS, and FID to see where we could improve, and how organic clicks and impressions would respond.

In October 2021, we started making the first adjustments aimed at the optimization of CWV on link-assistant.com. 

What we saw during the first months of optimization was steady growth in organic clicks (blue graph).

This was a good indicator that we were moving in the right direction and that Core Web Vitals most likely correlated with organic clicks.

The period of growth was later followed by a noticeable drop in May 2022.

As our research showed, this drop came right after our CWV worsened.

In the screenshot below, you can see the mark pointing to the moment when a portion of "green" URLs turned "amber", and the subsequent drop in clicks.

If we compare absolute numbers, the week that followed the spike in the Core Web Vitals report (graph 19) brought a whopping 30% decrease in organic clicks.

The issue with “amber” URLs lasted until May 24.

As we suspected, when this portion of URLs rolled back to the “green” zone, organic clicks did the same (graphs 22-25).

We can't say for sure that this happened exclusively due to the improved CWV. But the correlation seemed convincing enough for us to believe so.

What’s more, we found a remarkably vivid correlation between organic impressions and a good Core Web Vitals score. Take a look at the chart below.

Entry point

The whole CWV optimization process for us started with checking our status quo in Google Search Console.

The results were about as disappointing as they could be:

Important note

In addition to Search Console, you can use Google’s PageSpeed Insights to check CWV on your website.

However, it’s important that you understand the key difference between those two and how they assess CWV.

Search Console is powered by field data, namely the Chrome User Experience Report (CrUX), which means that CWV scores are based on how real users experience your pages. This data is aggregated over a trailing 28-day window, so it can take up to 28 days for your performance improvements to be fully reflected.

PageSpeed Insights determines its scores based on lab data – a page load simulated under fixed device and network conditions. You can check any of your pages right here and right now.

Thus, PageSpeed Insights is good when you want to get a quick estimate of what’s going on with CWV on a given page.

To get an actual picture of your CWV scores, use Search Console.

We then used WebSite Auditor to bulk-check all our pages against separate metrics (LCP, FID, CLS).

The major Core Web Vitals metrics that needed to be improved were LCP and CLS:

The initial checkup clearly demonstrated that there was a lot of room for improvement, both on desktop and mobile.

So without further ado, we rushed to rehabilitate ourselves in the eyes of Google and turn all the reds and yellows green.

When we first learned about the three Core Web Vitals, their optimization seemed like a trivial task to us. 

You consult the Lighthouse report, PageSpeed Insights, and Search Console and follow the recommendations they give you. What could possibly go wrong?

It was only a few weeks later that we found there could be hundreds of issues leading to poor CWV scores.

So, to share our experience and save you tons of time, I've done my best to collect everything that has been done in an easily digestible format.

0. Setting up geo-specific servers & CDNs

– Affected metrics: TTFB, LCP –

Server response time is crucial and can render all CWV optimization efforts null if not taken care of in advance.

One of the metrics that help Google assess a server's response time is called Time to First Byte (TTFB), and though it's not a Core Web Vital itself, it's part of the wider set of Web Vitals.

TTFB measures the time between when a user requests a page and the moment the first byte of the response arrives. As a rule, TTFB should be under 600 ms.

A poor TTFB directly drags LCP down, so making it as low as possible is crucial.
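
For a quick spot check, the Navigation Timing API reports TTFB for the current page load right in the browser console – a minimal sketch:

<script>
  // Time to First Byte for the current navigation, in milliseconds
  const [nav] = performance.getEntriesByType('navigation');
  console.log('TTFB:', Math.round(nav.responseStart - nav.startTime), 'ms'); // aim for under 600 ms
</script>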

In most cases, the TTFB score can be improved by simply switching to a better hosting provider. However, that was not the case with link-assistant.com. 

Link-assistant.com serves a worldwide audience, and Google uses Real User Monitoring (RUM) data to determine if a specific page meets Web Vitals thresholds. This means that if TTFB is good for, say, the USA and bad for India, the latter will eventually drag the final TTFB score down.

Before CWV made so much buzz, we used to have only one server location, in the US. Obviously, that was not enough for a website with such a multinational audience:

So it was not too much of a surprise to see such results of the first TTFB assessment:

The first TTFB assessment showed our weak spots in all locations but the US ones

As expected, the TTFB test went well for the US locations, while all other regions returned enormously high numbers, with India topping the list.

To solve this issue, we set up an additional US-based server and added two new ones in Asia and Europe.

The locations of the new servers

Adding a new server in Europe helped big time: we managed to cut TTFB there by up to 80%.

At the same time, the tests revealed that there were still issues with TTFB in Asia. Even after setting up an additional server there, TTFB was good only in three locations. Such countries as Japan, Korea, and Australia were still lagging behind.

This led us to hypothesize that the traffic coming from India was too heavy for a single server: the server was most likely overloaded, resulting in poor TTFB for all locations other than India. So we added two new servers – one in Osaka, and the other in Singapore.

Since the additional servers were added only a week or so ago, we will have to wait for the final results to kick in. However, an improvement can already be seen for most locations.

The same problem with TTFB was observed in the Americas.

Not only did the TTFB scores not improve for the US-based locations, but there were also enormously high server response times for users in Brazil and Chile.

Here, we made a few conclusions.

First, our organic traffic in the Americas is as heavy as it is in India, so two servers are most likely unable to handle the flow without compromising TTFB.

Second, one of our servers was hosted with Wowrack. Our dev team found out that this very server was heavily overloaded, had pretty outdated specs, and was located in the infamous "oldest Internet" zone on the US West Coast – all of which led to significant connection lags between users and the server.

As the final solution, we recently set up an additional AWS server, so now we have three servers in the US. The results of these improvements are not significant so far, but we'll keep an eye on them and update this study later on.

Overall, the optimization of TTFB for link-assistant.com allowed us to significantly improve this metric, which, in turn, resulted in LCP improvement as well. A win-win.

Additionally, once we were certain that TTFB was good enough, we made sure all our static assets (images, CSS, JavaScript) were offloaded from our servers to CDNs. Though its effect is indirect, keeping static assets on CDNs helps improve TTFB too, because the servers are no longer burdened with extra requests for images, CSS, or JS.
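
In practice, offloading simply means referencing assets from the CDN hostname instead of the origin server. A sketch (the file path here is hypothetical):

<img src="https://cdn1.link-assistant.com/images/screenshot.png" width="640" height="360" alt="Product screenshot">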

Note

Acquiring additional servers is a pricey solution, and it's not necessary in every case. If your website serves a local audience, you can safely go with one server close to your target audience. If you encounter issues with TTFB, try less radical solutions first, such as changing your hosting provider. If the issues persist, consult this article from Google.

1. Deferring third-party JS

– Affected metrics: LCP –

Third-party JavaScript is anything from Facebook share buttons to Google Analytics trackers on a page. These elements can consume tons of rendering resources. Plus, each time a browser encounters such a script, it stops parsing the HTML until the script is fetched and executed. All this eventually leads to a poor LCP.

What we did first was analyze what render-blocking JS we had on our pages.

The research showed that the major blockers were social sharing buttons, Facebook comment blocks, YouTube embeds, and Sleeknote, which we used for pop-ups.

For instance, embedded YouTube videos alone accounted for up to 30% of potential LCP time savings:

Though it was not compulsory, we got rid of all third-party scripts where possible, for the sake of better LCP scores.

At the same time, there were pages where we wanted to keep social sharing buttons, Sleeknote, or GA trackers.

To keep these page elements without destroying our LCP, we moved them out of the critical rendering path. This was done by adding one of the following attributes to the <script> tag:

  • Defer. This attribute tells the browser to download the script in parallel with HTML parsing and execute it only once the document has been fully parsed. We used it for all scripts that could run at the very end of the rendering path (social sharing buttons):
    <script id="facebook-jssdk" src="https://connect.facebook.net/en_US/sdk.js" defer></script>
  • Async. This attribute tells the browser to download the script in parallel with parsing and execute it as soon as it's ready. We used it for scripts that were sensitive to delayed loading (Google Analytics):
    <script async src="//www.googletagmanager.com/gtm.js"></script>

Just in case, here’s a visual explanation of the defer and async attributes:

It's worth noting that you should avoid the defer attribute and use async for tracking scripts. In our case, we were missing almost 15% of data on Goals in Google Analytics after we moved the tracking script to the end of the rendering path.

Note

The same technique was also applied to our own JS. We went through all the JavaScript used on our pages and analyzed which pieces were irrelevant, so we could get rid of them, and which needed to be kept – in most cases, we made the latter load asynchronously with the help of the async attribute.

2. Rationalizing font usage

– Affected metrics: LCP, CLS –

While web fonts are key to appealing design, better readability, and branding, they are also heavy files that can take a while to load and harm both LCP and CLS.

If a third-party font is used for a piece of text that is the LCP element, the LCP score can suffer, because it takes the browser extra time to fetch and load the font.

When it comes to Cumulative Layout Shift, the major issue with third-party fonts is that before a given font is loaded, the browser displays a system font instead. Once loaded, the third-party font may take up more screen real estate, shifting the layout, hurting the visual stability of the page, and leading to a poor CLS score.

Here's what a layout shift looks like

Before Core Web Vitals became a thing, we had many different fonts on a single page. 

Sometimes, these fonts weren't even used but were still loaded, like in the screenshot below.

The Roboto font was used exclusively in a pop-up tooltip, which wasn't visible on mobile devices – yet the font was still loaded there.

We also used a lot of external fonts, like Google Fonts, which were hosted outside our servers.

So the first thing we did was get rid of all external fonts and switch to system ones (Arial, Helvetica, Verdana, Tahoma, etc.). This simple solution worked extremely well for us, and we saw a great improvement in both LCP and CLS scores.

We also understood there might be cases where we'd still need a third-party font. For those, we self-host the font on our servers and preload it in the <head> section of the page's HTML.
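
Here's a minimal sketch of that setup (the font file name is hypothetical): the preload link fetches the font early, and font-display: swap shows a system font until the custom one arrives.

<link rel="preload" href="/fonts/custom-font.woff2" as="font" type="font/woff2" crossorigin>
<style>
  @font-face {
    font-family: "CustomFont";
    src: url("/fonts/custom-font.woff2") format("woff2");
    font-display: swap; /* show a fallback font while the custom font loads */
  }
</style>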

As the finishing touch, we also made sure we didn't use any icon fonts – fonts that replace schematic images, such as the magnifying glass used for search bars.

Such icons were replaced with SVG images hosted on our servers, so the browser doesn't need to fetch them from external sources each time.

Important note

Self-hosting web fonts may not help improve LCP if your site doesn't use a CDN and HTTP/2. Try experimenting with both self-hosted and third-party fonts to see what works best for you.

3. Extracting critical CSS & JS

– Affected metrics: LCP –

CSS is a render-blocking resource, which means a page cannot be rendered until the browser fetches and parses all of its CSS files. If the CSS is heavy, it takes a long time to load, which directly hurts the LCP score.

In our case, it was not only the size of the CSS we used. 

Before the optimization work started, we had one huge CSS stylesheet of over 70,000 lines that was shared by all pages. The same heavy CSS was loaded for every page, even if most of it wasn't used there.
  
To solve this, we first consulted the Coverage report in Chrome DevTools.

We then reviewed the contents of that CSS file and got rid of all the irrelevant lines. This allowed us to significantly cut its size and the percentage of unused bytes.

Note

To cut the size of our CSS files even more and make sure they don’t negatively affect our LCP, we also ran them through the CSS minification process.

CSS stylesheets often contain unnecessary comments, spaces, and line breaks – getting rid of these can cut the final file size by up to 50%.

We used CSS Minifier to minify our CSS, and JSCompress to do the same for render-blocking Javascript.

Eventually, our dev team developed a dedicated tool that now handles minification automatically.

Additionally, just as with JS, there was no need to load the same huge CSS for every single page.

So we first extracted only those styles required for the above-the-fold area of a specific page and added them to a <style> tag in the <head> section.

To see a live example, you can check the source code of Rank Tracker’s page:

Critical CSS is placed in the style tag of the head section

Second, the remaining, non-critical CSS was made to load asynchronously so that it wouldn't block rendering.
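
Since <link rel="stylesheet"> doesn't support the defer attribute the way scripts do, non-critical stylesheets are usually deferred with a trick like the one below (a sketch; the file name is hypothetical). The stylesheet is requested with a non-matching media type, so it doesn't block rendering, and is switched on once it has loaded:

<link rel="stylesheet" href="/css/non-critical.css" media="print" onload="this.media='all'">
<noscript><link rel="stylesheet" href="/css/non-critical.css"></noscript>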

All this helped us significantly boost our LCP scores, which eventually made our pages load faster and improved the way users interact with our website.

4. Compressing HTTP content

– Affected metrics: LCP –

Another technique that greatly helped us with improving LCP for link-assistant.com was compressing HTTP content transferred between our servers and the browser.

There are several compression algorithms available – gzip and Brotli being the most popular. As a rule, both can compress CSS, JavaScript, HTML, and SVG files.

A few years ago, gzip was the gold standard for compression. Today, Brotli is considered more effective than gzip, typically producing smaller files.

Since we had been using gzip exclusively to compress our HTTP content, we first implemented Brotli to see how it would affect our LCP scores. We saw significant improvements and decided to switch from gzip to Brotli.

At the same time, there were still browsers that didn’t support Brotli compression:

What we did was keep two compressed versions of the content for each page, serving the appropriate one based on the Accept-Encoding header sent by the user's browser. As you can see from the screenshot above, 96% of browsers now support Brotli, so most of our pages are served with this type of compression.

5. Optimizing images

– Affected metrics: LCP, CLS –

Images that are not optimized properly will most likely negatively affect Core Web Vitals in several ways:

  • If the image on a given page is the LCP element and it's heavy, uncompressed, or served in an inefficient format, the LCP score will inevitably be poor. The same will happen if all images are loaded simultaneously.
  • If any image on a page doesn't have its dimensions specified, the visual stability of the page – and with it the Cumulative Layout Shift score – will suffer.

Below, I've described the exact steps we took to make sure that every single image on our website is well optimized.

Compressing images

To avoid situations where enormously heavy images cause page speed issues and consume tons of data, we first had to locate those images on link-assistant.com.

To do this, we consulted the Core Web Vitals report in WebSite Auditor and checked if there was any room for reducing the size of existing images. As a result, we did see that many of our images could be compressed more efficiently.

For compression purposes, we used pngquant and a bash script for compressing PNG/JPEGs with TinyPNG.

Using these two solutions, we were able to cut image sizes by up to 50%.

Later on, we incorporated the TinyPNG bash script into our own utility, which now handles image compression sitewide automatically.

Choosing the best image format

To save on image size, we also looked for a better, more efficient format for our existing images. Sitewide, we have far more PNG images than any other format.

For testing, we chose the WebP format, which, according to Google, is more efficient. However, after converting our PNG images to WebP, we discovered that in most cases the files actually got bigger. So we ended up keeping PNG as our standard image format.

Note

As of today, WebP is supported by most modern browsers. Still, there are a few that don’t support this image format yet.

We also had a few pages with JPG images, so we tried converting those to WebP as well. This time, we did see a significant decrease in file size. But interestingly, the smaller images didn't affect our LCP score in any way, so we decided to keep the existing formats.

While the dilemma of image formats is still unresolved, we keep experimenting with various formats and will update this case study as soon as we have something to share.

Setting image dimensions

When image dimensions are not specified in the code, the browser needs some time to size the images on a page properly. This leads to layout shifts and hurts the CLS score.

While this was a common issue on many of our pages, it was an easy fix that instantly translated into improved CLS scores.

The dimensions are set right inside the <img> tag, like this:

<img src="pillow.jpg" width="640" height="360" alt="purple pillow with flower pattern"/>

At first, we set the dimensions manually for each image. Then our dev team added this functionality to the utility that compresses images, so this part of image optimization is now automated as well.

Note

In some cases, specifying the image dimensions alone may not be enough to eliminate layout shifts.

This was the case with one of Rank Tracker’s pages on mobile devices.

The logo was causing a layout shift

The logo was causing a layout shift, so we first set the dimensions for the image. That didn't work – CLS was still poor. We then set the dimensions for the container the logo was placed in, and that did the trick.
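
Here's a rough sketch of that fix (the class name and sizes are hypothetical): reserving space on the container guarantees the layout can't shift even if the image itself arrives late.

<style>
  .header-logo { min-width: 160px; min-height: 48px; } /* reserve the logo's space up front */
</style>
<div class="header-logo">
  <img src="logo.svg" width="160" height="48" alt="Company logo">
</div>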

Deferring offscreen images

To secure good LCP scores throughout link-assistant.com, we made sure that offscreen images didn’t create network loading contention for key resources.

To do this, we made all images that were not critical for the above-the-fold content of web pages lazy-loaded.

To enable lazy-loading for images, it’s usually enough to specify that in the <img> tag, like this:

<img src="pillow.jpg" width="640" height="360" loading="lazy" alt="purple pillow with flower pattern"/>

Likewise, all images visible within the default viewport were made to load at the beginning of the rendering path. If you try to trick Google and lazy load an image that is the LCP element, Lighthouse will eventually flag it in its report:

Where it was possible, we also reduced the number of heavy background images for mobile devices, thus improving our mobile LCP score.

Serving responsive images

Last but not least in our image optimization list was serving responsive images depending on a user’s viewport width.

The srcset attribute lets you provide multiple resolutions of the same image and have the browser serve the one that best fits the user's screen. This helps improve the LCP score, especially on mobile devices, where a lighter image can be loaded instead of a heavy desktop one.

The srcset attribute is now applied for most images on link-assistant.com. The implementation looks as follows:

<img src="image1.png"
     srcset="image2.png 100w,
             image3.png 200w,
             image4.png 300w"
     sizes="(max-width: 200px) 100px, (max-width: 300px) 200px, (max-width: 400px) 300px"
     width="500" height="380"
     alt="" decoding="async">

The above piece of code lists alternative sources for the same image (srcset), with each candidate's intrinsic width declared by a w descriptor (100w, 200w, 300w). The sizes attribute tells the browser how wide the image slot will be at a given viewport width, so it can pick the smallest candidate that will still look sharp. E.g., on a viewport up to 200 px wide, the slot is 100 px, so the browser loads image2.png.

6. Making final adjustments

– Affected metrics: LCP, CLS, TTFB –

In addition to everything described above, we also made a few minor adjustments sitewide. Though less prominent, the activities listed below did help improve our CWV, so I thought they were worth mentioning as well.

Reducing DOM size

The Document Object Model (DOM) is a tree representation of a web page that the browser builds as the page loads.

Each element of a page's HTML (body, div, H1, etc.) creates a node in the DOM tree. So every time a new element (a link, image, or paragraph) is added to a page, its DOM gets bigger.

A large DOM tree negatively affects the page loading time, and thus the LCP metric.

Throughout optimization, we encountered a few cases where the DOM size could be reduced.

For instance, on the page with all of Rank Tracker's updates, we stopped loading all 500+ updates at once and made only the 10 most recent visible. The remaining updates now load upon user request, after a click on View older updates (a sketch of this pattern follows below).

This minor fix helped us significantly reduce the DOM size for this and all similar pages.
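
A rough sketch of that load-on-demand pattern (the endpoint and markup are hypothetical):

<ul id="updates">
  <!-- only the 10 most recent updates are rendered into the initial HTML -->
</ul>
<button id="view-older">View older updates</button>
<script>
  // Fetch the remaining updates only when the user asks for them
  document.getElementById('view-older').addEventListener('click', async () => {
    const response = await fetch('/rank-tracker/updates?offset=10');
    document.getElementById('updates').insertAdjacentHTML('beforeend', await response.text());
  });
</script>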

Optimizing database requests

The older a site is, the more database queries it accumulates. Over time, many of our queries had become outdated and needed to be optimized or completely rewritten. So we did a complete refactoring of our SQL queries. In some cases, we were able to cut query execution time by 95%. This helped us improve our TTFB and, eventually, LCP scores.

Enabling caching of static elements

To reduce network contention, we also enabled caching of all static elements, such as images, CSS, and JS. This way, the browser doesn't have to download the very same page assets multiple times.

Additionally, where possible, we reviewed our cache policy and extended the lifetime for cached elements.

Eliminating bootstrap.css

Bootstrap is a popular CSS framework that helps with website development. At the same time, Bootstrap often comes with a lot of excess CSS that can cause page speed issues.

In our case, the Bootstrap stylesheets were not only destroying our LCP scores, but most of their rules also went unused.

The decision was to extract only the rules we really needed and get rid of the rest of bootstrap.css. That did help us improve our LCP.

Pre-connecting to third-party resources

As a rule, establishing connections, especially secure ones, takes a lot of time: it requires a DNS lookup, a TLS handshake, a key exchange, and several round trips to the server that handles the user's request.

To save this precious time, we added preconnect hints for resources such as our CDN and Google Tag Manager.

To pre-connect our website to a third-party source, we simply added a link tag to our page’s HTML:

<link href="https://cdn1.link-assistant.com" rel="preconnect">

Preloading key resources

Preload links helped us cut the LCP time for background images.

For instance, our Backlink API page has this background image with gradients:

Before, we used to lazy load such background images via the <img> tag. We then tried preloading the image with a <link rel="preload"> tag inside the <head> section. And that worked extremely well for us.
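
A minimal sketch of such a preload (the file path is hypothetical) – the as="image" hint lets the browser assign the fetch the right priority:

<link rel="preload" as="image" href="/images/gradient-background.png">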

We also use preload links now for any fonts that are not system defaults.

7. Profiling the code

– Affected metrics: LCP, CLS, TTFB –

After months of work on CWV, our dev team was unpleasantly surprised by how many obvious issues were still present in the code. Having gone unnoticed, these issues were keeping us from good page experience scores in some cases.

This is why it's important to review your code several times – ideally at the very beginning of optimization, and then again at the very end.

In most cases, Google's PageSpeed Insights test will point you to the problematic piece of code, so you can fix possible issues in a timely manner.

The results of Core Web Vitals improvements

As of today, months of hard work translate into this:

All the activities listed above brought us to the point where the majority of our pages are in the "green" zone of the CWV assessment.

Many site owners would hardly call it an outstanding achievement, especially if their website is relatively new and modern. For us, however, with a 17-year-old website that has over 1K indexed pages, it's a very meaningful result.

Back in October 2021, none of us was certain we would ever conquer Core Web Vitals. Today, we know exactly where we're heading to finally make this chart in Search Console all green.

Optimizing for Core Web Vitals is an ongoing process

CWV optimization is not something you do once and forget about. It's an ongoing process that requires continuous attention.

If you look at the screenshot I took from Google Search Console, you’ll see that poor URLs can pop up all of a sudden.

But the good news is that if you monitor your CWV regularly, you'll be able to spot possible issues early and address them accordingly.

Final thoughts 

The methods described in this case study should help with improving Core Web Vitals for your own website. However, some of them are specific to our website only. Make sure to consult the Lighthouse report in Google Chrome, PageSpeed Insights, WebSite Auditor, or Google Search Console and tailor your own optimization strategies to the issues reflected there.

Have you already achieved some success with Core Web Vitals optimization? If so, I invite you to share your experience in our Facebook community.
