Improving Google PageSpeed Insights

I’ve been wanting to update my portfolio site for a while now, but have been putting it off for one reason. My Google PageSpeed Insights score was crap and I wanted to see how high I could get it before doing anything else.

[Screenshot: Google PageSpeed Insights results – 91/100 on mobile and 97/100 on desktop – nice!]

The image above shows my desktop score at 97/100, and my mobile score turned out to be 91/100 – not bad eh?

Honestly, I’m a little ashamed to admit it, but I haven’t paid too much attention to these metrics in the past. I don’t completely ignore page speed, but I’ve never pushed to get a very high rating.

Since my portfolio site is relatively small, I figured there was no excuse for poor page load times. So I decided to see how well I could do if I really tried.

For those who followed this link and are lost: Google has an analytic tool called PageSpeed Insights.

It doesn’t necessarily measure how fast your page loads; rather, it checks for issues that negatively affect page load times. The higher the score, the faster your page probably loads, and thus you get a bit of an advantage when Google crawls your site for rankings.

The tool also gives you feedback and suggestions for fixing any issues. The main criteria the tool checks against are these:

  1. Reduce server response time
  2. Leverage browser caching
  3. Minify CSS
  4. Minify HTML
  5. Minify JavaScript
  6. Optimize Images
  7. Prioritize visible content
  8. Eliminate render-blocking JavaScript and CSS in above-the-fold content

I won’t go into great detail about each of these, for that you can consult Google. But I’ll gloss over what each point means, and explain what I did to satisfy it.

Reduce server response time

According to Google, this rule will be triggered if your server takes longer than 200ms to respond to a request.

This is probably the most complicated of them all. However, if you’re simply hosting your site on a shared host provider and it’s just a static site – I doubt you’ll get this warning anyway.

If you have a larger site with a lot of application logic or database queries, and you are on a dedicated server, there is a lot that could potentially go wrong. Figuring out what that is, is out of scope for this article. Even Google doesn’t offer much for recommendations.

Leverage browser caching

If you don’t know what browser caching is… it’s just when your web browser stores files from a server response in a cache.

If browser caching is being used, a site will load faster the second time you visit it, because you won’t have to re-download assets like JavaScript files or CSS.

My portfolio site is running on Apache so enabling browser caching on my shared host is as simple as dropping an .htaccess file into my root folder with this bit of code:

<filesMatch "\.(css|jpg|jpeg|png|gif|js|ico)$">
  Header set Cache-Control "max-age=2592000, public"
</filesMatch>

The first line just matches any file whose name ends with .css, .jpg, .js, and so on (the dot is escaped so it matches literally). Now that I’m looking at it – I should add json to that regex so that my manifest.json is cached too. Not that anyone would ever save my portfolio site onto their phone…

The next line sets the HTTP header to send with the response. The header is called Cache-Control.

Basically, that line in your .htaccess adds a single Cache-Control header with two directives, which looks like this:

cache-control: max-age=2592000, public

Setting cache-control to public just means that anyone can cache the files in this response. This is appropriate for a static site like mine. I have no reason to prevent proxy servers from caching my files.

Setting cache-control to max-age=2592000 just means that the file will stay cached for one month (2592000 seconds = 30 days) before being re-fetched.
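A quick sanity check that the max-age=2592000 used in the .htaccess above really is one month:

```shell
# 60 s × 60 min × 24 h × 30 days = one month, in seconds
echo $((60 * 60 * 24 * 30))
# → 2592000
```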

Minify HTML, CSS, and JavaScript

Of these three criteria, I already met two. I fell short on minifying HTML and I’ll explain why in a moment – but first – what does Google mean by “minify”?

Just in case some readers don’t already know, minify usually means stripping all unnecessary white space and comments out of a file.

Some minifying tools will also shorten things like variable and function names in your JavaScript.

Basically minifying is when we take code that looks like this:

And turn it into this:

So the first snippet of code is the code you actually write and edit. For those who are newer to all of this, you’ve probably noticed src folders on GitHub before… that’s where your un-minified code goes.

The second snippet is what you actually upload to your production server. For a small file like the one above it doesn’t make much difference, but for much larger files it can. Smaller files load faster, so minifying your code is generally good practice.

For absolute beginners, one way to achieve this is to find a minifier online (or download one) and manually minify each of your files by feeding them into that program.

You’ll do it one time, and then probably never want to develop websites ever again!

Usually, developers use build tools like Gulp or Grunt, which you run from your command line, to automate this process. You can set those tools up to minify your code automatically and then store them in a dist or build folder. Sometimes you’ll find these in Git repositories on GitHub, but they are usually left out, and it’s up to each developer to run the build locally.

So this has actually been a part of my normal workflow for quite a long time. So for this website, all of my JS and CSS had already been minified, and as a bonus, concatenated into one JS, and one CSS file too – which reduces the number of requests to the server.
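For anyone curious what that workflow looks like, here’s a minimal gulpfile sketch – not my actual build, just an illustration assuming the well-known gulp-concat and gulp-uglify plugins are installed:

```javascript
// gulpfile.js – a minimal sketch (assumes gulp, gulp-concat, and
// gulp-uglify are installed; paths are made up for this example)
const gulp = require('gulp');
const concat = require('gulp-concat');
const uglify = require('gulp-uglify');

function scripts() {
  return gulp.src('src/js/**/*.js')  // all source JS files
    .pipe(concat('app.min.js'))      // concatenate into one file
    .pipe(uglify())                  // strip whitespace, shorten names
    .pipe(gulp.dest('dist/js'));     // write the result to dist
}

exports.scripts = scripts;
```

Running `gulp scripts` then produces the single minified file you upload to production.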

Unfortunately, I hadn’t been minifying HTML…

The reason was that I couldn’t find a decent Gulp plugin that would minify PHP files. I only use PHP on this site for templates and to automate switching between production and development environments. So honestly, you could argue it’s unjustified anyway, but when I started this website I was playing around with an idea that I won’t get into here – I’m just using PHP because I am, so there!

Anyways… the site only has a few PHP files that need to be minified for the time being, so I decided to run them through a minifier manually. I used a site called HTML Compressor.

Then I just manually dragged and dropped those files into my dist folder before uploading it to my server. Definitely not a long term solution, but it worked for today.

I think I’m going to be changing this site over to Jekyll anyway. Once I do, all of my files will be HTML again, and I already have a nice build system for working with HTML, so I’m not too worried about this temporary inconvenience.

Optimize Images

This one was by far the easiest to fix once I noticed the handy download link Google provides! You’ll see a link under this point that says “download compressed assets” or something along those lines…

Done! Add those assets to your dist folder and upload them to your server.

But seriously, you should still be using best practices when it comes to web images along the way. All of my images were already 72dpi, they weren’t any larger than they needed to be, and unless I needed transparency my images were all JPEGs too. Pretty standard stuff.

I actually run all of my images through a Gulp plugin called gulp-imagemin that strips metadata during development. So my images were already pretty optimized before the final compression done by Google.


Eliminate render-blocking JavaScript and CSS in above-the-fold content

This one is still in the yellow as I write this article, which I’m happy with. It was a red point the first time I ran Google PageSpeed Insights with my site. This was because of the way I used to include web-fonts back when I first started working on it.

I used to include web-fonts and CSS libraries with link tags in my HTML. Now I use a pre-processor called Sass, which compiles all of my CSS into one file. So I simply moved all of the link tags for web-fonts, plus a link to normalize.css (the only vendor CSS I’m using on this site), into my stylesheet.scss as imports instead.
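The top of that stylesheet ends up looking something like this (the partial names below are made up for illustration – Sass inlines each `_partial.scss` into the one compiled file):

```scss
// stylesheet.scss – hypothetical sketch
@import "normalize";   // pulls _normalize.scss into the compiled CSS
@import "fonts";       // the @font-face rules that used to be <link> tags
@import "base";        // my own styles
@import "layout";
```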

So now, the browser is only waiting on one resource before it renders the page instead of 5 or 6 or whatever it was.

Just to be clear, render-blocking refers to assets that have to be loaded before the browser begins rendering the page. CSS and JavaScript are render-blocking by default. You can make CSS non-render-blocking by giving your link tags a media attribute (and JavaScript by using the defer attribute), but for my site, I didn’t need to do that.
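For example (file names made up), both of these load without blocking the initial render:

```html
<!-- Only applies when printing, so the browser doesn't block on it -->
<link rel="stylesheet" href="print.css" media="print">

<!-- defer downloads the script in parallel and runs it after parsing -->
<script src="app.min.js" defer></script>
```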

Above-the-fold (ATF) content refers to the content a visitor sees before they scroll. Google wants developers to embed any CSS necessary for above-the-fold content right in the HTML. The idea is to render something on the page as quickly as possible, and then have everything else render afterward.
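In its simplest form, that looks something like this (class names and files are hypothetical; in the full technique, the remaining stylesheet is loaded asynchronously rather than with a plain link tag):

```html
<head>
  <style>
    /* Critical above-the-fold styles, inlined directly in the HTML */
    body { margin: 0; font-family: sans-serif; }
    .hero { min-height: 100vh; background: #222; color: #fff; }
  </style>
  <!-- Everything below the fold still comes from the full stylesheet -->
  <link rel="stylesheet" href="main.min.css">
</head>
```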

I found a post on Stack Overflow that includes some pretty good answers and links to other resources on this topic:

I don’t really consider myself a front-end developer, so meeting the requirements of this point is a little too involved for my liking at the moment. Maybe a challenge for another day, and another article too 😉

So, yeah. There you go. I’m pretty happy with how my portfolio performs, and I hope that me sharing a bit about my strategy for getting there will help you with optimizing your own static content for the internet overlords we call Google.

If you have any tips of your own you’d like to share, leave ’em in the comments below. Don’t forget to share this if you enjoyed the read too!

Oh and if you want to see the current state of my portfolio here it is. Feel the speed!



6 thoughts on “Improving Google PageSpeed Insights”

  1. Thanks a lot for sharing. This is an extensive and detailed article. I have gone through several articles, but none of them mention why to use one technique and not another. Especially from the minification-of-web-assets point of view, this article is quite helpful for me. Please kindly write more on reducing server response time.

    1. That would just depend on what branching model you use. My dist folders usually are not in my repository at all, so it doesn’t matter which branch I’m in, I can always run a build. Normally the way I set up my repo is I only commit the root of my project and my src folder. So if you were to clone my repo you’d have to run the build command to create the dist, and you would actually run the app/website from that dist folder. I usually have a “watch” command set up that auto builds every time I save my work, and then automatically refreshes the browser for me too.

      The process is similar to clicking the “run” button in your IDE when you’re working with Java. If you think about it, you’re never running uncompiled code through your IDE, right? It’s just that with web development this stuff tends to be more in your face and up to the developer to figure out, unlike Java development, which you’re more used to.

  2. I didn’t know about minimizing HTML / CSS before reading this, thanks! I wonder though, when we minimize our code, do we sacrifice readability in exchange for performance?

    1. No, not at all. First of all, in your browser’s developer tools, the DOM tree is displayed the same whether your HTML is minified or not. Same with your CSS: when you inspect elements, you can’t tell the difference between minified and non-minified code.

      Locally, you still keep all of your unminified source code – usually in a directory called src. If you have an automated build process set up, you’ll place your minified code into a folder called build, or dist. Dist is what goes on the server, but you don’t ever read or write any code from that folder.

      Does that make more sense?
