
5 posts tagged with "Docusaurus"


· 3 min read

When we're using custom fonts on our websites, it's good practice to preload the fonts to minimise the flash of unstyled text. This post shows how to achieve this with Docusaurus, by building a Docusaurus plugin which makes use of Satyendra Singh's excellent webpack-font-preload-plugin.

title image reading "Preload fonts with Docusaurus" in a ridiculous font with the Docusaurus logo and a screenshot of a preload link HTML element

Preload web fonts with Docusaurus

To quote the docs of the webpack-font-preload-plugin:

The preload value of the <link> element's rel attribute lets you declare fetch requests in the HTML's <head>, specifying resources that your page will need very soon, which you want to start loading early in the page lifecycle, before browsers' main rendering machinery kicks in. This ensures they are available earlier and are less likely to block the page's render, improving performance.

This plugin specifically targets fonts used with the application which are bundled using webpack. The plugin adds <link> tags at the beginning of the <head> of your HTML:

<link rel="preload" href="/font1.woff" as="font" crossorigin />
<link rel="preload" href="/font2.woff" as="font" crossorigin />

If you want to learn more about preloading web fonts, it's also worth reading this excellent article.

The blog you're reading is built with Docusaurus. Our mission: make the HTML our Docusaurus build pumps out feature preload link elements. Something like this:

<link
  rel="preload"
  href="/assets/fonts/Poppins-Regular-8081832fc5cfbf634aa664a9eff0350e.ttf"
  as="font"
  crossorigin=""
/>

This link element has the rel="preload" attribute set, which triggers font preloading.

But the thing to take from the above is that the filename features a hash. This demonstrates that the font is being pumped through the Docusaurus build, which is powered by webpack. So we need some webpack whispering to get font preloading going.

Making a plugin

We're going to make a minimal Docusaurus plugin using webpack-font-preload-plugin. Let's add it to our project:

yarn add webpack-font-preload-plugin

Now in the docusaurus.config.js we can create our minimal plugin:

const FontPreloadPlugin = require('webpack-font-preload-plugin');

//...
/** @type {import('@docusaurus/types').Config} */
const config = {
  //...
  plugins: [
    function preloadFontPlugin(_context, _options) {
      return {
        name: 'preload-font-plugin',
        configureWebpack(_config, _isServer) {
          return {
            plugins: [new FontPreloadPlugin()],
          };
        },
      };
    },
    // ...
  ],
  //...
};

It's a super simple plugin; it does nothing more than new up an instance of the webpack plugin inside the configureWebpack method. That's all that's required.
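As an aside, Docusaurus calls configureWebpack once for the server build and once for the client build. The plugin works fine applied to both (as above), but if you wanted to be more surgical you could use the isServer flag Docusaurus passes in. A hedged sketch of that variation; whether the server build needs the plugin at all is worth testing for yourself:

```js
function preloadFontPlugin(_context, _options) {
  return {
    name: 'preload-font-plugin',
    configureWebpack(_config, isServer) {
      // Skip the server-side build; only apply the plugin to the client bundle
      return isServer ? {} : { plugins: [new FontPreloadPlugin()] };
    },
  };
}
```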

With this in place we're now seeing the <link rel="preload" ... /> elements being included in the HTML pumped out of our Docusaurus build. This means we have font preloading working:

screenshot of the Chrome devtools featuring link rel="preload" elements

Huzzah!

· 3 min read

Google Discover is a way that people can find your content. To make your content more attractive, Google encourages using high-quality images, which are enabled by setting the max-image-preview:large meta tag. This post shows you how to achieve that with Docusaurus.

title image reading "Docusaurus, meta tags and Google Discover" with a Docusaurus logo and the Google Discover phone photo taken from https://developers.google.com/search/docs/advanced/mobile/google-discover

Google Discover

I'm an Android user. Google Discover will present articles to me in various places on my phone. According to the docs:

With Discover, you can get updates for your interests, like your favorite sports team or news site, without searching for them. You can choose the types of updates you want to see in Discover in the Google app or when you’re browsing the web on your phone.

It turns out that my own content is showing up in Discover. I (ahem) discovered this by looking at the Google search console and noticing a "Discover" tab:

screenshot of the Google search console featuring a "discover" image

As I read up about Discover I noticed this:

To increase the likelihood of your content appearing in Discover, we recommend the following: ...

  • Include compelling, high-quality images in your content, especially large images that are more likely to generate visits from Discover. Large images need to be at least 1200 px wide and enabled by the max-image-preview:large setting...

I was already trying to include images with my blog posts as described... But max-image-preview:large was news to me. Reading up further revealed that the "setting" was simply a meta tag to be added to the HTML that looked like this:

<meta name="robots" content="max-image-preview:large" />

Incidentally, applying this setting will affect all forms of search results: not just Discover, but Google web search, Google Images and Assistant as well. (max-image-preview accepts three values: none, standard and large; large permits previews up to the width of the viewport.) The result of having this meta tag will be that bigger images are displayed in search results, which should make the content more attractive.

Docusaurus, let's get meta

Now we understand what we want (an extra meta tag on all our pages), how do we apply this to Docusaurus?

Well, it's remarkably simple. There's an optional metadata property in docusaurus.config.js (it lives under themeConfig). This property allows you to configure additional HTML metadata (and override existing metadata). The property is an array of Metadata objects, each of which is passed directly to a <meta /> tag.

So in our case we'd want to pass an object with name: 'robots' and content: 'max-image-preview:large' to render our desired meta tag. Which looks like this:

/** @type {import('@docusaurus/types').DocusaurusConfig} */
module.exports = {
  //...
  themeConfig: {
    // <meta name="robots" content="max-image-preview:large">
    metadata: [{ name: 'robots', content: 'max-image-preview:large' }],
    //...
  },
  //...
};

With that in place, we find our expected meta tag is now part of our rendered HTML:

screenshot of the <meta name="robots" content="max-image-preview:large"> tag taken from Chrome Devtools
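If you'd rather not hunt through the Elements panel, a quick browser console check confirms it too (an illustrative one-liner):

```js
// Logs "max-image-preview:large" once the tag is in place
document.querySelector('meta[name="robots"]')?.getAttribute('content');
```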

Meta meta

We should now have a more Google Discover-friendly website which is tremendous!

Before signing off, here's a fun fact: the PR that published this blog post is the same PR that added max-image-preview:large to my blog. Peep it here - meta in so many ways 😉

· 6 min read

Docusaurus doesn't ship with "blog archive" functionality. By which I mean, something that allows you to look at an overview of your historic blog posts. It turns out it is fairly straightforward to implement your own. This post does just that.

Docusaurus blog archive

Update 2021-09-01

As of v2.0.0-beta.6, Docusaurus does ship with blog archive functionality that lives at the archive route. This is down to the work of Gabriel Csapo in this PR.

If you'd like to know how to build your own, read on... But you may not need to!

Blogger's blog archive

I recently went through the exercise of migrating my blog from Blogger to Docusaurus. I found that Docusaurus was a tremendous platform upon which to build a blog, but it was missing a feature from Blogger that I valued highly - the blog archive:

Blogger blog archive

The blog archive is a way by which you can browse through your historic blog posts. A place where you can see all that you've written and when. I find this very helpful. I didn't really want to make the jump without having something like that around.

Handrolling a Docusaurus blog archive

Let's create our own blog archive in the land of the Docusaurus.

We'll create a new page under the pages directory called blog-archive.js and we'll add a link to it in our docusaurus.config.js:

navbar: {
  // ...
  items: [
    // ...
    { to: "blog-archive", label: "Blog Archive", position: "left" },
    // ...
  ],
},
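The page itself can start life as a bare skeleton; Docusaurus maps files in src/pages to routes, so src/pages/blog-archive.js will serve /blog-archive. A minimal sketch, which we'll flesh out over the rest of this post:

```jsx
import React from 'react';
import Layout from '@theme/Layout';

export default function BlogArchive() {
  // The archive content will go in here
  return <Layout title="Blog Archive"></Layout>;
}
```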

Obtaining the blog data

This page will be powered by webpack's require.context function. require.context allows us to use webpack to obtain all of the blog modules:

require.context('../../blog', true, /index.md/);

The code snippet above looks (recursively) in the blog directory for files / modules whose names match index.md. Each one of these represents a blog post. The function returns a context object, which contains all of the data about these modules.
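For a flavour of what that gives us, calling keys() on the returned context lists the matched module paths (the paths below are hypothetical):

```js
const ctx = require.context('../../blog', true, /index.md/);
ctx.keys();
// e.g. ['./2021-03-14-blogger-to-docusaurus/index.md',
//       './2021-05-01-blog-archive-for-docusaurus/index.md', ...]
```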

By reducing over that data we can construct an array of objects called allPosts that could drive a blog archive screen. Let's do this below, and we'll use TypeScript's JSDoc support to type our JavaScript:

/**
 * @typedef {Object} BlogPost - creates a new type named 'BlogPost'
 * @property {string} date - eg "2021-04-24T00:00:00.000Z"
 * @property {string} formattedDate - eg "April 24, 2021"
 * @property {string} title - eg "The Service Now API and TypeScript Conditional Types"
 * @property {string} permalink - eg "/2021/04/24/service-now-api-and-typescript-conditional-types"
 */

/** @type {BlogPost[]} */
const allPosts = ((ctx) => {
  /** @type {string[]} */
  const blogpostNames = ctx.keys();

  return blogpostNames.reduce(
    (blogposts, blogpostName) => {
      const module = ctx(blogpostName);
      const { date, formattedDate, title, permalink } = module.metadata;
      return [
        ...blogposts,
        {
          date,
          formattedDate,
          title,
          permalink,
        },
      ];
    },
    /** @type {BlogPost[]} */ ([])
  );
})(require.context('../../blog', true, /index.md/));

Observe the metadata property in the screenshot below:

require.context

This gives us a flavour of the data available in the modules and shows how we pull out the bits that we need: date, formattedDate, title and permalink.

Presenting it

Now we have our data in the form of allPosts, let's display it. We'd like to break it up into posts by year, which we can do by reducing over the posts and looking at the date property - an ISO 8601-style date string whose format begins yyyy-mm-dd:

const postsByYear = allPosts.reduceRight((posts, post) => {
  const year = post.date.split('-')[0];
  const yearPosts = posts.get(year) || [];
  return posts.set(year, [post, ...yearPosts]);
}, /** @type {Map<string, BlogPost[]>} */ (new Map()));

const yearsOfPosts = Array.from(postsByYear, ([year, posts]) => ({
  year,
  posts,
}));
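To make the shape we're building concrete, yearsOfPosts ends up looking something like this (the years and contents here are illustrative):

```js
const exampleYearsOfPosts = [
  { year: '2021', posts: [/* BlogPost objects for 2021 */] },
  { year: '2020', posts: [/* BlogPost objects for 2020 */] },
];
```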

Now we're ready to blast it onto the screen. We'll create two components:

  • Year - which is a list of the posts for a given year and
  • BlogArchive - which is the overall page and maps over yearsOfPosts to render Years

function Year(
  /** @type {{ year: string; posts: BlogPost[] }} */ { year, posts }
) {
  return (
    <div className={clsx('col col--4', styles.feature)}>
      <h3>{year}</h3>
      <ul>
        {posts.map((post) => (
          <li key={post.date}>
            <Link to={post.permalink}>
              {post.formattedDate} - {post.title}
            </Link>
          </li>
        ))}
      </ul>
    </div>
  );
}

function BlogArchive() {
  return (
    <Layout title="Blog Archive">
      <header className={clsx('hero hero--primary', styles.heroBanner)}>
        <div className="container">
          <h1 className="hero__title">Blog Archive</h1>
          <p className="hero__subtitle">Historic posts</p>
        </div>
      </header>
      <main>
        {yearsOfPosts && yearsOfPosts.length > 0 && (
          <section className={styles.features}>
            <div className="container">
              <div className="row">
                {yearsOfPosts.map((props, idx) => (
                  <Year key={idx} {...props} />
                ))}
              </div>
            </div>
          </section>
        )}
      </main>
    </Layout>
  );
}

Bringing it all together

We're finished! We have a delightful-looking blog archive plumbed into our blog:

Docusaurus blog archive

It is possible that a blog archive may become natively available in Docusaurus in future. If you're interested in this, you can track this issue.

Here's the final code, which you can see powering this screen. You can also see the source that backs it here:

import React from 'react';
import clsx from 'clsx';
import Layout from '@theme/Layout';
import Link from '@docusaurus/Link';
import styles from './styles.module.css';

/**
 * @typedef {Object} BlogPost - creates a new type named 'BlogPost'
 * @property {string} date - eg "2021-04-24T00:00:00.000Z"
 * @property {string} formattedDate - eg "April 24, 2021"
 * @property {string} title - eg "The Service Now API and TypeScript Conditional Types"
 * @property {string} permalink - eg "/2021/04/24/service-now-api-and-typescript-conditional-types"
 */

/** @type {BlogPost[]} */
const allPosts = ((ctx) => {
  /** @type {string[]} */
  const blogpostNames = ctx.keys();

  return blogpostNames.reduce(
    (blogposts, blogpostName) => {
      const module = ctx(blogpostName);
      const { date, formattedDate, title, permalink } = module.metadata;
      return [
        ...blogposts,
        {
          date,
          formattedDate,
          title,
          permalink,
        },
      ];
    },
    /** @type {BlogPost[]} */ ([])
  );
  // @ts-ignore
})(require.context('../../blog', true, /index.md/));

const postsByYear = allPosts.reduceRight((posts, post) => {
  const year = post.date.split('-')[0];
  const yearPosts = posts.get(year) || [];
  return posts.set(year, [post, ...yearPosts]);
}, /** @type {Map<string, BlogPost[]>} */ (new Map()));

const yearsOfPosts = Array.from(postsByYear, ([year, posts]) => ({
  year,
  posts,
}));

function Year(
  /** @type {{ year: string; posts: BlogPost[] }} */ { year, posts }
) {
  return (
    <div className={clsx('col col--4', styles.feature)}>
      <h3>{year}</h3>
      <ul>
        {posts.map((post) => (
          <li key={post.date}>
            <Link to={post.permalink}>
              {post.formattedDate} - {post.title}
            </Link>
          </li>
        ))}
      </ul>
    </div>
  );
}

function BlogArchive() {
  return (
    <Layout title="Blog Archive">
      <header className={clsx('hero hero--primary', styles.heroBanner)}>
        <div className="container">
          <h1 className="hero__title">Blog Archive</h1>
          <p className="hero__subtitle">Historic posts</p>
        </div>
      </header>
      <main>
        {yearsOfPosts && yearsOfPosts.length > 0 && (
          <section className={styles.features}>
            <div className="container">
              <div className="row">
                {yearsOfPosts.map((props, idx) => (
                  <Year key={idx} {...props} />
                ))}
              </div>
            </div>
          </section>
        )}
      </main>
    </Layout>
  );
}

export default BlogArchive;

· One min read

My blog lived happily on Blogger for the past decade. It's now built with Docusaurus and hosted on GitHub Pages. To understand the why, read my last post. This post serves purely to share details of feed updates for RSS / Atom subscribers.

The Atom feed at this location no longer exists: https://blog.johnnyreilly.com/feeds/posts/default

The following feeds are new and different:

The new format might mess with any feed reader you have set up. I do apologise for the friction; hopefully it won't cause you too much drama.

Finally, all historic links should continue to work with the new site; redirects have been implemented.

· 9 min read

Docusaurus is, amongst other things, a Markdown powered blogging platform. My blog has lived happily on Blogger for the past decade. I'm considering moving, but losing my historic content as part of the move was never an option. This post goes through what it would look like to move from Blogger to Docusaurus without losing your content.

It is imperative that the world never forgets what I was doing with jQuery in 2012.

Blog as code

Everything is better when it's code. Infrastructure as code. Awesome, right? So naturally "blog as code" must be better than just a blog. More seriously, Markdown is a tremendous documentation format. Simple, straightforward and, like Goldilocks, "just right". For a long time I've written everything as Markdown. My years of toil down the Open Source mines have preconditioned me to be very MD-disposed.

I started out writing this blog a long time ago as pure HTML. Not the smoothest of writing formats. At some point I got into the habit of spinning up a new repo in GitHub for a new blogpost, writing it in Markdown and piping it through a variety of tools to convert it into HTML for publication on Blogger. As time passed I felt I'd be a lot happier if I wasn't creating a repo each time. What if I did all my blogging in a single repo and used that as the code that represented my blog?

Just having that thought laid the seeds for what was to follow:

  1. An investigation into importing my content from Blogger into a GitHub repo
  2. An experimental port to Docusaurus
  3. The automation of publication to Docusaurus and Blogger

We're going to go through 1 and 2 now. But before we do that, let's create ourselves a Docusaurus site for our blog:

npx @docusaurus/init@latest init blog-website classic
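That scaffolds a classic-template site. A sketch of the bits of the generated structure that matter for this post (names as per the classic template):

```
blog-website/
├── blog/                  # Markdown blog posts will live here
├── src/pages/             # custom pages
├── static/                # static assets, e.g. downloaded images
└── docusaurus.config.js
```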

I want everything

The first thing to do was obtain my blog content. This is a mass of HTML that lived inside Blogger's database. (One assumes they have a database; I haven't actually checked.) There's a "Back up content" option inside Blogger to allow this:

Download content from Blogger

It provides you with an XML file of dispiritingly small size. Ten years of blogging? You'll get change out of 4MB, it turns out.

From HTML in XML to Markdown

We now want to take that XML and:

  • Extract each blog post (and its associated metadata: title, tags and whatnot)
  • Convert the HTML content of each blog post to Markdown and save it as an index.md file
  • Download the images used in the blog post so they can be stored in the repo alongside it

To do this we're going to whip up a smallish TypeScript console app. Let's initialise it with the packages we're going to need:

mkdir from-blogger-to-docusaurus
cd from-blogger-to-docusaurus
npx tsc --init
yarn init
yarn add @types/axios @types/he @types/jsdom @types/node @types/showdown axios fast-xml-parser he jsdom showdown ts-node typescript

We're using:

  • axios - to download the images referenced in blog posts
  • fast-xml-parser - to parse the Blogger XML export
  • he - to decode HTML entities
  • jsdom - to parse and query the HTML of each post
  • showdown - to convert the HTML of each post into Markdown
  • ts-node and typescript - to run our script
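One tsconfig note before we start: the script uses default imports from CommonJS packages (import fs from 'fs' and friends), which relies on esModuleInterop being enabled. Recent tsc --init output switches it on by default, but it's worth checking your generated config includes something like this (a minimal sketch):

```json
{
  "compilerOptions": {
    "esModuleInterop": true
  }
}
```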

Now we have all the packages we need, it's time to write our script.

import fs from 'fs';
import path from 'path';
import showdown from 'showdown';
import he from 'he';
import jsdom from 'jsdom';
import axios from 'axios';
import fastXmlParser from 'fast-xml-parser';

const bloggerXmlPath = './blog-03-13-2021.xml';
const docusaurusDirectory = '../blog-website';
const notMarkdownable: string[] = [];

async function fromXmlToMarkDown() {
  const posts = await getPosts();

  for (const post of posts) {
    await makePostIntoMarkDownAndDownloadImages(post);
  }
  if (notMarkdownable.length)
    console.log(
      'These blog posts could not be turned into MarkDown - go find out why!',
      notMarkdownable
    );
}

async function getPosts(): Promise<Post[]> {
  const xml = await fs.promises.readFile(bloggerXmlPath, 'utf-8');

  const options = {
    attributeNamePrefix: '@_',
    attrNodeName: 'attr', // default is 'false'
    textNodeName: '#text',
    ignoreAttributes: false,
    ignoreNameSpace: false,
    allowBooleanAttributes: true,
    parseNodeValue: true,
    parseAttributeValue: true,
    trimValues: true,
    cdataTagName: '__cdata', // default is 'false'
    cdataPositionChar: '\\c',
    parseTrueNumberOnly: false,
    arrayMode: true, // "strict"
    attrValueProcessor: (val: string, attrName: string) =>
      he.decode(val, { isAttributeValue: true }), // default is a=>a
    tagValueProcessor: (val: string, tagName: string) => he.decode(val), // default is a=>a
  };

  const traversalObj = fastXmlParser.getTraversalObj(xml, options);
  const blog = fastXmlParser.convertToJson(traversalObj, options);

  const postsRaw = blog.feed[0].entry.filter(
    (entry: any) =>
      entry.category.some(
        (category: any) =>
          category.attr['@_term'] ===
          'http://schemas.google.com/blogger/2008/kind#post'
      ) &&
      entry.link.some(
        (link: any) =>
          link.attr['@_href'] && link.attr['@_type'] === 'text/html'
      ) &&
      entry.published < '2021-03-07'
  );

  const posts: Post[] = postsRaw.map((entry: any) => {
    return {
      title: entry.title[0]['#text'],
      content: entry.content[0]['#text'],
      published: entry.published,
      link: entry.link.find(
        (link: any) =>
          link.attr['@_href'] && link.attr['@_type'] === 'text/html'
      )
        ? entry.link.find(
            (link: any) =>
              link.attr['@_href'] && link.attr['@_type'] === 'text/html'
          ).attr['@_href']
        : undefined,
      tags:
        Array.isArray(entry.category) &&
        entry.category.some(
          (category: any) =>
            category.attr['@_scheme'] === 'http://www.blogger.com/atom/ns#'
        )
          ? entry.category
              .filter(
                (category: any) =>
                  category.attr['@_scheme'] ===
                    'http://www.blogger.com/atom/ns#' &&
                  category.attr['@_term'] !== 'constructor'
              ) // 'constructor' will make docusaurus choke
              .map((category: any) => category.attr['@_term'])
          : [],
    };
  });

  for (const post of posts) {
    const { content, ...others } = post;
    console.log(others, content.length);
    if (!content || !others.title || !others.published)
      throw new Error('No content');
  }

  return posts.filter((post) => post.link);
}

async function makePostIntoMarkDownAndDownloadImages(post: Post) {
  const converter = new showdown.Converter({
    ghCodeBlocks: true,
  });
  const linkSections = post.link.split('/');
  const linkSlug = linkSections[linkSections.length - 1];
  const filename =
    post.published.substr(0, 10) + '-' + linkSlug.replace('.html', '/index.md');

  const contentProcessed = post.content
    // remove stray <br /> tags
    .replace(/<br\s*\/?>/gi, '\n')
    // translate <code class="lang-cs"> into <code class="language-cs"> to be showdown friendly
    .replace(/code class="lang-/gi, 'code class="language-');

  const images: string[] = [];
  const dom = new jsdom.JSDOM(contentProcessed);
  let markdown = '';
  try {
    markdown = converter
      .makeMarkdown(contentProcessed, dom.window.document)
      // bigger titles
      .replace(/#### /g, '## ')

      // <div style="width:100%;height:0;padding-bottom:56%;position:relative;"><iframe src="https://giphy.com/embed/l7JDTHpsXM26k" width="100%" height="100%" style="position:absolute" frameBorder="0" class="giphy-embed" allowFullScreen=""></iframe></div>

      // The mechanism below extracts the underlying iframe
      .replace(/<div.*(<iframe.*">).*<\/div>/g, (replacer) => {
        const dom = new jsdom.JSDOM(replacer);
        const iframe = dom?.window?.document?.querySelector('iframe');
        return iframe?.outerHTML ?? '';
      })

      // The mechanism below strips class and style attributes from iframes - react hates them
      .replace(/<iframe.*<\/iframe>/g, (replacer) => {
        const dom = new jsdom.JSDOM(replacer);
        const iframe = dom?.window?.document?.querySelector('iframe');
        iframe?.removeAttribute('class');
        iframe?.removeAttribute('style');
        return iframe?.outerHTML ?? '';
      })

      // capitalise appropriately for React
      .replace(/frameborder/g, 'frameBorder')
      .replace(/allowfullscreen/g, 'allowFullScreen')
      .replace(/charset/g, 'charSet')

      // Deals with these:
      // [![null](<https://4.bp.blogspot.com/-b9-GrL0IXaY/Xmqj4GRhKXI/AAAAAAAAT5s/ZoceUInSY5EWXeCr2LkGV9Zvea8S6-mUgCPcBGAYYCw/s640/hello_world_idb_keyval.png> =640x484)](<https://4.bp.blogspot.com/-b9-GrL0IXaY/Xmqj4GRhKXI/AAAAAAAAT5s/ZoceUInSY5EWXeCr2LkGV9Zvea8S6-mUgCPcBGAYYCw/s1600/hello_world_idb_keyval.png>)We successfully wrote something into IndexedDB, read it back and printed that value to the console. Amazing!
      .replace(
        /\[!\[null\]\(<(.*?)>\)/g,
        (match) =>
          `![](${match.slice(match.indexOf('<') + 1, match.indexOf('>'))})\n\n`
      )

      // Blogger tends to put images in HTML that looks like this:
      // <div class="separator" style="clear: both;"><a href="https://1.bp.blogspot.com/-UwrtZigWg78/YDqN82KbjVI/AAAAAAAAZTE/Umezr1MGQicnxMMr5rQHD4xKINg9fasDACLcBGAsYHQ/s783/traffic-to-app-service.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="traffic to app service" border="0" width="600" data-original-height="753" data-original-width="783" src="https://1.bp.blogspot.com/-UwrtZigWg78/YDqN82KbjVI/AAAAAAAAZTE/Umezr1MGQicnxMMr5rQHD4xKINg9fasDACLcBGAsYHQ/s600/traffic-to-app-service.png"></a></div>

      // The mechanism below extracts the underlying image path and its alt text
      .replace(/<div.*(<img.*">).*<\/div>/g, (replacer) => {
        const div = new jsdom.JSDOM(replacer);
        const img = div?.window?.document?.querySelector('img');
        const alt = img?.getAttribute('alt') ?? '';
        const src = img?.getAttribute('src') ?? '';

        if (src) images.push(src);

        return `![${alt}](${src})`;
      });
  } catch (e) {
    console.log(post.link);
    console.log(e);
    notMarkdownable.push(post.link);
    return;
  }

  const imageDirectory = filename.replace('/index.md', '');
  for (const url of images) {
    try {
      const localUrl = await downloadImage(url, imageDirectory);
      markdown = markdown.replace(url, 'blog/' + localUrl);
    } catch (e) {
      console.error(`Failed to download ${url}`);
    }
  }

  const content = `---
title: "${post.title}"
author: John Reilly
author_url: https://github.com/johnnyreilly
author_image_url: https://avatars.githubusercontent.com/u/1010525?s=400&u=294033082cfecf8ad1645b4290e362583b33094a&v=4
tags: [${post.tags.join(', ')}]
hide_table_of_contents: false
---
${markdown}
`;

  await fs.promises.writeFile(
    path.resolve(docusaurusDirectory, 'blog', filename),
    content
  );
}

async function downloadImage(url: string, directory: string) {
  console.log(`Downloading ${url}`);
  const pathParts = new URL(url).pathname.split('/');
  const filename = pathParts[pathParts.length - 1];
  const directoryTo = path.resolve(
    docusaurusDirectory,
    'static',
    'blog',
    directory
  );
  const pathTo = path.resolve(
    docusaurusDirectory,
    'static',
    'blog',
    directory,
    filename
  );

  if (!fs.existsSync(directoryTo)) {
    fs.mkdirSync(directoryTo);
  }

  const writer = fs.createWriteStream(pathTo);

  const response = await axios({
    url,
    method: 'GET',
    responseType: 'stream',
  });

  response.data.pipe(writer);

  return new Promise<string>((resolve, reject) => {
    writer.on('finish', () => resolve(directory + '/' + filename));
    writer.on('error', reject);
  });
}

interface Post {
  title: string;
  content: string;
  published: string;
  link: string;
  tags: string[];
}

// do it!
fromXmlToMarkDown();

To summarise what the script does, it:

  • parses the blog XML into an array of Posts
  • converts each post from HTML into Markdown, creates and prepends a Docusaurus header, then saves the file to the blog-website/blog directory
  • downloads each post's images with Axios and saves them to the blog-website/static/blog/{POST NAME} directory

Bringing it all together

To run the script, we add the following script to the package.json:

  "scripts": {
"start": "ts-node index.ts"
},

And have ourselves a merry little yarn start to kick off the process. In a very short period of time, if you crack open the blog directory of your Docusaurus site, you'll see a collection of Markdown files which represent your blog and are ready to power Docusaurus:

Markdown files

I have slightly papered over some details here. For my own case, I discovered that I hadn't always written perfect HTML when blogging. I had to go in and fix the HTML in a number of historic posts so that the mechanism would work. I also learned that a number of the screenshots I use to illustrate posts have vanished from Blogger at some point. This makes me all the more convinced that storing your blog in a repo is a good idea. Things should not "go missing".

Congratulations! We're now the proud owners of a Docusaurus blog site based upon our Blogger content that looks something like this:

Blog in Docusaurus

Making the move?

Now that I've got the content, I'm theoretically safe to migrate from Blogger to Docusaurus. I'm pondering this now and I have come up with a checklist of criteria to satisfy before I do. You can have a read of the criteria here.

Odds are, I'm likely to make the move; it's probably just a matter of time.