# SEO: The Fundamentals
Learn the fundamentals of SEO as a foundation to start your SEO journey.

## What is SEO?
SEO (Search Engine Optimization) is the practice of improving a website so that search engines can discover, understand, and rank it higher in their results.
## Why do we need SEO in the first place?
- In a world where competition is at its peak, everyone is fighting for your attention
- To get organic (i.e. free) traffic (customers) to our brand/website without spending on CAC (Customer Acquisition Cost)
The best place to hide a dead body is Google's second page, because no one goes there.
## How can I go about it?
## Search Engines
- Google
- Bing
## How do crawlers work?
Crawlers (bots such as Googlebot) discover pages by following links, fetch their content, and hand it to the search engine's indexer so it can appear in results.
## Technical SEO
Technical SEO is the SEO work done on the code side of a page: HTML tags (title, meta, headings) and everything else on the technical side of a web page.
## Next.js 15 - SEO
## Favicon
Place `favicon.ico` in the root of the `app` directory (e.g. `src/app/favicon.ico`); Next.js picks it up automatically.
Use [RealFaviconGenerator](https://realfavicongenerator.net) for `.ico` generation.
## Opengraph Image (Global)
That's the image shown when you paste a link on social media.
```
src/app/opengraph-image.png
```
Recommended size: 1200×630 px
## Base Metadata
```tsx
import type { Metadata } from "next";

// Default metadata, typically exported from the root layout
export const metadata: Metadata = {
  title: "Create Next App",
  description: "Default template title for child pages",
  twitter: {
    card: "summary_large_image",
  },
};
```
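If child pages should share a title pattern, Next.js also supports a title template in the root layout. A minimal sketch, with the site name assumed for illustration:

```tsx
import type { Metadata } from "next";

export const metadata: Metadata = {
  title: {
    default: "Unicorn Space",        // used by pages that set no title of their own
    template: "%s | Unicorn Space",  // child page titles replace %s
  },
};
```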
- We can expose our localhost online to test these previews, using:
  - socialsharepreview.com
  - other Open Graph analyzers
## Dynamic Metadata
```tsx
import { cache } from "react";
import type { Metadata } from "next";

// Wrap the fetch in React's cache() so generateMetadata and the
// page component share a single request for the same post
const getPost = cache(async (id: string) => {
  // Fetch post logic (example endpoint, for illustration only)
  const res = await fetch(`https://api.example.com/posts/${id}`);
  return res.json();
});

export async function generateMetadata({
  params,
}: {
  params: Promise<{ postId: string }>;
}): Promise<Metadata> {
  const { postId } = await params; // params is a Promise in Next.js 15
  const post = await getPost(postId);
  return {
    title: post.title,
    description: post.body,
    openGraph: {
      images: [post.thumbnailUrl],
    },
  };
}
```
- Manually deduplicate requests (e.g. with React's `cache()`, as above) if not using ISR
- If you put an `opengraph-image.png` in a specific (or dynamic) route, it overrides the base opengraph image for that route
- You can also dynamically generate OG images (see the Next.js documentation), as sketched below
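A minimal sketch of a generated OG image using `ImageResponse` from `next/og`; the route and contents are illustrative:

```tsx
// src/app/blog/[postId]/opengraph-image.tsx
import { ImageResponse } from "next/og";

export const size = { width: 1200, height: 630 };
export const contentType = "image/png";

export default function Image() {
  // Next.js renders this JSX to a PNG at request time
  return new ImageResponse(
    (
      <div
        style={{
          width: "100%",
          height: "100%",
          display: "flex",
          alignItems: "center",
          justifyContent: "center",
          fontSize: 64,
          background: "white",
        }}
      >
        My Blog Post
      </div>
    ),
    size
  );
}
```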
## Static Rendering
Loading states are bad for SEO: crawlers may see the spinner instead of the content.
- After building, a static page won't re-fetch on request, even if new data exists; the data was fetched at compile (build) time and cached
```tsx
// Tell Next.js which dynamic route params to pre-render at build time
export const generateStaticParams = async () => {
  const posts = await getPosts(); // your own data-fetching helper
  return posts.map((post) => ({ id: post.id }));
};
```
This returns an array of params that tells Next.js which pages to render and cache at build time (which makes them great for performance).
- What if you don't have that data/slug at build time (e.g. it was added to the DB later)? Next.js is smart here: the first time someone visits that slug, it is rendered on demand (slow); after that it is cached and subsequent visits are fast. See the `dynamicParams` sketch below.
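That fallback behavior can be tuned with the `dynamicParams` route segment config. A minimal sketch:

```tsx
// In the dynamic route's page.tsx or layout.tsx
export const dynamicParams = true;  // default: unknown slugs render on demand, then get cached
// export const dynamicParams = false; // unknown slugs return a 404 instead
```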
## notFound()
`notFound()` pages are essential because they provide accurate data (a real 404 status) to crawlers.
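A minimal usage sketch, assuming a `getPost()` helper like the one in the Dynamic Metadata section (the import path is hypothetical):

```tsx
import { notFound } from "next/navigation";
import { getPost } from "@/lib/posts"; // hypothetical helper, as sketched earlier

export default async function PostPage({
  params,
}: {
  params: Promise<{ postId: string }>;
}) {
  const { postId } = await params;
  const post = await getPost(postId);
  if (!post) notFound(); // renders the nearest not-found.tsx and signals a 404 to crawlers
  return <h1>{post.title}</h1>;
}
```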
## Server & Client Components
We have to render as much as possible in Server Components; when we need interactivity (Client Components), we keep them as small as possible and push the `"use client"` boundary down to the smallest leaf components.
Example:
- If a button needs interactivity, then only the button should be a Client Component; the rest should be Server Components (see the sketch below)
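A minimal sketch of that pattern (file names are illustrative):

```tsx
// components/like-button.tsx - the interactive leaf is the only Client Component
"use client";
import { useState } from "react";

export function LikeButton() {
  const [likes, setLikes] = useState(0);
  return <button onClick={() => setLikes((n) => n + 1)}>Likes: {likes}</button>;
}
```

```tsx
// app/post/page.tsx - the page itself stays a Server Component
import { LikeButton } from "@/components/like-button";

export default function PostPage() {
  return (
    <article>
      <h1>Rendered on the server</h1>
      <LikeButton />
    </article>
  );
}
```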
## Sitemap
This tells the crawler which pages your project/website contains.
```tsx
// src/app/sitemap.ts
import { navigation } from "@/config/navbar";
import { allBlogs } from "contentlayer/generated";
import type { MetadataRoute } from "next";

const WEBSITE_URL = "https://ui.unicorn-space.com";

export default function sitemap(): MetadataRoute.Sitemap {
  // One entry per blog post
  const blogs: MetadataRoute.Sitemap = allBlogs.map((blog) => ({
    url: `${WEBSITE_URL}${blog.slug}`,
    lastModified: new Date(),
    changeFrequency: "yearly",
    priority: 0.7,
  }));

  // One entry per tool linked in the navbar (if the section exists)
  const toolsNav = navigation.find((nav) => nav.title === "Tools");
  const tools: MetadataRoute.Sitemap =
    toolsNav?.links.map((tool) => ({
      url: `${WEBSITE_URL}${tool.href}`,
      lastModified: new Date(),
      changeFrequency: "monthly",
      priority: 0.9,
    })) ?? [];

  const otherPages = [
    "/components/get-started",
    "/showcase",
    "/changelog",
    "/blog",
    "/components",
    "/guides",
    "/tools",
  ];

  return [
    {
      url: `${WEBSITE_URL}`,
      lastModified: new Date(),
      changeFrequency: "yearly",
      priority: 1,
    },
    ...otherPages.map((page) => ({
      url: `${WEBSITE_URL}${page}`,
      lastModified: new Date(),
      changeFrequency: "monthly" as const,
      priority: 0.9,
    })),
    ...tools,
    ...blogs,
  ];
}
```
## Robots.txt
This tells the crawler what to crawl & what not to crawl.
Note: see the Sitemap section above for the sitemap it references.
```tsx
// src/app/robots.ts
import type { MetadataRoute } from "next";

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      {
        userAgent: "*",
        allow: "/",
      },
    ],
    sitemap: "https://ui.unicorn-space.com/sitemap.xml",
  };
}
```
## Structured Data Markups
A good reference to learn it: https://www.youtube.com/watch?v=-vQJYJSN-g4&ab_channel=OmriBarmats
Useful tools and resources (a minimal JSON-LD sketch follows this list):
- https://www.npmjs.com/package/schema-dts
- https://validator.schema.org/
- https://nextjs.org/docs/app/building-your-application/optimizing/metadata#json-ld
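A minimal sketch of the JSON-LD pattern from the Next.js docs linked above; the article fields are placeholder assumptions:

```tsx
export default function BlogPostPage() {
  // Hypothetical article data, for illustration only
  const jsonLd = {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    headline: "SEO: The Fundamentals",
    datePublished: "2025-01-01",
    author: { "@type": "Person", name: "Jane Doe" },
  };

  return (
    <article>
      {/* Crawlers read this script tag; it renders nothing visible */}
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}
      />
      {/* ...page content... */}
    </article>
  );
}
```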
## Checklist
- Check if the page is static or dynamic (ideal goal is to make it static)
- Check if the page is server or client component (ideal goal is to make it server component)
- Check if the website has a robots.txt
- Check if the website has a sitemap
- Check if the website has structured data markups (TODO)
- Check if the website has a favicon
- Check if the website has an opengraph image
- Check if the website has metadata
- Dynamic routes:
  - `generateMetadata()` - check if the website has dynamic metadata
  - `generateStaticParams()` - ISR (Incremental Static Regeneration), for generating static pages at build time
- Check if the website has a 404 page
- Check if the website has a 500 page
- Check if the website has a 403 page