Why pages show “Discovered – currently not indexed” and how to fix it
Discovered – currently not indexed in Google Search Console: what you’re seeing
In Google Search Console, you’ll find this status under Indexing, then Pages, in the list of reasons under “Why pages aren’t indexed.” When a URL shows “Discovered – currently not indexed,” Google knows the URL exists but has not crawled it yet.
Because Google has never fetched the page, the report usually shows no last crawl date for that URL.
This status does not tell you that the page is good or bad. It tells you the page has not reached the stage where Google can judge the content.
That matters because many people apply fixes meant for a different problem. “Crawled – currently not indexed” means Google did fetch the page but decided not to index it. “Discovered” is a crawl timing and priority problem, while “Crawled” is often a content value or duplication problem. To fix “Discovered,” you need to understand why Google waits to crawl.
Why Google delays crawling: crawl rescheduling and prioritization signals
Google delays crawling for two big reasons: capacity and demand. Capacity means how many requests your server can handle without slowing down or failing.
If Google sees slow responses, timeouts, or server errors, it backs off and reschedules to avoid stressing your site. You can have great pages and still hit this issue if the server struggles when Googlebot visits.
Demand means how much Google wants to spend time on your site. Google sets demand based on signals it already knows, even before it crawls every new URL. Site-wide and pattern signals matter here.
If your site produces lots of near-duplicate URLs, such as parameter variations, faceted filters, internal search pages, or other low-value patterns, Google learns that many discovered URLs lead to repeats. That can lower crawl demand for the whole site or for that URL pattern.
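If you want to check whether your own affected URLs cluster like this, a quick way is to group them by path with the query string stripped. The sketch below is a minimal Python example, assuming you have exported the affected URLs from the Pages report into a plain text file with one URL per line; the file name is only an illustration.

```python
from collections import Counter
from urllib.parse import urlsplit

# Assumed export of affected URLs, one per line (file name is illustrative).
with open("discovered_not_indexed.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

# Group by path alone; many URLs collapsing onto a few paths usually means
# query parameters, filters, or internal search are generating near-duplicates.
paths = Counter(urlsplit(url).path for url in urls)

for path, count in paths.most_common(10):
    print(f"{count:>6}  {path}")
```

A handful of paths absorbing most of the URLs usually points straight at the parameter or filter pattern that is producing the duplicates.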
Page-level priority also affects the queue. If a page sits deep in your site and few internal links point to it, Google reads that as “not important” and saves the crawl for later. Once you know what holds a URL in the queue, you can decide whether to wait or intervene.
What it means operationally: when it’s normal vs when it’s a problem
This status can be normal when the number is small and the URLs have low business value, like old tag pages or thin filter pages you do not need in search. In those cases, Google may crawl them later without any work from you.
It becomes a problem when important URLs stay in this state for weeks, when the count rises, or when the URLs cluster around one folder, template, or parameter pattern. Before you act, confirm the status is current. The Pages report can lag, so I use URL Inspection on a few sample URLs to see the live crawl and index state.
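If you have more than a handful of URLs to spot-check, the Search Console API exposes the same URL Inspection data. The sketch below is an assumption-heavy starting point: it uses the google-api-python-client and google-auth packages, a service account key file with access to the verified property (the path is illustrative), and response field names you should verify against the current API documentation.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumption: a service account JSON key (illustrative path) whose account
# has been granted access to the verified Search Console property.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

def inspect(url, site):
    body = {"inspectionUrl": url, "siteUrl": site}
    response = service.urlInspection().index().inspect(body=body).execute()
    result = response["inspectionResult"]["indexStatusResult"]
    # coverageState mirrors the Pages report; lastCrawlTime is missing for
    # URLs Google has discovered but never fetched.
    return result.get("coverageState"), result.get("lastCrawlTime")

print(inspect("https://www.example.com/new-page/", "https://www.example.com/"))
```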
Also set expectations: the Request Indexing button sends a signal, but it does not force a crawl or an index decision. If you want better results, fix the causes first, then request indexing. Next, I’ll walk you through the fixes in the order that tends to move the needle.
Fixes that change crawl access, efficiency, and priority (in the order to apply)
I start with sitemap hygiene because it shapes what you ask Google to crawl. Keep your XML sitemap limited to URLs that return 200 OK, allow indexing, and represent the canonical version of the page. Do not include redirects, error URLs, URLs blocked by robots.txt, or URLs with a noindex directive. A clean sitemap helps Google spend its time on pages you want indexed.
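A small audit script can catch most of these issues before you resubmit the sitemap. The sketch below assumes the requests package and a single urlset-style sitemap at an illustrative URL, and it only checks status codes and redirects; noindex and canonical checks come later in the workflow.

```python
import xml.etree.ElementTree as ET
import requests

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # illustrative URL
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(SITEMAP_URL, timeout=30).content)
urls = [loc.text.strip() for loc in root.findall(".//sm:loc", NS) if loc.text]

for url in urls:
    # A sitemap entry should itself return 200 OK, so redirects are not followed.
    r = requests.get(url, allow_redirects=False, timeout=30)
    if r.status_code != 200:
        print(r.status_code, url)
```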
Next, reduce crawl waste by stopping Google from crawling infinite or low-value URL spaces when that makes sense for your site. If filters, parameters, or internal search pages create endless combinations that you do not want in search, use robots.txt rules to block crawling for those patterns. This step matters because noindex does not stop crawling; Google still has to fetch a page to see a noindex tag.
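What the rules look like depends entirely on your URL patterns, so treat the following as a shape to copy, not as rules to paste in. The sketch uses Python’s built-in robots.txt parser to sanity-check a hypothetical Disallow rule before it ships; note that the standard-library parser only evaluates plain path prefixes, while Google also honors wildcard patterns such as Disallow: /*?filter= for parameter URLs.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rule blocking an internal-search URL space. Google also supports
# wildcard rules like "Disallow: /*?filter=", but the standard-library parser
# below only evaluates plain path prefixes.
rules = """\
User-agent: *
Disallow: /search/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

for url in (
    "https://www.example.com/search/?q=shoes",    # should be blocked
    "https://www.example.com/shoes/red-runner/",  # should stay crawlable
):
    print(parser.can_fetch("Googlebot", url), url)
```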
Then fix redirects that burn crawl time. Remove redirect chains and loops, and update your internal links so they point straight to the final 200 OK URL. Each extra hop costs a crawl request and can slow discovery of the pages you care about.
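To find the chains worth fixing, follow each internal link target and count the hops. The sketch below assumes the requests package and uses example URLs; in practice you would feed it link targets exported from your own crawl.

```python
import requests

def redirect_chain(url):
    """Return the final status code and every hop from url to the final response."""
    r = requests.get(url, allow_redirects=True, timeout=30)
    return r.status_code, [h.url for h in r.history] + [r.url]

# Example link targets; replace with targets exported from your own crawl.
for url in ("https://www.example.com/old-page", "https://www.example.com/current-page/"):
    status, hops = redirect_chain(url)
    if len(hops) > 1:
        # Any redirect means the internal link can point straight at the final
        # URL; more than one hop is a chain worth removing.
        print(f"{len(hops) - 1} redirect(s): {' -> '.join(hops)} ({status})")
```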
After that, strengthen internal linking to raise priority. Make sure important pages are not orphaned, meaning they are reachable through internal links and not listed only in the sitemap.
Link to them from pages Google already crawls often, and keep key pages closer to the homepage so Google reaches them in fewer clicks. This helps Google find the page and treat it as worth crawling.
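One way to spot orphans is to compare the URLs in your sitemap against the URLs that actually receive internal links, for example from a crawler export. The sketch below assumes two plain text files with one URL per line; both file names are illustrative.

```python
# Assumed inputs (illustrative file names), one URL per line:
#   sitemap_urls.txt - every URL listed in the XML sitemap
#   linked_urls.txt  - every URL found as an internal link target in a site crawl

def load(path):
    with open(path) as f:
        return {line.strip().rstrip("/") for line in f if line.strip()}

sitemap_urls = load("sitemap_urls.txt")
linked_urls = load("linked_urls.txt")

# URLs that appear only in the sitemap have no internal links pointing at them.
orphans = sorted(sitemap_urls - linked_urls)
print(f"{len(orphans)} orphaned URL(s)")
for url in orphans[:20]:
    print(url)
```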
Then check server and connectivity constraints. In Search Console, review Crawl Stats for response time spikes and for 5xx patterns. If you see slowdowns or errors, improve caching, reduce heavy page resources, and address backend bottlenecks so Googlebot lifts its crawl throttle.
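Your own access logs can confirm what Crawl Stats is showing. The sketch below makes assumptions about the log format, namely a combined-style log with the response time as the last field and a user agent string containing “Googlebot”, so adapt the parsing to whatever your server actually writes.

```python
import re

LOG_FILE = "access.log"  # illustrative path

statuses, times = [], []
with open(LOG_FILE) as f:
    for line in f:
        if "Googlebot" not in line:
            continue
        # Assumed layout: ... "GET /path HTTP/1.1" 200 1234 "-" "UA string" 0.183
        match = re.search(r'" (\d{3}) .* (\d+\.\d+)$', line)
        if not match:
            continue
        statuses.append(int(match.group(1)))
        times.append(float(match.group(2)))

if statuses:
    error_rate = sum(s >= 500 for s in statuses) / len(statuses)
    print(f"Googlebot requests: {len(statuses)}")
    print(f"5xx rate: {error_rate:.1%}")
    print(f"Median response time: {sorted(times)[len(times) // 2]:.3f}s")
```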

Only after these fixes, request indexing for a small set of priority URLs, and do it once per URL to send a clean signal. With the fundamentals in place, you can decide what to do today based on how your affected URLs look.
Decision workflow: what to do today based on your URL set
I pick a sample of affected URLs (if any) and sort them by business importance. If you only have a few high-value URLs, I would open each one in URL Inspection, run the live test, and confirm the page returns 200 OK, allows indexing, and points its canonical to the correct URL.
Then I would make sure the page has internal links from relevant, already indexed pages, and confirm the URL appears in your sitemap only if it belongs there. Once those checks pass, I would request indexing for those URLs.
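If you prefer to run those checks outside the Search Console UI, the sketch below fetches a single URL and reports its status code, any noindex signals, and the canonical it declares. It assumes the requests and beautifulsoup4 packages and uses an example URL.

```python
import requests
from bs4 import BeautifulSoup

URL = "https://www.example.com/important-page/"  # example URL

r = requests.get(URL, allow_redirects=False, timeout=30)
print("Status:", r.status_code)
print("X-Robots-Tag header:", r.headers.get("X-Robots-Tag", "none"))

soup = BeautifulSoup(r.text, "html.parser")

robots_meta = soup.find("meta", attrs={"name": "robots"})
print("Meta robots:", robots_meta.get("content") if robots_meta else "none")

canonical = None
for link in soup.find_all("link"):
    if "canonical" in (link.get("rel") or []):
        canonical = link.get("href")
print("Declared canonical:", canonical or "none")

# You want a 200 status, no noindex in either robots signal, and a canonical
# that points at this URL (or no canonical element at all).
```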
If you see a pattern or a large-scale issue, I would find the source that generates the URLs, such as parameters, filters, or a template that creates duplicates.
Then I would apply crawl-waste controls and sitemap cleanup first, strengthen internal linking next, and address server performance after that if Crawl Stats show strain.
After the changes, I watch the trend for a few weeks. I look for URLs moving from “Discovered” to showing a last crawl date, then to Indexed, and I track whether the count in the Pages report drops.
If nothing moves after your changes, that’s the point where deeper technical checks like server log analysis can explain what Googlebot does on your site.
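As a starting point for that log analysis, the sketch below counts Googlebot hits per top-level directory from a standard access log, which shows whether Googlebot spends its requests on the sections you care about. The log path and the parsing of the request line are assumptions to adapt to your setup.

```python
from collections import Counter
import re

LOG_FILE = "access.log"  # illustrative path

sections = Counter()
with open(LOG_FILE) as f:
    for line in f:
        if "Googlebot" not in line:
            continue
        match = re.search(r'"(?:GET|HEAD) (\S+)', line)
        if not match:
            continue
        # Bucket by the first path segment, e.g. /blog/post-1 -> /blog
        path = match.group(1)
        sections["/" + path.lstrip("/").split("/", 1)[0]] += 1

for section, hits in sections.most_common(15):
    print(f"{hits:>7}  {section}")
```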