Owners of competitive websites understand that both a site's on-screen content and its technical details, such as meta tags, directly determine whether it earns a representative snippet on the critically important first page of a given SERP. A great deal of effort and expense goes into optimizing the content of a site's featured pages so that Google sees and treats them as occupying a valuable niche relative to competing sites. However, mainstream websites also include pages that audiences need to be able to reach but that would be actively harmful for Google's crawlers to treat as content reflective of the site's intended niche.
To avoid having legal pages such as privacy policies formally assessed as thematically representative content, websites maintain a special text file behind the scenes within their domains titled "robots.txt." Google parses this file for any instructions the webmaster may have left about whether to add specific pages to Google's index database and whether to follow specific links at all. Since the beginning of September 2019, however, an update rolled out by Google has retired what was once a traditional method of keeping pages out of the indexing process: including a "noindex" directive within the robots.txt file.
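As an illustration, a robots.txt file from before the change might have looked something like the following sketch (the /privacy-policy/ path is a hypothetical example, not drawn from any particular site):

    User-agent: Googlebot
    Noindex: /privacy-policy/

The Noindex line was never part of the official robots.txt standard; Google honored it only informally, which is part of why support for it was dropped.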
Part of the reasoning behind this decision is that there are several other ways to block Google's spiders from problematically digging into specific pages. For one thing, the "disallow" syntax effectively fills the same function in the robots.txt file because it tells Google not to follow links to that URL at all; Google obviously will not index what it cannot reach. This can potentially block Google from being able to reach viable pages that may be accessible only from the page that has been disallowed, however, so a more reliable alternative amounts to a different take on the "noindex" syntax that can be included within the HTML source code of a page. In either case, the webmaster has the option of using Google Search Console to manually remove particular URLs from SERPs without having to wait for Google to organically reflect the aforementioned changes. For more information click here https://www.reddit.com/r/SEO/comments/d2n20i/googlerobottxtupdate/.
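To make those two supported alternatives concrete, here is a sketch of each, again using a hypothetical /privacy-policy/ path. The first is a Disallow rule in robots.txt; the second is a robots meta tag placed in the page's own HTML head:

    # robots.txt entry: blocks crawling of the page entirely
    User-agent: *
    Disallow: /privacy-policy/

    <!-- In the page's HTML <head>: allows crawling but blocks indexing -->
    <meta name="robots" content="noindex">

Note that the meta tag approach only works if the page is not simultaneously disallowed, since Google must be able to crawl the page in order to see the tag at all.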