

Fun fact: the Titanic’s swimming pool was originally a salt water pool.
Spoofing is a whole hell of a lot easier said than done. Content delivery networks like Akamai, Cloudflare, etc. all know exactly how different versions of different browsers present themselves, and will catch the tiniest mistake.
When a browser requests a web page it sends a series of headers, which identify both the browser itself and the request it’s making. But virtually every browser sends a slightly different set of headers, in a different order. So Akamai, for example, can tell that you are using Chrome solely by which headers are in the request and the order they appear in, even if you spoof your User-Agent string to look like Firefox.
So to successfully spoof a connection you need to decide how you want to present yourself (do I really want them to think I’m using Opera when I’m actually on Firefox, or do I just want to randomize things to keep them guessing?). In the first case you need to be very careful to ensure your browser sends requests that exactly match how Opera sends them. One header, or even one character, out of place can be enough for these companies to recognize you’re spoofing your connection.
I had a few AC Pros in a 110+ year old house where other APs had issues with all the plaster-and-lath walls. They worked great. I also have a couple of them installed at a non-profit org I volunteer with, and everybody is very happy with how they work there as well.
After moving from that first house to a new one with a bigger footprint I upgraded to a pair of their U6 Mesh APs, one at each end of the house. Never had any issues with them.
president or secretary of a recognised organisation
What constitutes a “recognized organization”? That sounds rather open to interpretation…
DigiCert recently was forced to invalidate something like 50,000 of their DNS-challenge based certs because of a bug in their system, and they gave companies like mine only 24 hours to renew them before invalidating the old ones…
My employer had an EV cert for years on our primary domain. The C-suites, etc. thought it was important. Then one of our engineers who focuses on SEO demonstrated how the EV cert slowed down page loads enough that search engines like Google might take notice. Apparently EV certs trigger an additional lookup by the browser to confirm the extended validation.
Once the powers-that-be understood that the EV cert wasn’t offering any additional usefulness, and might be impacting our SEO performance (however small) they had us get rid of it and use a good old OV cert instead.
If you have ssh open to the world then it’s better to disable root logins entirely and also disable passwords, relying on ssh keys instead.
Port 22 is the default SSH port and it receives a TON of malicious traffic any time it’s open to the whole internet. 20 years ago I saw a newly installed server with a weak root password get infected by an IP address in China less than an hour after being connected to the open internet.
With all the bots out there these days it would probably take a lot less time if we ran the same experiment again.
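The hardening described above boils down to a few directives in sshd_config. A minimal sketch, assuming a stock OpenSSH server at the usual path (adjust for your distro):

```
# /etc/ssh/sshd_config
PermitRootLogin no                # no direct root logins at all
PasswordAuthentication no         # keys only, so brute-forcing passwords is moot
PubkeyAuthentication yes
KbdInteractiveAuthentication no   # close the keyboard-interactive side door
```

Reload sshd after editing, and keep your current session open while you verify key-based login still works from a second terminal, so you don’t lock yourself out.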
This reminded me of a glass artist named Josh Simpson who is known for his glass spheres he calls “planets” that have amazingly complex scenes in them. For over two decades he’s run what he calls the “Infinity Project,” where he encourages people to hide them in places where folks are unlikely to find one. If you submit a proposal to him that he likes then he’ll send you two of his smaller planets, one for you to hide and one to keep for yourself.
Well OPSEC is the stated cause. Who knows how the person was initially identified and tracked. For all we know he was quickly identified through some sort of Tor backdoor that the feds have figured out, but they used that to watch for an unrelated OPSEC mistake they could take advantage of. That way the Tor backdoor remains protected.
Exactly. Tor was originally created so that people in repressive countries could access otherwise blocked content in a way that couldn’t be easily traced back to them.
It wasn’t designed to protect the illegal activities of people in first-world countries, where dozens of law enforcement agencies have teams of computer forensics experts with demonstrated experience tracking down users of services like Tor, Bitcoin, etc.
Oh there are definitely ways to circumvent many bot protections if you really want to work at it. Like a lot of web protection tools/systems, it’s largely about frustrating the attacker to the point that they give up and move on.
Having said that, I know Akamai can detect at least some instances where browsers are controlled as you suggested. My employer (which is an Akamai customer, and why I know a bit about all this) uses tools from a company called Sauce Labs for some automated testing. My understanding is that our QA teams can create tests that launch Chrome (or other browsers) and script their behavior to log into our website, navigate around, test different functionality, etc. I know that Akamai can recognize this traffic as potentially malicious because we have to configure the Akamai WAF to explicitly allow this traffic to our sites. I believe Akamai classifies this traffic as a “headless” Chrome impersonator bot.
When any browser, app, etc. makes an HTTP request, the request consists of a series of lines (headers) that define the details of the request, and what is expected in the response. For example:
GET /home.html HTTP/1.1
Host: developer.mozilla.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:50.0) Gecko/20100101 Firefox/50.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer: https://developer.mozilla.org/testpage.html
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Cache-Control: max-age=0
The thing is, many of these headers are optional, and there’s no requirement regarding their order. As a result, virtually every web browser, every programming framework, etc. sends different headers and/or orders them differently. So by looking at what headers are included in a request, the order of the headers, and in some cases the values of some headers, it’s possible to tell if a person is using Firefox or Chrome, even if you use a plug-in to spoof your User-Agent to look like you’re using Safari.
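To make the idea concrete, here’s a minimal sketch of header-order fingerprinting. The two header lists below are illustrative stand-ins, not exact captures of real browser profiles:

```python
# Sketch: two clients claim the same (spoofed) User-Agent, but the set
# and order of their other headers still tells them apart.

def header_fingerprint(headers):
    """Fingerprint = just the header *names*, in wire order."""
    return tuple(name.lower() for name, _ in headers)

SPOOFED_UA = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) Safari/605.1.15"

# A Firefox-like client (header set/order is illustrative)
firefox_like = [
    ("Host", "example.com"),
    ("User-Agent", SPOOFED_UA),
    ("Accept", "text/html,application/xhtml+xml"),
    ("Accept-Language", "en-US,en;q=0.5"),
    ("Accept-Encoding", "gzip, deflate, br"),
    ("Connection", "keep-alive"),
]

# A scripted client that copied the User-Agent but kept its own defaults
script_like = [
    ("Host", "example.com"),
    ("Connection", "keep-alive"),
    ("User-Agent", SPOOFED_UA),   # spoofed, but everything around it differs
    ("Accept", "*/*"),
]

# Same User-Agent, different fingerprints:
print(header_fingerprint(firefox_like))
print(header_fingerprint(script_like))
```

A real detector compares those name tuples (and header values) against a library of known browser profiles; any mismatch with the claimed User-Agent is a red flag.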
Then there’s what is known as TLS fingerprinting, which can also be used to help identify a browser/app/programming language. Since so many sites use/require HTTPS these days, it provides another way to collect details about an end user. Before the HTTP request is sent, the client & server have to negotiate the encryption to use. Similar to the HTTP headers, there are a number of optional encryption protocols & ciphers that can be used. Once again, different browsers, etc. will offer different ciphers, in different orders. The TLS fingerprint for Googlebot is likely very different from the one for Firefox, or for the Java HTTP library, or the Python requests package, etc.
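You can peek at one ingredient of a TLS fingerprint yourself: the ordered cipher suite list your local TLS stack offers. Real fingerprints (e.g. JA3) also fold in the TLS version, extensions, curves, etc.; this is just a sketch:

```python
# Hash the ordered cipher list this Python/OpenSSL build would offer a
# server. Different clients produce different lists, hence different hashes.
import hashlib
import ssl

ctx = ssl.create_default_context()

# The ordered cipher suites this client would advertise in its ClientHello.
offered = [c["name"] for c in ctx.get_ciphers()]

# Collapse the ordered list into a stable fingerprint (JA3 also uses MD5).
fingerprint = hashlib.md5(",".join(offered).encode()).hexdigest()

print(f"{len(offered)} ciphers offered, fingerprint {fingerprint}")
```

Run the same thing from Java, curl, or a browser’s ClientHello and you’ll get a different list, which is exactly what servers key on.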
On top of all this Akamai uses other knowledge & tricks to determine bots vs. humans, not all of which is public knowledge. One thing they know, for example, is the set of IP addresses that Google’s bots operate out of (Google publishes its crawler IP ranges). So if they see a User-Agent identifying itself as Googlebot, they know it’s fake if it didn’t come from one of Google’s IPs. Akamai also occasionally injects JavaScript, cookies, etc. into a response to see how the client behaves. Lots of bots don’t process JavaScript, or only support a subset of it. Some bots also ignore cookies, and others even modify cookies to try to trick servers.
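A toy version of the JavaScript-challenge idea (not Akamai’s actual mechanism, which isn’t public): the server embeds a small computation in the page and only trusts clients that send back the computed value, typically via a cookie.

```python
# Toy JS-challenge check. A real browser executes the served script and
# returns the result; a bot that ignores <script> tags never does.
CHALLENGE = "7 * 191"        # would be served inside a <script> tag
EXPECTED = str(7 * 191)      # the value a JS-capable client computes

def looks_like_browser(cookie_value):
    """Did the client run the JavaScript and echo back the answer?"""
    return cookie_value == EXPECTED

print(looks_like_browser("1337"))  # a client that executed the JS
print(looks_like_browser(None))    # a bot that never ran the script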
It’s through a combination of all the above plus other sorts of analysis that Akamai doesn’t publicize that they can identify bot vs human traffic pretty reliably.
Exactly. The only truly effective way I’ve ever found to block bots is to use a service like Akamai. They have an add-on called Bot Manager that identifies requests as bots in real time. They have a library of over 1,000 known bots and can also identify unknown bots built on different frameworks, bots that impersonate well-known bots like Googlebot, etc. This service is expensive, but effective…
Not easily. The scammer likely has your current address & contact info, but knows nothing about your history.
To confirm your identity when you contact these reporting agencies, they will use details from your credit history, asking detailed questions the scammer likely won’t know the answers to.
They’ll throw 3 or 4 such questions at you that you’ll have to answer correctly. They might involve places you used to live, banks you’ve had accounts with, etc. The chances of a scammer with your SSN knowing all these details about you are pretty tiny.
The credit monitoring companies have your up-to-date (and verified) contact information from when you put the freeze in place. Now, should a third party try to open an account, etc. in your name, it should be blocked from happening and the credit monitoring company should contact you.
If a scammer tries to unfreeze or otherwise modify your account with them they should also contact you.
If/when they contact you or you request your account be unfrozen, then they’ll use old credit history to confirm your identity. These are a series of three or four random questions that a scammer is unlikely to know. For example they might ask you what kind of car you purchased in 2005, then give you 4 options, like Ford, Honda, Jaguar, or BMW, plus a “none of the above” option. Then they might ask you which of the following street addresses you used to live at, and list 4 seemingly random addresses, one of which you might have lived at.
Years ago I worked at a company where they based server root/admin passwords on song lyrics. The person who came up with it clearly liked classic rock. I still remember at least one of them:
4ThoseAboutToRockWeSaluteYou!
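Purely as a guess at how a scheme like that might have worked (the substitution rule and function name here are my own invention, not what that company actually used):

```python
# Hypothetical lyric-to-password scheme: CamelCase the words, swap a few
# for leetspeak digits, and tack on punctuation.
SUBS = {"for": "4"}  # the kind of substitution the scheme seemed to use

def lyric_password(lyric, suffix="!"):
    """Turn a song lyric into a password like the one above."""
    words = [SUBS.get(w, w.capitalize()) for w in lyric.lower().split()]
    return "".join(words) + suffix

print(lyric_password("for those about to rock we salute you"))
# -> 4ThoseAboutToRockWeSaluteYou!
```

Memorable, but note that lyric-based passwords are well known to attackers; password-cracking wordlists include song lyrics, so this is a fun anecdote rather than a recommended practice.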
robots.txt is 100% honor based. Well-known bots like Googlebot, Bingbot, etc. definitely honor it. But there are also plenty of bots that completely ignore it.
I would hope the bots used to collect LLM training data honor it, but there’s no way to know for certain. And all it really takes is one bot ignoring it for the content of your website to end up in a random set of training data…
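The honor-system mechanics are easy to see with Python’s standard library robots.txt parser (the rules and bot names below are made up for illustration):

```python
# A well-behaved crawler checks robots.txt before fetching; nothing
# enforces the answer. Python's stdlib parser demonstrates the check.
from urllib import robotparser

rules = [
    "User-agent: *",
    "Disallow: /private/",
    "",
    "User-agent: Googlebot",
    "Allow: /",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("Googlebot", "https://example.com/private/page"))
print(rp.can_fetch("SomeRandomBot", "https://example.com/private/page"))
# Nothing *stops* SomeRandomBot from fetching the page anyway --
# can_fetch() only reports what the site asked for.
```

A polite crawler calls `can_fetch()` before every request; an impolite one simply never asks, which is the whole problem.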
Try using “curl -A” to specify a User-Agent string that matches Chrome or Firefox.
Only some VOIP calls are routed over the internet. Most calls, while digital, are still routed over the proprietary networks owned & operated by the major telcos.
The internet is a packet switched network, which means data is sent in packets, and it’s possible for packets to end up at their destination out of order. Two packets sent from the same starting point to the destination could theoretically go over completely different routes due to congestion, etc. The destination is responsible for putting the packets back together properly. Packets can also get delayed if other higher priority packets come along. It’s for reasons like these that both voice & video on the internet can occasionally freeze, stutter, etc. Granted the capacity & reliability of the internet has improved greatly over time so these things happen less and less often. But the fact still remains that a packet switched network isn’t optimal for real time communication.
Telephone networks on the other hand are circuit switched networks. When you are talking to somebody on a telephone then there is a dedicated circuit path between you and the other person. Each piece of the path between the two of you has a hard limit of the number of simultaneous calls it can handle, which ensures it always has the capacity to serve your particular call. If a circuit between two points is maxed out then the telephone exchange may try to route your call via a different path, or you may just end up with a busy signal.
Packet switched networks also don’t have those hard limits that circuit switched networks do. So packet switched networks can get overwhelmed (think DoS attacks) which can also lead to outages.
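The reassembly job the receiving end has to do can be sketched in a few lines (simplified sequence numbers, hypothetical function names):

```python
# Sketch of packet reassembly: packets carry sequence numbers, may
# arrive out of order, and the receiver sorts them back into place.
import random

def packetize(message, size=4):
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Receiver side: sort by sequence number, then rejoin the payloads."""
    return "".join(payload for _, payload in sorted(packets))

message = "voice sample 0001"
packets = packetize(message)

random.shuffle(packets)  # the network delivers them out of order

print(reassemble(packets) == message)  # True
```

The sort always recovers the message, but waiting for stragglers before playing the audio is exactly the buffering delay and jitter that make packet networks awkward for real-time voice.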