I miss good and responsive support. I mostly deal with Google GCP and it's soul-draining. They always take close to 24 hours to reply to any comment; even if you reply immediately, you won't hear from them until the next day. They cherry-pick one line to answer, and their replies often contradict the rest of the facts presented in the issue report. They shield product engineers from you, acting as intermediary, so now there are two 24-hour hops each way. When a product engineer also gives a nonsense answer, support doesn't enter into discussion with them on your behalf to get something reasonable; no, they just pass it along.
For the price they charge for support I'd expect more quality.
There is very little incentive for engineers at big tech to spend a lot of time on support. In fact it takes time away from the projects that tie to their performance review.
Internally big tech would need to prioritise customer satisfaction as some OKR to incentivise engineers - but this doesn’t happen for some reason. As a result you’re at the whim of whether the engineer cares enough to spend time on your case.
Note in my experience AWS has great support - so their internal incentives seem to be aligned with customers.
This is not a difficult problem to solve, if the corporation wants it solved.
We managed it by assigning someone to support on a rotating weekly basis. During that week, project hours missed due to support tasks were acceptable. If you had a critical task due and it was your support week, you could swap support with someone, or we'd reassign the task. But with Software being the "last line of defense" after Field Service and Product Technical Support, we couldn't just ignore it.
There are solutions. You just have to act like you want it solved.
I honestly think this is the best way to do it. I know a lot of software developers don't want to do this and I understand why.
But there's nothing like an engineer seeing the struggles people have for themselves and choosing to fix it so they don't have to field that damned question anymore (I'm saying this tongue-in-cheek).
Plus, you know as well as I do, the more people between your customers and your engineers, the more agendas get injected into the mix.
The company had a voluntary program (for a while) where you got to shadow specific local customers for a day. It was unbelievable. I had been there about 2 hours before I had a long list of potential product ideas just from watching this guy struggle. Not just with our product, but mainly with the interaction between all the different machines and instruments he had to use to get his work done.
I see comments like this on HN frequently and I wonder what is going on with BigTech. I guess I wouldn't make it. Support is fun. You're solving mysteries and (hopefully) fixing them within the constraints of the design.
Level 2 support is fun. Level 1 support is "dealing with an onslaught of people whose days have gone wrong, and they're sure it's your fault".
It's the performance culture at these companies. An engineer gets evaluated on metrics they move the needle on. At very OKR centric companies like Google, these metrics have to tie into high level business goals like revenue growth. I imagine it is pretty difficult to match customer support metrics to these, hence it's not prioritised by engineers or the company.
Support engineers are usually a completely different team and unrelated to engineering teams.
Support engineers are first or second line support, and typically limit their scope to customer errors/pebkacs. If you find a legitimate bug, it is typically escalated to the SWE team's oncall who maintain the service. For example, I found a legitimate bug in ALBs (new at the time) and was talking directly with the SWEs who built ALBs.
This makes sense if you consider the size of the product engineering team vs the amount of customers out there. For every engineer there's probably hundreds or thousands of customers. If they had to engage immediately with every support case, there would be zero progress on any other job.
The problem actually comes from the fact that big tech is becoming increasingly cheap about building good support organizations. Experienced support engineers are fired and replaced with outsourced, low-cost, inexperienced personnel. In most cases, issues can be resolved or worked around with the help of a support engineer who has access to some extra knobs. When those engineers are removed and replaced with people who act like a pipe for cat to send the customer's stdout to product engineers, you get what you describe.
I recently had a conversation with a large tech company that went basically: "We saved a ton of money by replacing all of tier 1 support with AI chatbots, so that the problem can be identified and routed to the correct tier 2 person to triage before it goes to a tier 3 person if it needs to. Next we're replacing all of the tier 2 people, and the only time we have to get an expensive human involved is for the weirdest/most difficult problems." My reply: "Where do tier 3 people come from if they have no experience at tier 1 or 2?" Cue the sound of crickets...
The end goal is to only offer chat bots and thoughts and prayers
That perfectly described my team. Everyone hated on-call. We shared an on-call rotation and were responsible for multiple systems that few people understood. Customers have questions, and you just don't know the answers, and nothing is documented. Every week you'd be on call, there'd be 5-10 support cases still open from the previous week. You have no context and no docs, so you give a BS answer and move on to the next customer.
In the case of GCP, they outsource their support services to smaller companies who (as I’ve been told) have no access to any details of the customer’s environment, and may sometimes escalate to a contact point within Google if the issue is not solved after some level of troubleshooting.
The list of companies providing support for GCP can be found in their subprocessors list.
I’ve had them debug and fix a cloud function for me.
I thought that was above and beyond, but honestly I don’t know what support contract my company had with GCP.
fwiw, in the “Port” directive in sshd_config you can just add another port number declaration and SSHd will listen on both ports.
Port 22
Port 80
Much cleaner than iptables magic; though I have done similar iptables redirects before, it is almost always a bad idea. :)

If the server has a strict iptables policy for incoming packets, you would still need to touch iptables to allow the second port. So if you need iptables anyway, why not just redirect without editing sshd_config? The fewer modifications, the better the chance you won't forget to revert them.
An advantage of “iptables magic” is that the service doesn’t ever have to run as root: I’ve done this before to great success to have a web server be accessible on standard ports even though it was running as an unprivileged user.
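Roughly like this (interface and port numbers are whatever applies to your setup):
# Clients connect to privileged port 80; the kernel rewrites it to the
# unprivileged port 8080 where the non-root web server actually listens
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080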
There's a capabilities thing you can do to add the permission to a process without changing users. You can also have it bind to the port as root then drop permissions afterwards.
Yeah, but requiring the process to start as root adds a window of vulnerability and the process has to drop privileges correctly.
You could also do it in the load balancer/router.
Run this as root once:
setcap 'cap_net_bind_service=+ep' /path/to/executable
Now, the process doesn't need to start as root and drop privs.

Systemd can set up the privileges before starting the process.
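E.g. a minimal unit-file sketch (service name, user, and binary path are hypothetical):
[Unit]
Description=Example daemon binding a privileged port without root

[Service]
ExecStart=/usr/local/bin/mydaemon
User=mydaemon
# Grant only the single capability needed to bind ports below 1024
AmbientCapabilities=CAP_NET_BIND_SERVICE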
this is actually dangerous advice.
The reason privileged ports require root is so that they cannot be easily intercepted by userland processes.
The entire reason for the root requirement is this; it's expected that after your port assignment you drop your privileges to a lesser user. But requiring root to bind is a feature, not a bug.
If you do not have this then intentionally crashing the process for a user and rebinding the port as a standard user (even `nobody`) is possible.
This means if you break into someones mail server and run as the mail user, you would be able to bind ssh ports (if they are not privileged ports <1024).
Of course your mail server should be running as the `mail` user or some equivalent, because only the binding of the ports should be done on startup as root, and then it should drop into the mail user.
Only if you can kill the ssh server that's already listening on the port.
Yes, but you can do so as a regular user in some cases.
Security is an onion.
Possible that this was a feature added after this event?
I remember doing some relatively complex stuff with SSH config 15 or 20 years ago: IP filtering, different users having different chroots, IP forwarding rules based on which users connected, and rules around what SSH clients / protocols were allowed. Part of that was also defining custom ports. All of which was just defined in sshd_config.
None of this was new stuff back then. It just wasn't well blogged about (in fact it was so poorly written about that my very first blog post was on exactly this topic; that blog is long gone now though). However, if anyone took the time to read the man pages, you'd see all the functionality was already baked into OpenSSH.
Multiple ports have been supported for more than 20 years.
possible, but I see recommendations along these lines going back to 2011
https://serverfault.com/questions/284566/configuration-for-m...
I feel like the blink tag part of the story dates it to before 2011. (I just tested to see if blink tags still worked. They did not.)
Unless it’s anonymized, Hurricane Jeanne dates it to 2004
That fits the "almost 20 years ago" line in the blog post.
That's hilarious. I missed that line, and the hurricane-name line too. But blink tags, those stick with me.
Honestly I also wonder now. I've read through the OpenSSH release notes, which are one single HTML page with all releases since 2000, but unfortunately I couldn't quickly spot when this change was introduced.
Do you believe it to be arcana for new sysadmins, or some technical reason it is a bad idea?
It's less surprising and the obvious way to do it. iptables is very powerful, but not needed in this case.
hmmm... if port NNNN got through, maybe it went to something already listening on NNNN? but an iptables redirect could fix that.
just idle speculation.
I'm pretty sure "Port" worked with older ssh servers, maybe. openssh was only 4 years old 20 years ago.
EDIT: hmmm what about needing privileges or an selinux config change?
Unlikely. While selinux is more than 20 years old, it was merged into the mainline kernel a few years after its initial release and it would have taken a while longer for that kernel to trickle down to distros, and then sysadmins installing.
Even when it was trickled down to the distros, it was unfortunately very much in vogue to disable it...
Hell even in current year I occasionally come across guides that suggest disabling it... great way to get ctrl-w'd smh
Why would a hurricane need SSH?
Probably for cloud migration.
… I’ll see myself out.
Brings a whole new meaning to TCP.
Tornado Cloud Protocol.
I prefer the Unending Deluge Protocol.
Part of a series on DNS, Do Natural Survival
Why would a hurricane need electric?
Why would a hurricane want to rock me?
IPv6 migration.
The hurricane needed SSH so that it could troubleshoot the customer's problem and get them unstuck.
People who run Wi-Fi hotspots that firewall port 22 should be tied to rocks and tossed in a river.
Also Wi-Fi hotspots that hijack DNS.
Not being too familiar with `iptables` myself, I'd love to see the magic invocation they used. Anyone have any idea what that would have looked like?
I'm guessing something like this would work
# Redirect port 8080 to local port 22
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 8080 -j REDIRECT --to-port 22
Something on the order of
iptables -t nat -I PREROUTING -p tcp --dport NNNN -j REDIRECT --to-ports 22
I expose sshd on :443 for use in airports and hotels and other places with awful firewalls.
Worked great until the hotel where such connections also had a maximum duration!
Still works great, what with `autossh somehost.example.com -p443 -t -- screen -xR` or somesuch.
I've exposed SSH over tor in the past, so I can at least get into the box to set up a workaround like this if needed. (usually ddns/DMZ setup being broken)
Never had many login attempts that weren't me through it, but had fail2ban installed just in case.
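For reference, the setup is roughly this (paths and the .onion name are placeholders, and the client side assumes OpenBSD netcat plus Tor's default SOCKS port):
# Server torrc: publish the local sshd as a hidden service
HiddenServiceDir /var/lib/tor/ssh/
HiddenServicePort 22 127.0.0.1:22
# Client ~/.ssh/config: reach it through the local Tor SOCKS proxy
Host rescue
    HostName youronionaddresshere.onion
    ProxyCommand nc -X 5 -x 127.0.0.1:9050 %h %p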
Add ServerAliveInterval and ServerAliveCountMax to your SSH client.
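E.g. in ~/.ssh/config:
Host *
    # Probe the server every 60 seconds over the encrypted channel...
    ServerAliveInterval 60
    # ...and disconnect after 3 unanswered probes
    ServerAliveCountMax 3
That keeps NAT and firewall state from idling out, though it won't defeat a hard cap on session duration.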
This works great for the low end of security solutions.
At bigger chains, there is some DPI security solution involved which will easily spot the difference between HTTPS and SSH.
socat is better for this. It plumbs any connection to any connection.
Listen on port 8080 and route to some local 22?
socat TCP-LISTEN:8080,fork,reuseaddr TCP:[somelocalip]:22
This lets you be very explicit in watching this run and killing when done.
Socat also lets you route networks through old serial ports, log all data going over a connection to a file, and even join completely different protocols.
A fun past project based on socat: serial port -> socat to TCP out -> socat listening on another computer -> serial port out. Basically this created a serial port that worked over a satellite link for a customer doing some remote monitoring, so they could set an alarm if something failed (a lot of equipment only has serial connectivity for status).
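In rough strokes (device names and the port number are made up):
# Remote side: expose the instrument's serial port as a TCP listener
socat /dev/ttyS0,raw,echo=0 TCP-LISTEN:5555,reuseaddr
# Monitoring side: present that TCP stream as a local virtual serial port
socat PTY,link=/tmp/vserial,raw,echo=0 TCP:remote.example.com:5555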
Socat would make the connections to ssh show up from the machine running socat, rather than the origin, which might have auditing/logging implications.
iptables with a dnat redirect addresses that.
but socat is a process that needs to run, so someone needs to configure it to start at boot. that's what the iptables solution avoids.
Yeah, the quick-and-dirty socat way can be an advantage or a disadvantage. I like it for the times I just need to quickly get something through as a one-off, exactly because it runs when I run it. I feel like hacking some port mapping up during a disaster is more of a socat thing than an iptables thing, but hey, if it works it works.
I hate public WiFi that blocks arbitrary ports. The internet is not HTTP!!
At the same time free public Wifi is just that, a "we do what we can and try to keep the infra safe at low cost".
There is a lot of tooling to filter out bad behavior over HTTP. When it comes to other protocols, not so much. Much easier to block other ports than end up with your IP range on a block list.
What I'd like to see is public Wi-Fi set up with something akin to "HTTP/HTTPS - wide open" and "all other ports, you can connect to 1-5 machines an hour" or something.
Blocks useful access for worms, Trojans, etc., but still lets you get out once.
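That sort of policy is roughly expressible with iptables' hashlimit match; an untested sketch for a guest network's FORWARD chain:
# Let established traffic and web traffic through freely
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -p tcp -m multiport --dports 80,443 -j ACCEPT
# Everything else: each client gets a handful of new connections per hour
iptables -A FORWARD -m conntrack --ctstate NEW -m hashlimit --hashlimit-upto 5/hour --hashlimit-mode srcip --hashlimit-name guestnew -j ACCEPT
iptables -A FORWARD -j DROP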
Trojans only need to connect to one server (C&C), and they usually use HTTP(S). Worms are almost nonexistent nowadays. Spambots are a concern (some of which could be considered a worm if you squint hard enough), but they only need to connect to two servers (the configured SMTP server and C&C).
I use mosh.
It uses high ports that shouldn't be blocked. Usually it is already running when I open the laptop, which saves a lot of time. As an added benefit, it sometimes even works when the Wi-Fi connection is not yet authorized at the portal, because that often only blocks TCP and not UDP.
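If the network only opens a narrow UDP range, you can pin the port the server uses (the port number here is arbitrary):
# Note the initial handshake still goes over plain ssh, so that has to
# get through once before the UDP session takes over
mosh -p 60001 user@host.example.com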
Another option is an ssh/https multiplexer like sslh: https://github.com/yrutschle/sslh
It allows you to listen for https and ssh traffic on a single port.
I found this recently too: it's a bit tricky to set up, but it works very well for my use case and I did not notice any performance issues.
Basically your services listen on localhost:port, sslh listens on hostname:port, inspects the first bytes of each connection, and forwards it (transparently) to the right localhost:port.
If you put everything on port 443 it's very unlikely they will ever be blocked.
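A minimal invocation looks something like this (addresses and the web server's local port are placeholders; older sslh versions spell --tls as --ssl):
# Sniff each incoming connection on 443 and route it by protocol
sslh --user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --tls 127.0.0.1:8443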
I think ports < 1024 are not so arbitrary.
Don't you know that ssh is only used by hackers?!?!!!1
Good reason to have a backup means of connecting, like ssh in https. (Or, at the time of the article, since it's possible even https was blocked, ssh over dns.)
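Where the network at least offers an HTTP proxy that allows CONNECT, one way to do ssh-in-https is to tunnel through it, e.g. with proxytunnel (hostnames are placeholders):
ssh -o ProxyCommand='proxytunnel -p proxy.example.com:8080 -d %h:22' user@home.example.com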
At Uber someone (reportedly) got TCP or UDP (can't remember which) over SMS. Always intrigued me.
I haven't heard of that, but here's IP over WhatsApp. Rather convenient for flights that only have WhatsApp free.
Commas, so important, are so very, infrequently, and - unnecessarily - sparingly used.
Poor hurricane needing ssh
The ability to just switch -I to -D to delete the rule is something I miss in nftables. I used that all the time.
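For comparison (the handle number is whatever `nft -a` prints, and table/chain names depend on your ruleset):
# iptables: delete by repeating the exact rule with -D instead of -I
iptables -t nat -I PREROUTING -p tcp --dport 8080 -j REDIRECT --to-ports 22
iptables -t nat -D PREROUTING -p tcp --dport 8080 -j REDIRECT --to-ports 22
# nftables: list rules with handles, then delete by handle
nft -a list chain ip nat prerouting
nft delete rule ip nat prerouting handle 42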
> Whether they were trying to be KIBO or B1FF, I may never know.
Can someone explain this reference to me? I didn't get it.
I had the exact same question, and I'm happy to share how I got an immediate, complete and clear answer: https://chat.openai.com/share/8813ec1b-f64d-4dad-9b80-a8d3a2...
Even today, while traveling through the US, there are lots of wifi access points that won't let you ssh. Sometimes I just fire up my VPN and then I can ssh again.
Was thinking about this... You can easily just connect to a VPN and get around the port restrictions.
unless VPN ports are blocked too
Web based ssh could have solved that issue, e.g. [0].
Cockpit offers a lot of admin functionality, including a web shell (port 9090 by default).
MeshCentral has a web shell too.
I wonder how long until customer support AI can solve issues like this, or is this an edge case that will always require human intervention?
Giving a customer service AI the ability to configure firewall rules seems problematic.
Maybe eventually they will be less susceptible to social engineering, but I don't have that confidence yet.
Is it still social engineering if you're talking to an AI?
Prompt injection is basically the AI version of social engineering, isn't it?
Social engineering has limits and each individual has unique vulnerabilities. It's not possible to call in and speak a single sentence compelling any agent who hears it to immediately burn the office building down.
Some human vulnerabilities are surprisingly common. A lot of scammers follow scripts and formula. Of course, to coax arson would be difficult. But life-devastating incidents like emptying out the entire bank account, leaking secrets, causing self-harm, etc. are not unheard of.
Some support phone numbers automatically offer to waive a late fee if your account is generally in good standing.
Sort of like Stack Overflow with a "run this solution" button.
If you can sneak a <blink> tag into the ticket system, you likely can sneak in a <script> or <iframe> tag as well... I'm sure input sanitization was already preached back then, but ignored by many web developers...
My interpretation is that the frontend added the <blink> tag when rendering a critical-priority ticket, no injection needed.
No, it was 20 years ago. A lot of projects were really blasé about html injection back then.