I know people who are still using the router their ISP gave them, and they’ve never even changed the default password. The thing is, they don’t even know it can be updated, let alone that there might be security vulnerabilities. To most users, if the internet works, that’s all that matters.
Oh awesome, this is using my Frida scripts! These: https://github.com/httptoolkit/frida-interception-and-unpinn....
Nice project, great to see the scripts doing good work in the wild. If you needed any extra additions or tweaks to get them working, I'd love to hear about it.
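For anyone who hasn't used them: the scripts are usually loaded with Frida's CLI or its Python bindings. A rough frida-python sketch of the pattern (the package id and script filename below are placeholders/assumptions, not from the article, so adjust for your target and the repo's actual layout):

import sys
import frida

PACKAGE = "com.tplink.iot"                        # assumed Tapo app package id
SCRIPT_PATH = "android-certificate-unpinning.js"  # assumed filename from the repo; check its layout

device = frida.get_usb_device(timeout=5)
pid = device.spawn([PACKAGE])        # spawn suspended so hooks land before app code runs
session = device.attach(pid)

with open(SCRIPT_PATH) as f:
    script = session.create_script(f.read())
script.load()

device.resume(pid)                   # let the app continue with the unpinning hooks installed
sys.stdin.read()                     # keep the session alive while intercepting traffic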
HTTP Toolkit is fantastic, great job Tim!
HTTP Toolkit is one of the best pieces of software I have used. I have used mitmproxy, Proxyman and Charles Proxy, and HTTP Toolkit is the best, and it's open source too.
> SIDENOTE: If you want 2 way audio to work in frigate you must use the tapo:// go2rtc configuration for your main stream instead of the usual rtsp://. TP-Link are lazy and only implement 2 way audio on their own proprietary API.
Annoyingly, when this is in use I can't use ONVIF, which seems like the only way to pan and tilt the camera using open tools. So if I want two-way audio and also want to control the camera, I have to stop the process reading the tapo:// stream, start an ONVIF client and rotate, turn off the ONVIF client, and start streaming via tapo:// again.
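For reference, the ONVIF pan/tilt side can be scripted, e.g. with python-onvif-zeep. A rough sketch (the IP, port, credentials and velocity here are assumptions; Tapo cameras commonly expose ONVIF on port 2020 using the camera-account credentials, but check yours):

import time
from onvif import ONVIFCamera  # pip install onvif-zeep

# Assumed values: camera IP, ONVIF port, camera-account credentials
cam = ONVIFCamera("192.168.1.23", 2020, "tapoadmin", "camerapass")
media = cam.create_media_service()
ptz = cam.create_ptz_service()
token = media.GetProfiles()[0].token

# Pan right at 40% speed for one second, then stop
ptz.ContinuousMove({"ProfileToken": token,
                    "Velocity": {"PanTilt": {"x": 0.4, "y": 0.0}}})
time.sleep(1)
ptz.Stop({"ProfileToken": token})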
IoT security is generally terrible, but the fact that consumer routers are essentially unaudited black boxes processing all your network traffic is genuinely concerning. Most people have no idea their router firmware hasn't been updated in years and is probably running known CVEs. The supply chain trust model for networking hardware is broken.
Most people only care about how strong the signal is when buying a router, but almost no one checks if the firmware is outdated, or bothers to change the default password or disable remote access. And manufacturers rarely remind you either, so over time it just becomes a hidden risk.
>IoT security is generally terrible
I think IoT demands a rethink of security.
Like sometimes I want IoT devices to just bloody connect, and if I have to use a published exploit that circumvents online only requirements I will do it.
But some people do genuinely have use cases for cloud speaking IoT stuff.
Really, I think the device should ask at first run, then burn in your response and act only in the selected mode. If you want it to require Cloud MFA, that's an option; if you want to piss Python at your lightbulb to make it blink, then that's where it lives permanently.
There are countless routers between you and your destination which you can't audit anyway. End devices have long since considered routers to be compromised and have everything verified and encrypted in transit. So unless your router is participating in a DDoS or mining bitcoins, it doesn't really matter how secure it is.
Many IoT devices (or Windows when the LAN network location is set to “Private”) expose a wider surface area to local network addresses. Having a competent firewall on your residential router is still useful, especially for those that have no idea how to configure their endpoints securely.
Comparing a residential router to a network operator’s router is spurious: those routers don’t perform any sort of filtering for the public internet traffic flowing through them.
Most people are using routers given to them by (and configured by) their ISP... so really they are black boxes connected to an upstream black box.
I am always surprised by how many people give me their ISP chosen router name and ISP chosen password when I connect to their WiFi. I don't want to give my ISP that much control.
Are you really surprised though or are you talking about the HN reading subset of your "many people"?
Coz I would absolutely 100% not be surprised for your average consumer.
For your average HN reader I would hope they treat whatever their ISP gave them as just some dumb "switch" type device that sits outside their trusted network and handles nothing but encrypted traffic. Like my ISP's device definitely does have WiFi and such, which I disabled. I treat it as a bridge / modem and it's definitely not part of my "inner circle". Hasn't been in 25 years.
The stuff on the shelf, sure, but you can always go 'prosumer-grade' like Ubiquiti or Mikrotik for hardware that actually receives timely updates and has competently written firmware.
Ubiquiti is awful, it's a cloud-centric ecosystem. The best "prosumer-grade" stuff is probably OpenWrt. If you need more power, OPNsense or a plain Linux distro on an x86 machine.
Not entirely true. There's a local admin option, where your Ubiquiti devices never see the internet (well, except your gateway). You can then connect and admin the whole thing remotely via your own VPN. It's quite nice, actually.
100% this.
IOT - "S" stands for "Security"!
The password for my IoT wifi is "TheSInIoT"
;)
The solution is pfsense
Or openWRT.
The BSD-based distributions sure are powerful, but with the power/heat budget to match.
I love me some OpenWRT but updating it has always been a risky chore.
Check out attended sysupgrade
Actually, pfSense kind of has a shitty reputation in the FOSS community and OPNsense is preferred.
But I don't like the limitations of BSD systems in terms of hardware compatibility and performance, so I build my router using a plain Linux distro (Debian).
The solution is iptables.
The solution is nftables.
The solution is bpf.
The solution is emacs-m-x-butterfly-bpf.
Better go OPNsense
really like how this blog is written. a lot of writeups like this recently have been generated by an LLM, and it's quite distracting to read - this was a pleasant surprise. it strikes a good balance between technical and laid-back
(yes i know the cover image is AI-generated, that's incidental to the content)
I've been blocking bigger media files by default with uBlock Origin to avoid needless resource usage. Cover images are typically blocked, and they are usually useless anyway.
It's too bad people spend energy generating them now.
Somewhat related:
The Tapo C200 research project https://drmnsamoliu.github.io/ (https://news.ycombinator.com/item?id=37813013)
PyTapo: Python library for communication with Tapo Cameras https://github.com/JurajNyiri/pytapo (https://news.ycombinator.com/item?id=41267062)
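As a taste of what PyTapo gives you locally, a minimal sketch based on its README (host and credentials are placeholders; the user is typically the camera-account user, or "admin" with your cloud password, depending on firmware):

from pytapo import Tapo

# Placeholders: camera IP plus the credentials configured for local access
host = "192.168.1.23"
user = "admin"
password = "your-camera-account-or-cloud-password"

tapo = Tapo(host, user, password)
print(tapo.getBasicInfo())   # dumps model, firmware and other device info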
Are techniques like using Frida and mitmproxy on Android apps still going to be possible after the signing requirement goes into effect next year?
Overall: yes, but it will get much harder for apps which need attestation, which is sort of the point, for better or for worse. As far as I know you'll still be able to OEM unlock and root phones where it's always been allowed, like Pixels, but then they'll be marked as unlocked so they'll fail Google attestation. You should also be able to still take an app, unpack it, inject Frida, and sideload it using your _own_ developer account (kind of like you can do on iOS today), but it will also fail attestation and is vulnerable to anti-tampering / anti-debugging code at the application level.
So for people with any practical needs whatsoever (like banking): no.
At this point Android isn't meaningfully an open-source platform anymore, and it hasn't been for years.
On the somewhat refreshing side, they are no longer being dishonest about it.
I don't think any vendor should be solving for "I want to do app RE and banking on the same device at the same time;" that seems rather foolish.
These are sort of orthogonal rants. People view this as some kind of corporate power struggle but in this context, GrapheneOS, for example also doesn't let you do this kind of thing, because it focuses on preserving user security and privacy rather than using your device as a reverse-engineering tool.
There is certainly a strong argument that limiting third-party app store access and user installation of low-privilege applications is an anticompetitive move, but by and large, that's a different argument from "I want to install Frida on the phone I do banking on," which just isn't a good idea.
The existence of device attestation is certainly hostile to reverse engineering, and that's by design. But from an "I own my hardware and should use it" perspective, Google continue to allow OEM unlock on Play Store purchased Pixel phones, and the developer console will allow self-signing arbitrary APKs for development on an enrolled device, so not so much has changed with next year's Android changes.
> But from an "I own my hardware and should use it" perspective, Google continue to allow OEM unlock on Play Store purchased Pixel phones, and the developer console will allow self-signing arbitrary APKs for development on an enrolled device [...]
But that's not really using it, is it? If the process of getting access to do whatever I want on my smartphone makes it cease to be a viable smartphone, can you really count that as being able to use it?
It's like if having your car fixed by a third party mechanic made it not street legal. It is still a car and it does still drive, but are you really still able to meaningfully use it?
And before anyone jumps on my metaphor with examples of where that's actually the case with cars, think about which cases and why. There are modifications that are illegal because they endanger others or the environment, but everything else is fair game.
What I don't get is: if I am using my bank's website on Linux (with full root access), it's still almost the same as having the app on Android. The argument of "we lock it down to protect you" makes zero sense to me.
* Your bank (and Google) want to deal with as little fraud as possible.
* Market forces demand they provide both a website and an Android app.
* If both platforms are equally full of fraud, have the same features, and both have similar use, they cut out half the fraud even if they can only make one or the other fraud proof.
* But it isn't like that in reality: something more like 80% of their use and 90% of their fraud comes from mobile devices, and so cutting off that route immediately reduces their fraud load by the lion's share.
Ergo, locking down the app is still in everyone's best interest, before we even get into the mobile app having features the desktop one does not (P2P payments, check deposit, etc.)
And this isn't just a weird theory / ivory tower problem: Device Takeover banking fraud on Android is _rampant_ (see Gigabud/GoldDigger).
Why does most fraud come from locked down mobile devices and not open Windows/Linux PCs?
If it's true that 90% of fraud comes from mobile despite all of the restrictions, what that tells me is that locking down devices doesn't actually prevent fraud.
---
> before we even get into the mobile app having features the desktop one does not (P2P payments, check deposit, etc.)
I think it would be reasonable to disable those specific features on mobile while leaving the rest of the app accessible.
Actually, back when jailbreaking iOS was still actually feasible, I recall the Chase app doing exactly that. The app worked fine, but it wouldn't let me deposit checks, I had to go to a branch for that. A bit annoying, but I can mostly understand that one.
> If it's true that 90% of fraud comes from mobile despite all of the restrictions
Statistics on mobile vs. desktop banking will really shock you; the mobile usage penetration is easily well upwards of 90% in many markets. There's also a skewed distribution for fraud-vulnerable users and scenarios.
> I think it would be reasonable to disable those specific features on mobile while leaving the rest of the app accessible.
I agree with you in an idealist sense; it would be awesome to be able to use GrapheneOS and have 80% app functionality instead of 0% app functionality. I also completely understand why nobody does it; supporting what's probably <0.001% (if not lower) of legitimate users in exchange for development time and fraud risk isn't a particularly appealing tradeoff. If I were in a situation to advocate for such a trade-off, I probably would, but I don't think it's evidence of a sinister conspiracy that nobody does that.
> Statistics on mobile vs. desktop banking will really shock you; the mobile usage penetration is easily well upwards of 90% in many markets. There's also a skewed distribution for fraud-vulnerable users and scenarios.
But if my goal was to commit fraud, wouldn't I go to wherever it was easiest to commit fraud? The actual market penetration of each platform shouldn't matter.
It's usually done in bulk, so the overall payoff is the combination of value and number of targets, but the effort is typically sublinear with the targets. Something easier to attack but relatively low in number is not as juicy as something a bit harder (where the effort is mostly a one-off up-front rather than per target) but having many, many more targets.
They usually don't let you deposit checks via web app.
It's unclear what device attestation does here. You can print a fake check and take whatever picture you want. If it's using dead pixels or something as a device fingerprint, you get those dead pixels. You can also fake dead pixels, of course. Authenticating the phone's OS doesn't authenticate the camera, or what the camera's looking at. It's a signal, maybe, but the weak link in "a napkin with the right numbers and scribble on it is a money transfer" is probably not whether someone has root on the device that's taking a picture of the napkin.
GrapheneOS strongly recommends that you do not do it, but it will not stop you if you want to. You can root and leave your bootloader unlocked, or create a custom user-signed image with root support included. Plenty of user-written guides out there on how to do so.
Locking the bootloader is important as it enables full verified boot https://grapheneos.org/install/cli#locking-the-bootloader
> You can root and leave your bootloader unlocked
That's Google, not GrapheneOS.
Open source has nothing to do with hackability.
Firmware which requires updates to be signed with a manufacturer key can still be open source. As long as its code is available publicly, under a license which lets the user create derivative works, it meets the definition. You can still make a version of it that doesn't contain that check, you just can't install that version on the device you bought from the original firmware developer. Some FIDO keys (and I think Bitcoin wallets) do this.
I'm stuck on iOS for various reasons, but if I was on Android I could do without mobile banking in exchange for having root privileges. I don't entirely understand why this is such a big deal.
If e.g. Slack required attestation that would be a different story. I need that for work.
they leave that up to your organization: https://slack.com/help/articles/360042097113-Block-jailbroke...
It's not really going to be directly affected by that change I would expect. Most reverse engineering is on rooted devices & emulators anyway, which are already outside the bounds of those kinds of Google restrictions.
For the (less common) cases where you want to use a non-rooted device (e.g. using Frida by injecting it into the APK via gadget) it gets trickier, but I think in practice there will still be a way for developers to build & install their own APKs with developer mode enabled. This will be tightened, but removing that restriction would effectively make Android development impossible so it seems very unlikely - I think they will block sideloading on all non-developer devices only, or allow you to add your own developer cert for development or similar (all of which would probably be fine for development & reverse engineering, while still being a massive pain for actual distribution of apps).
The larger issue is device attestation, which _could_ make all rooted/non-certified devices progressively less practical, as more apps attempt to aggressively detect unmodified devices. Right now that's largely limited to big financial apps, and has some downsides (you get a bunch of complaints from all 3 GrapheneOS users, and it requires a bunch of corresponding server work to be reliable) but it could become more widespread.
They're already barely possible as it is.
For frida to work you need to root the device, which is impossible on ever more models, and there's an endless supply of very good rooting detection SDKs on the market, not to mention Play Integrity.
> For frida to work you need to root the device, which is impossible on ever more models
There's plenty of physical devices where it is possible, and Google publish official emulator images with root access for every Android version released to date. This part is still OK.
> there's an endless supply of very good rooting detection SDKs on the market, not to mention Play Integrity
Most of the root detection is beatable with Frida and the like.
Play Integrity & attestation (roughly: 'trusted computing' on your phone, which signs messages as 'from an unmodified certified device' in a way that the server can verify, to only allow connections from known-good devices) is a much larger problem. Best hope here is that a) it creates much work for most apps to bother and b) it eventually gets restricted as anti-competitive. It's literally them charging & setting rules on their competitors for how they get a certificate which allows phones they make to function with all the Android apps on the market, and pushing app makers to restrict their apps to not work on phones from competitors who don't play ball, so I don't think anti-competition pushback here is that implausible medium term.
> There's plenty of physical devices where it is possible
Yup, but take Samsung: kiss KNOX goodbye. It's fused off once you flash a non-Samsung image.
> and Google publish official emulator images with root access for every Android version released to date. This part is still OK.
Many apps will straight refuse to run in emulators unless you're lucky to snag a debug build that accidentally got pushed to production.
> Most of the root detection is beatable with Frida etc, mostly.
It's a cat-and-mouse game and frankly I'm sick of it, especially the fact that it's either "accept that you'll need to wait X weeks until <Magisk plugin> gets an update" or "install some unofficial closed-source fork that may or may not be laced with malware".
> Best hope here is that a) it creates much work for most apps to bother and b) it eventually gets restricted as anti-competitive.
Rooting detection used to be too much work, then SDKs cropped up that made it very easy, and that will be the case for remote-verifiable hardware attestation.
And restrictions from anti-trust? No way that will happen in the next three years in the US, and here in the EU it takes about 5-10 years until our parliament finally gets to work after a problem gets too much attention for their lazy asses to ignore. And even then, the lobby from banks, game studios ("them cheaters!!!" in f2p scam games) and other influential lobbyists will likely prevent any serious action.
As far as I'm aware it is possible to use Frida without rooting, by using Objection https://github.com/sensepost/objection
> Patch iOS and Android applications, embedding a Frida gadget that can be used with objection or just Frida itself.
This is the key thing, and the part that will change next year: previously, you could unpack, patch, and repack an APK with the Frida gadget and install it onto an Android device in Developer mode, while the device remained in a "Production" state (with only Developer mode enabled, and no root). Now, the device would either need to be removed from the Android Certified state (unlocked/rooted) or you would need to sign the application with your own Developer Console account and install it on your own device, like the way iOS has worked for years.
Wow, that's horrifying. I guess the APK modding era is over for most users.
Not yet. If I recall correctly, only a few countries are affected in the beginning.
Also somewhat related:
(TP-Link Firmware Decryption C210 V2 cloud camera bootloaders) https://watchfulip.github.io/28-12-24/tp-link_c210_v2.html?u...
Unrelated, but I wonder if the OP's dog moves from the bed to the floor because the radiator turns on? might need more sensor data :D
Or just because she noticed she was cold
So we're at the point that finding hardcoded admin passwords is no big deal.
It's a hardcoded default password, not a permanent backdoor. If I'm understanding the post correctly, the user changes it as part of the onboarding flow.
This is the way most apps work if they have a default password the user is supposed to change.
The device should ideally have some kind of secret material derived per device, like a passphrase generated from an MCU serial number or provisioned into EEPROM and printed on a label on the device.
Some form of "enter the code on the device" or "scan the QR code on the device" could then mutually authenticate the app using proof-of-presence rather than hardcoded passwords. This can still be done completely offline with no "cloud" or other access, or "lock in"; the app just uses the device secret to authenticate with the device locally. Then the user can set a raw RTSP password if desired.
This way unprovisioned devices are not nearly as vulnerable to network-level attacks. I agree that this is Not Awful but it's also Not Good. Right now, if you buy this camera and plug it into a network and _forget_ to set it up, it's a sitting duck for the time window between network connection and setup.
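A minimal sketch of the per-device idea (purely illustrative, assuming a factory-held batch key and the MCU serial number; this is not any vendor's actual scheme):

import base64
import hashlib
import hmac

def device_passphrase(batch_key: bytes, serial: str, length: int = 12) -> str:
    """Derive a label-friendly per-device passphrase from the MCU serial number."""
    digest = hmac.new(batch_key, serial.encode(), hashlib.sha256).digest()
    # Base32 avoids ambiguous characters, so it prints cleanly on a sticker or QR code
    return base64.b32encode(digest).decode().rstrip("=")[:length]

# Example: the provisioning line derives the passphrase and prints it on the device label
print(device_passphrase(b"factory-batch-key-kept-offline", "SN-00012345"))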
I agree that would be nice, but it also doesn't sound all that practical for a small vendor.
I used to sell a home networking device,[0] and I wouldn't do what you're describing. If there were an issue where the labels calculate the wrong password or the manufacturer screws up which device gets which label, you don't find out until months later when they're in customer hands and they start complaining, and now you have to unwind your manufacturing and fulfillment pipeline to get back all the devices you've shipped.
All that to protect against what attack? One where there's malicious software on the user's network that changes the device password before the user can? In that case, the user would just not use the camera because they can't access the feed.
Ha! I actually use TinyPilot all the time, nice!
> I agree that would be nice, but it also doesn't sound all that practical for a small vendor.
Personalizing / customizing per device always introduces a huge amount of complexity (and thus cost). However, this is TP-Link we're talking about, who definitely have the ability to personalize credentials at scale on other product lines.
And again, to be clear, I'm not trying to argue that the current way is some horrible disaster from TP-Link, just advocating for a better solution where possible. I think the current system reads as fine, honestly, it sounds like typical cobbled together hardware vendor junk that probably has some huge amount of "real" vulnerability in it too, but this particular bit of the architecture doesn't offend me badly.
> now you have to unwind your manufacturing and fulfillment pipeline to get back all the devices you've shipped.
This can be avoided with some other type of proof-of-presence side channel which doesn't rely on manufacturing personalization - for example, a physical side-channel like "hold button to enable some PKI-based backup pairing or firmware update mode." For a camera, there should probably be an option to make this go away once provisioning is successful, since you don't want an attacker performing an evil maid attack on the device, but for pre-provisioning, it's a good option.
Slight tangent: I just read your Tiny Pilot blog post, which was interesting and worthwhile. Thanks for sharing that!
TP-Link is far from being a small vendor, though.
Ah, I see. I thought OP used TP-Link for their router. I missed that Tapo (the camera manufacturer) is a subsidiary of TP-Link.
I think he has it backwards: Easy for a small vendor, very hard for a large one.
> The device should ideally have some kind of secret material derived per device, like a passphrase generated from an MCU serial number or provisioned into EEPROM and printed on a label on the device.
It is better than a simple secret like 12345678, but it can go wrong too, like in the case of UPC UBEE routers, where the list of potential passwords can be narrowed down to ~60 possibilities using a googled generator [1] whilst knowing only the SSID.
It did require firmware reverse engineering to figure out [2][3], but this applies to most devices I've encountered. The user should ideally always change the default password regardless.
[1] https://upcwifikeys.com/UPC1236567
[2] https://deadcode.me/blog/2016/07/01/UPC-UBEE-EVW3226-WPA2-Re...
[3] https://web.archive.org/web/20161127232750/http://haxx.in/up...
AT&T routers, for example, ship like this. There's a wifi network and a wifi password printed onto the device.
But that also means that anyone with physical access can often easily get into the device. The complicated password provides an additional layer of illusion of security, because people then figure "it's not a default admin password, it should be good". The fundamental problem seems to be "many people are bad at passwords and onboarding flows", and trying variations on shipping passwords seems to result in mostly the same problems.
If you have physical access you can just factory reset the device and onboard it with the normal flow though
That's fair, though at least resetting would indicate that an attack happened. Default passwords and printed passwords can result in undetected attacks, which are arguably worse.
It doesn't change anything in this case though, you can't use the default password against a tp-link device after it's been onboarded.
Comment was deleted :(
I feel seen. Why is the security illusory? I still don't understand the problem with this. Is the concern that someone will break into my house to covertly get access to my wifi password?
Same with Orange branded ones. There is even a QR code that you can scan on your phone - no more typing 16-24 hex characters.
It's hard to decide whether it's good or bad. It is definitely easier. Which I guess matters most in consumer grade routers.
These may be illegal in some jurisdictions due to accessibility laws, and are a bad idea in general, for these reasons as well as unattended configuration scenarios.
If you buy the camera, plug it in, and forget to set it up, you just flat out can't use it right? I agree that proof of presence is way better but how many people are seriously going to be affected?
No, if you buy the camera, plug it in, and forget to set it up, then someone can use the default password and key material stored in the app to pretend to be the app and provision it on your behalf.
That's the only real vulnerability here, and it's no big deal, but it is A Thing and there is definitely a better way to do this that doesn't lose the freedom of full-offline.
Ok yeah I think we're in agreement then.
on the other hand "onboarding" seems to be a less offensive normalizing word which really means "ask permission to use device"...
I mean, given that it's updated after setup with the normal flow, I'm okay with it.
The thing I've most been convinced of in the past 5 years of building as much 'iot/smart home' stuff out as possible in my house is that nearly every vendor is selling crap that has marginal usefulness outside of a 'party trick' in isolation. Building out a whole smart home setup is frustrating unless it's all from one vendor, but there isn't one vendor which does all of it well for every need.
On my phone I have apps for: Ecobee, Lutron, Hue, 4 separate camera vendors[1], Meross, and Smart Life. Probably a couple more that I'm forgetting.
Only Lutron and Hue are reasonable in that they allow pretty comprehensive control to be done by a hub or HomeKit so I never have to use those apps.
It's been years since Matter and Thread were supposedly settled upon as the new standards for control and networking, but instead of being full of compatible devices, the market is absolutely packed with cheap Wi-Fi devices, each of which is cloud-dependent and demands to be administered, and even used day-to-day, only through a pile-of-garbage mobile app whose main purpose is to upsell you on some cloud services.
[1] I admit the fact I have 4 is my fault for opportunistically buying cameras that were cheap rather than at least sticking with one vendor. But many people have a good excuse, perhaps one vendor makes the best doorbell camera, while another might make a better PTZ indoor camera.
Home Assistant is making more and more sense to make your own fully local and private home automation system.
Absolutely. I've been using Home Assistant for around 6 years now and it's absolutely amazing for tying hardware from varying ecosystems together.
Even if your hardware doesn't support local APIs, there's a good chance someone has made an HA integration to talk to their cloud API.
> Even if your hardware doesn't support local APIs, there's a good chance someone has made an HA integration to talk to their cloud API.
And if they haven’t, you can pretty trivially write your own and distribute it through HACS (I’ve got three integrations in HACS and one in mainline now)
Thank you for your contributions btw! There is so much amazing work that's gone into HA and I appreciate it every day.
Thanks, but it really is a community effort. Even the one I wrote the most of was still me and another guy (the Lucid Motors integration).
I'd love to see what's needed to get some of these integrations in core!
I love it! But my setup has a lot of sharp edges. It's a combo of things: the "standards compatible" way to connect to HA lacks things like camera control, dastardly vendors like Chamberlain basically killed HA support out of spite, and finally there's having to use Google or Amazon for voice assistants.
My #1 wish would be for someone to build a HA-native voice assistant speaker. I'd pay $100 each for a smart speaker of the physical quality of the $30 Google Home Mini but which integrated directly with HA and used a modern LLM to decide what the user's intent was, instead of the Google Assistant or Siri nonsense which is like playing a text adventure whose preferred syntax changes hourly. I'd pay that plus a monthly fee to have that exist and just work.
you may be interested in https://www.home-assistant.io/voice-pe/
or
https://www.home-assistant.io/voice_control/thirteen-usd-voi...
Chamberlain can't change MyQ to get around the fact that HA can operate the switch in your garage with a simple controller attached to it. It is very annoying that they are anti-hacker though.
or roll your own.
This M5 Stack ASR unit costs $7.50, and has a vocab of about 40-70 words. That's enough to turn on/off lights and timers. You might need to come up with your own command language, but all of the ASR is extremely local
https://shop.m5stack.com/products/asr-unit-with-offline-voic...
That is probably a great and fun way to solve the problem for those with even a little free time.
Sadly, for family reasons I can't take on projects that require more than a few minutes, so I'm holding out hope for someone to bridge the gap between "project boards that require writing a bunch of code to interface with Home Assistant and define all of its possible abilities and commands" and the "dumb as a post Google thing that you just plug in": a hardware device that is easy to connect to HA and starts out doing what the Google thing can do, but smart instead of stupid like the legacy voice assistants.
Hard coded admin passwords that you have to change in order to start using the device aren't really an issue.
Well, they aren’t here though.. I feel like you just wanted to be annoyed at this tech
Smartphones can be seen by some as the initial hostile devices.
Network devices can at least be monitored and discovered like this.
Does anyone have a good reference for which tapo cameras support rtsp? I have a c210 that works well (sort of, you can't use it with their cloud capture) and I have it working with frigate.
But today I got a c402 (outdoor) thinking I could use it to capture my son's soccer practice. But that doesn't have the camera account option under advanced.
I love the price point of these devices but the functionality is all over the place.
If anyone knows a good outdoor camera, preferably with solar panel, that is cheap and has an rtsp stream, please let me know.
You should still be able to use the tapo:// go2rtc stream source even if the camera does not support the rtsp:// via the camera account option. Have a look at my frigate configuration for reference - https://github.com/kennedn/frigate/blob/7c56604e819d2cb1da28...
Fantastic.
Why do you use rtsp for some streams and then tapo protocol for others? Are these all tapo cameras?
I tried, and failed with, enough suggestions I found on the internet and via AI before eventually cobbling together a Frigate configuration that worked with the Tapo cameras.
RTC setup section:

go2rtc:
  streams:
    <Camera RTC name>:
      - rtsp://tapoadmin:<local camera account password>@<camera IP address>:554/stream1
      - ffmpeg:<Camera RTC name>#audio=opus
      - tapo://<Tapo cloud password>@<camera IP address>
    <Camera RTC name>_sub:
      - rtsp://tapoadmin:<local camera account password>@<camera IP address>:554/stream2
      - ffmpeg:<Camera RTC name>_sub#audio=opus
      - tapo://<Tapo cloud password>@<camera IP address>
Main section:

<Camera name>:
  ffmpeg:
    output_args:
      record: preset-record-generic-audio-aac
    inputs:
      - path: rtsp://127.0.0.1:8554/<Camera RTC name>_sub
        input_args: preset-rtsp-restream
        roles:
          - detect
      - path: rtsp://127.0.0.1:8554/<Camera RTC name>
        input_args: preset-rtsp-restream
        roles:
          - record
          - audio
  detect:
    enabled: true
    width: 640
    height: 360
    fps: 7
  live:
    streams:
      <Camera RTC name>: <Camera RTC name>
  record:
    enabled: true
    retain:
      days: 0
      mode: all
Where:

* <Camera RTC name> is just any old short name you want to assign to the camera.
* <Camera name> is the main name for the camera that will be shown in the Frigate UI.
* <local camera account password> is something set individually on each camera (Settings > Advanced > Camera Account, set it to On and set up a username/password > Account Information).
* <Tapo cloud password> is the password set up for the Tapo app (I'm not sure how necessary this is, since there's nowhere that the username is specified... this is the only bit I'm fuzzy on).
This is the basics of what works for me for the Tapo cameras. There are a boatload of other settings specific to Frigate (but not specific to Tapo cameras).
This is nowhere near as cool a hack as the article, however.
Hello, as I alluded to in my blog post you should use tapo:// as the main stream source to get two-way audio. You can then optionally use rtsp://.../stream2 as a lower resolution sub stream source for detections. Defining both tapo:// and rtsp:// in a single stream is redundant as go2rtc only picks one or the other. My own configuration is here for reference https://github.com/kennedn/frigate/blob/7c56604e819d2cb1da28...
Aaaah, I should have specified above that I don't use (or care about) the two-way audio in the frigate config.
Thanks for the info, I just hacked away at various config suggestions until one combination worked.
Cheers!
Do you use an outdoor camera with this? I'm trying to find one and my c402 does not appear to have that support.
Sorry no.
I use Reolinks for outdoor. The Tapos are all indoor C210/C211 (cheap, but they do the job just fine).
Looks like the C402 has two different hardware versions[0] so maybe the old one doesn't work but the new one does? A firmware upgrade might also be worth trying. This reddit page suggests trying ONVIF as the go2RTC connection[1].
Good luck!
[0]: https://www.tp-link.com/us/support/download/tapo-c402/
[1]: https://www.reddit.com/r/frigate_nvr/comments/1liosei/tapo_c...
Thank you for including the final part about what your dog has been up to :)
> "She sleeps"
The fact that OP did all this work to find out the dog sleeps is pure hacker culture. Love to see it :)
Got one for my house but what really annoyed me was that I wasn't able to set a fixed IP for it
On your dhcp server (probably your router/gateway), statically assign (reserve) the camera's MAC address to the IP that you want it to have. Sometimes called MAC binding.
Some of them support it but not all.
Really? Mine has a switch for static. You aren't seeing that in the app? Configuration -> Advanced settings -> Network
Confirmed here. I've got four of them, and they all have this setting. I know because I changed them from DHCP reservation to static IP recently.
Are any of these outdoor cameras? Do you use rtsp if so? I'm trying to find an outdoor camera that supports frigate and surprised my new c402 does not.
Every single post on this site is worth reading. Loads of fun with hacking electronics. :)
go2rtc is great. the compatibility range it offers is just huge and gets rid of 90% of the difficulty in making a decent NVR app.
They cracked the APK to get the default password, but googling for it I see it is in a CVE from 2022
Mon key toge ther strong
Anyone have a similar fix for Yi/Kami cameras?
Yes Pls I'm begging u
Interesting... from the terminal I see he named his laptop the same thing I've named one of my cheaper laptops too... craptop. LOL.
tapo annoyingly is also one of the only cameras that doesn't have a still snapshot url after all these years and endless requests from many
someone needs to make replacement firmware
ffmpeg can fake it but takes a few seconds to grab from the video stream and of course you can't run ffmpeg from your browser (or wait, can you now?)
ffmpeg -rtsp_transport tcp -i "rtsp://cameraname:camerapass@192.168.1.23:554/stream1" -an -y -vframes 1 -f image2 -vcodec mjpeg "snap.jpg"
try reducing your buffer size and -probesize to help with that delay (and/or optionally fixing the video format parameters so probing isn't needed at all)
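If shelling out to ffmpeg is awkward, a single frame can also be grabbed from the RTSP stream in a few lines of Python with OpenCV instead; a sketch with placeholder credentials and IP (it still pays roughly the same stream-probing delay as the ffmpeg command above):

import cv2

# Placeholder credentials/IP; stream1 is the camera's main RTSP stream
URL = "rtsp://cameraname:camerapass@192.168.1.23:554/stream1"

cap = cv2.VideoCapture(URL)      # opens the stream via OpenCV's FFmpeg backend
ok, frame = cap.read()           # grab one decoded frame
if ok:
    cv2.imwrite("snap.jpg", frame)
cap.release()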
The cover image is a 2.8M png, if the author is reading. I gave up my github account so cannot comment.
Now a ~300kB webp, thanks.
Oof, I need to go through my personal site and resize some images. I never considered that.
Also, fantastic write-up