

But not FOSS like LS unfortunately
Try WG Tunnel instead. It will reconnect on loss, but you lose the Tailscale features (no big deal with dynamic DNS)
That's why I bind toggling them to a hotkey. One or the other at a time, never both.
So maybe use Debian and compile the app yourself instead? The dev made something free with their time; use your time to make it work for you.
It's Vim features and key bindings that you can toggle on and off with a hotkey in VS Code.
Very handy when you have a task that Vim is better at (for your workflow), like recording a macro and replaying it 100 times.
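For reference, the toggle hotkey looks something like this in `keybindings.json`. The key combo is my own arbitrary choice, and `toggleVim` is the command ID I believe the VSCodeVim extension exposes; check the extension's command palette entry ("Vim: Toggle Vim Mode") if it doesn't fire:

```json
// keybindings.json fragment (VS Code allows comments here).
// "ctrl+alt+v" is an arbitrary example binding.
{
  "key": "ctrl+alt+v",
  "command": "toggleVim"
}
```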
Why not both?
Yes, but…
The build environment was not clean to start, which is why a contributor is working to correct that.
You could also have the build scripts that run on GitHub pull the binary releases directly from their original release locations at build time, rather than from a file that an individual can modify in the source tree. This isn't as good as building from source, but it's better than nothing.
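A minimal sketch of that pull-and-verify idea. The file names and URL are illustrative, not Ventoy's actual build; the point is that the source tree holds only a pinned checksum, and the build fails if the fetched binary doesn't match it. (The download itself is shown as a comment; here a local file stands in for the fetched artifact.)

```shell
# In CI you would first fetch the binary from its original release page, e.g.:
#   curl -LO https://example.org/releases/cryptsetup-2.7.0.bin
# Simulate the downloaded artifact with a local file for this sketch.
printf 'binary payload' > cryptsetup.bin

# The pinned checksum file is what lives in the source tree, not the binary.
# Here we generate it; a real repo would commit a known-good value.
expected=$(sha256sum cryptsetup.bin | awk '{print $1}')
echo "${expected}  cryptsetup.bin" > cryptsetup.bin.sha256

# Abort the build if the artifact does not match the pinned checksum.
if sha256sum -c cryptsetup.bin.sha256; then
    echo "checksum OK"
else
    echo "checksum mismatch, aborting build" >&2
    exit 1
fi
```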
You:
solve a relatively minor security issue.
Wikipedia:
In February 2024, a malicious backdoor was introduced to the Linux build of the xz utility within the liblzma library in versions 5.6.0 and 5.6.1 by an account using the name “Jia Tan”.[b][4] The backdoor gives an attacker who possesses a specific Ed448 private key remote code execution through OpenSSH on the affected Linux system. The issue has been given the Common Vulnerabilities and Exposures number CVE-2024-3094 and has been assigned a CVSS score of 10.0, the highest possible score.[5]
Binary supply-chain attacks are not “minor security issues”. There is a reason many companies will not allow admins to use Ventoy.
I like Ventoy, it's a fantastic project. I like that the author is transparent about where they won't be spending their time. You can like a project and recognize its flaws at the same time.
A contributor building a PR to solve the build concerns is not a bad thing, it's to be celebrated. Even a short-term solution of having the build script pull the binaries from a release and checksum them would alleviate a lot of that concern. And the Windows vs *nix item would be alleviated by the GitHub build environment. Binary releases aren't the problem; binaries in the source tree are. This is about audits and traceability more than the build itself.
Not having a security-first posture on these kinds of attacks is how the xz event happened, and I would hate to see that happen to Ventoy. I look forward to contributors helping the author out.
The problem with Ventoy isn’t the ISOs.
The problem is they use binary versions of core tools like cryptsetup
in their source tree, vs compiling them at build time.
This leaves the door open to supply-chain attacks, e.g. a PR with a bad cryptsetup
binary, or an attack on cryptsetup upstream that makes its way downstream with no way to audit. This is how huge software distributions make their way to Wikipedia in a bad way: https://en.m.wikipedia.org/wiki/XZ_Utils_backdoor
The solution is to build those binaries at build time, which a fork is working on.
Yup, all his top video comments said this shortly after the video's release. Personally, I always check for this stuff pre-purchase. If I ran into it now, I would return it, unless I had specifically bought it to self-host and block from the internet, which you can do with Bosch. But I wouldn't, because for a dishwasher that's dumb.
Next post:
“Why do people respond to a message that doesn’t need a response when they could just send an emoji?”
I used this: https://github.com/arabcoders/watchstate
It works really, really well. You just connect it to the servers, and it syncs them by user. You can even let it run regularly during your transition phase, as a Docker container right next to Plex and Jellyfin.
Mine's even a bit more advanced: I used samba-domain
to set up an LDAP Active Directory for my fam, then used the above to sync the Plex users to those users in Jellyfin, and it still worked great.
Edit: The WebUI is also pretty intuitive, but I did have to run it twice for my user the first time for it to get 100% in sync. Everything was fine after that.
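If it helps anyone, running it next to Plex and Jellyfin looks roughly like this. This is a hedged sketch: the image path and config volume follow the usual pattern for the project, but check the arabcoders/watchstate README for the actual image tag, port, and options before using it.

```yaml
# docker-compose fragment (illustrative; verify against the project README)
services:
  watchstate:
    image: ghcr.io/arabcoders/watchstate:latest
    container_name: watchstate
    restart: unless-stopped
    ports:
      - "8080:8080"   # WebUI
    volumes:
      - ./watchstate:/config
```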
Realistically they would get a bailout “for the consumer”.
More likely than central hosting: some of the same people currently enabling faster modes via software hacks would get them running offline.
My current server runs 40ish docker containers and has 24TB of disk space in a ZFS array.
It is an 11-year-old Intel chip and mobo that was my desktop once upon a time. I have been thinking about updating it simply because of power draw, but it works just fine.
I did add in PCIe riser boards to get PCIe 3.0 NVMe drives in there.
It’s pretty common practice to upgrade your computer and turn your old one into a server. Then continue that cycle every upgrade.
Android, too.
Yup. I use one of those micro PCs with 4 network ports as a router, and that’s it.
To add to this, don’t buy a server at all, upgrade your desktop! Then use the desktop as a server. Then recycle every desktop for the rest of your life into the new server. Been working for me for decades.
So, anyone forking and setting it up with ntfy.sh?