

Yeah, American research is absolutely screwed.
It’s your country, your attempt at a democratic system, and your mess to deal with internally. That the system you have over there is fundamentally broken has been known (and in some cases mathematically proven) for a long time now. Personally, I’m getting tired of the "Not all Americans" stuff. It kinda worked the first time around, but you had four years to deal with him peacefully through your legal system, demonstrations, rallies and public pressure. He got reelected and there was a peaceful transition of power, possibly the last.
Enough people voted for him. The margins weren’t even that thin. You are now represented by President Orange in your international affairs and force projection. We can’t really help you that much either, as you have positioned yourselves as a dominant global power, with economic, soft and hard power.
My entirely unqualified guess: we’ll start accepting political refugees from the US, fearing for their lives, fairly soon. I’m guessing LGBTQ+ people, some ethnicities, some scientists and some public servants are in the danger zone. Stop being "horrified" and start being "absolutely fucking terrified".
In short, go deal with your carrot man, we can’t do it for you. We can hopefully provide some refuge. But we can only deal with you as a nation, represented by Trump. Sorry.
Unless you have actual tooling (e.g. Red Hat errata plus some service on top of that), just don’t even try.
Stop downloading random shit from Docker Hub and GitHub. Pick a distro that has whatever you need packaged, install from the repositories and turn on automatic updates. If you need stuff outside the repos, use first-party packages and turn on auto updates. If there aren’t any decent packages, just don’t do it. There is a reason people pay Red Hat a shitton of money, and that’s because they deal with much of this bullshit for you.
At home, I simply won’t install anything unless I can enable automatic updates. NixOS solves much of it. Twice a year I need to bump the distro version, bump the Nextcloud release, and deal with deprecations, and that’s it.
I also highly recommend turning on automatic periodic reboots, so you actually get new kernels running…
Just going off the marketing here:
Git server with CI/CD, kanban, and packages.
From the looks of it, they also seem to bundle the vscode server and a bunch of other stuff. I’m actually kinda surprised they do it with only 1G of RAM.
Not to be that guy, but 12% of 8 GB isn’t even close to "heavy as fuck" for a CI/CD and collaboration suite that seems aimed at enterprise users.
You can also tweak how much memory the JVM grabs with the heap flags: ’-Xms100m’ sets the initial heap, and ’-Xmx’ caps the maximum. Any defaults are most likely aimed at much larger deployments than yours.
But yes, Java is a disease.
Be real fuckin careful now. You’ll tear my Emacs from my cold dead hands.
(But yeah, I use evil-mode. Also I edit files on remote servers with vim. I’m a traitor…)
Because you’d get like half the memory bandwidth to a product where performance is most likely bandwidth limited. Signal integrity is a bitch.
The thing is, consumers didn’t push Nvidia’s stock sky high, AI did. Microsoft isn’t pushing anything sane to consumers; Microsoft is pushing AI. AMD, Intel, Nvidia and Qualcomm are all pushing AI to consumers. Additionally, on the graphics side of things, AMD is pushing APUs to consumers. They are all pushing things that require higher memory bandwidth.
Consumers will get "trickle-down silicon", like it or not. Out-of-package memory will die. Maybe not with your next gaming rig, but maybe the one after that.
Wrote a longer reply to someone else, but briefly, yes, you are correct. Kinda.
Caches won’t help with bandwidth-bound compute (read: "AI") if the streamed dataset is significantly larger than the cache. A cache will only speed up repeated access to a limited set of data.
Yeah, the cache hierarchy is behaving kinda wonky lately. Many AI workloads (and that’s what’s driving development lately) are constrained by bandwidth, and cache will only help you with a part of that. Cache will help with repeated access, not as much with streaming access to datasets much larger than the cache (i.e. many current AI models).
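Back-of-the-envelope, with made-up but plausible numbers: streaming a 70 GB model once per generated token through ~100 MB of cache gives you essentially zero reuse, so throughput is capped at roughly memory bandwidth divided by model size. At 800 GB/s that’s about 11 tokens per second, no matter how fast the cores are or how big the cache gets.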
Intel already tried selling CPUs with both on-package HBM and slotted DDR-RAM. No one wanted it, as the performance gains of the expensive HBM evaporated completely as soon as you touched memory out-of-package. (Assuming workloads bound by memory bandwidth, which currently dominate the compute market)
To get good performance out of that, you may need to explicitly code the memory transfers to enable prefetch (preferably asynchronous) from the slower memory into the faster, à la classic GPU programming. YMMV.
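For the pattern itself, here’s a toy double-buffering sketch in Go (all names and sizes made up; on a real GPU or tiered-memory setup the copy would be an asynchronous transfer like cudaMemcpyAsync on a stream, not a plain copy): compute on the chunk that’s already in the fast buffer while the next one is being pulled in, then swap.

package main

import "fmt"

// Toy double-buffering sketch: compute on one "fast" buffer while the next
// chunk is copied in from "slow" memory in the background.

const chunkSize = 1 << 20 // elements per chunk; pick something that fits the fast tier

// prefetch starts an asynchronous copy of src into dst and signals done when
// it finishes. On real hardware this would be an async DMA/prefetch.
func prefetch(src, dst []float32, done chan<- struct{}) {
	go func() {
		copy(dst, src)
		done <- struct{}{}
	}()
}

func main() {
	// "Slow" memory: a dataset much larger than any cache or fast buffer.
	data := make([]float32, 8*chunkSize)
	for i := range data {
		data[i] = 1
	}

	// Two "fast" buffers: one being computed on, one being filled.
	fast := [2][]float32{make([]float32, chunkSize), make([]float32, chunkSize)}
	ready := make(chan struct{}, 1)

	var sum float32
	prefetch(data[:chunkSize], fast[0], ready) // prime the pipeline
	for i := 0; i*chunkSize < len(data); i++ {
		<-ready // wait until chunk i has landed in the fast buffer
		// Kick off the transfer of chunk i+1 while we compute on chunk i.
		if next := (i + 1) * chunkSize; next < len(data) {
			prefetch(data[next:next+chunkSize], fast[(i+1)%2], ready)
		}
		for _, v := range fast[i%2] { // the "compute kernel": just a sum here
			sum += v
		}
	}
	fmt.Println(sum) // every element counted exactly once
}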
You are correct, I’m referring to on package. Need more coffee.
All your RAM needs to be the same speed unless you want to open up a rabbit hole. All attempts at that thus far have kinda flopped. You can make very good use of such systems, but I’ve only seen it succeed with software specifically tailored for that use case (say databases or simulations).
The way I see it, RAM in the future will be on package and non-expandable. CXL might get some traction, but naah.
I don’t think you are wrong, but I don’t think you go far enough. In a few generations, the only option for top performance will be a SoC. You’ll get to pick which SoC you want and what box you want to put it in.
Apparently AMD couldn’t make the signal integrity work out with socketed RAM. (source: LTT video with Framework CEO)
IMHO: Up until now, using soldered RAM was lazy and cheap bullshit. But I do think we are at the limit of what’s reasonable to do over socketed RAM. In high-performance datacenter applications, socketed RAM is on its way out (see: MI300A, Grace-{Hopper,Blackwell}, Xeon Max), with on-package memory gaining ground. I think we’ll see the same trend on consumer stuff as well. Requirements on memory bandwidth and latency are going up with recent trends like powerful integrated graphics and AI slop, and socketed RAM simply won’t keep up.
It’s sad, but in a few generations I think only the lower-end consumer CPUs will still be usable with socketed RAM. I’m betting the high-performance consumer CPUs will require not just soldered but on-package RAM.
Finally, some Grace Hopper to make everyone happy: https://youtube.com/watch?v=gYqF6-h9Cvg
You mean "hardcore WAF challenge"?
If you’ve taken care to properly isolate that service, sure. You know, on a dedicated VM in a DMZ, without access to the rest of your network. Personally, I’d avoid using containers as the only barrier, but your risk acceptance is yours to manage.
Well, I’d just go for a reverse proxy, I guess. If you are lazy, just expose it as an IP without any DNS. For working DNS, you can just add a public A record pointing at the Pi’s local IP. For certs, you can’t rely on the default HTTP-01 challenge that Let’s Encrypt uses; you’ll need to do it via DNS-01, a wildcard cert, or something along those lines.
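If you’re curious what the reverse proxy itself amounts to, here’s a minimal sketch in Go (assumptions: 127.0.0.1:8080 stands in for your service and 100.64.0.1 for the Pi’s VPN address, both made up). Normally you’d just use Caddy or nginx, but it’s the same idea underneath.

package main

// Minimal reverse proxy sketch: listen on the VPN interface and forward
// everything to a local backend service.

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The backend service you're exposing over the VPN (made-up address).
	backend, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// Listen on the VPN interface only. Plain HTTP here; swap in
	// http.ListenAndServeTLS once you have a cert from a DNS-01 challenge,
	// since HTTP-01 can't reach a VPN-only host.
	log.Fatal(http.ListenAndServe("100.64.0.1:80", proxy))
}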
But the thing is, as your traffic is on a VPN, you can fuck up DNS and TLS and Auth all you want without getting pwnd.
Then you expose your service on your local network as well. You can even do fancy stuff to get DNS and certs working if you want to bother. If the SO lives elsewhere, you get to deploy a Raspberry Pi to project services into their local network.
I’d recommend setting up a VPN, like Tailscale. The internet is an evil place where everyone hates you and a single tiny mistake will mess you up. Remove risk and enjoy the hobby more.
Some people will argue that serving stuff on open ports to the public internet is fine. They are not wrong, but don’t do it until you know, understand and accept the risks. ('normal_distribution_meme.pbm')
Remember, risk is ’probability’ times ’shitshow’, and other people can, in general, only help you determine the probability.
Come on, it will be fun!