TL;DR: I have no idea!
(Extremely) amateur history time! This Train facts post has been going around, claiming that in 1825, the fireman aboard the Best Friend caused an explosion by
sitting on a relief valve, because the sound annoyed him.
But my guess is still about 50/50 on whether a butt was involved.
One site says the fireman on the Best Friend was actually a slave, which is an interesting bit of history in and of itself, and the Wikipedia article says it was common practice to tie down valves for various reasons, like adding some extra pressure to get up a hill. But nowhere seems to mention any thicc rears being involved in this incident.
Many sources even specifically state that the valve was “tied down”.
In one museum replica there's something that almost kind of looks like a weight on a lever (which seems to be how safety valves were done at the time) within reach of the fireman, but it is overhead. A more modern replica doesn't have it, and some other illustrations don't show it either.
Moving the thing out of reach was apparently a very early change right after the explosion, so I suppose the replicas might not be perfectly accurate, or might even be intentionally inaccurate for safety. There could well have been a valve in butt range.
There doesn't seem to be anywhere in the fireman's general area to put a valve that could be sat on though, except for obviously bad places. One presumably would very much want the thing overhead and close to the boiler. But, bad designs aren't exactly uncommon even now.
But on the other hand, we have this report (very dated language ahead!!!), saying that it was “held down”, rather than tied down.
“I have just returned from examining the situation of the Locomotive Engine “Best Friend,” since the accident of this morning, and have come to the conclusion that the bursting of the Boiler originated from an over-pressure of Steam, and believe it to have occurred from the Safety Valve being held down by one of the Negroes attached to the arranging of the Car, (while the Engineer was attending to the arranging of the Lumber Car) and thereby not permitting the necessary escape of Steam”
And, a page from mysticstamp.com claiming:
“Another account claims the fireman set a piece of lumber on the valve and then sat on it.”
And, a book called “Rails Across Dixie: A History of Passenger Trains in the American South” claims “one source” says he sat on it, with a footnote reference leading to page 77 of “The Age of Steam: A Classic Album of American Railroading”.
Which basically says the same thing, “The first negro ashcat on record sat on the safety valve of the South's first successful locomotive”, but there doesn't appear to be a reference.
Some of the books one finds while researching the topic also repeat the idea that railroads were sized to match Roman roads, which Snopes argues pretty convincingly is only partly true. That makes the whole thing seem less credible, and is a reminder of how much we still don't know.
It's amazing how even in the internet age, questions like this can't be answered in just a five minute Google search.
It's even more amazing how actual history books and museum websites don't always seem to have a clear “One final answer for all”, even when they sound really confident.
A half hour later, I still have no idea if a posterior was involved. But it is really interesting to see an easy to understand case of how some of the details get hard to find over time, sometimes with random stuff crowding out the real stories.
If he did sit on the valve, posterior thiccness likely had very little to do with it.
I imagine he would have probably been sitting on a small weight that doesn't need much to hold down. Anyone directly occluding an open steam vent with their bottom might find themselves in need of the (probably quite terrible, as this was 1825) services of a doctor, or failing that, the undertaker.
And, as this was a hard job(Like just about any job back then!), I imagine one would likely be reasonably thin.
What do you think happened?
This is a thing that happens in tech, especially open source…
Tech is always slowly improving, but some things still suck,
especially in open source. Barring some kind of hideous uber-GDPR that requires
every coder to be licensed and bonded, those things will get figured out eventually.
The problem is that fixing things is a major project. One person can't fix them all. And what we have right now… is kind of good enough.
In the FOSS scene there's this idea that proprietary is pure evil, which might lead someone to look for, or build, a FOSS alternative to every last bit of code.
Unfortunately, that just results in splitting your effort and adding to the existing heap of soon-to-be-abandoned proofs of concept.
Previously I wrote about my theory of decustomization and gave some thoughts on this, but I think I've noticed a new motive for all of this:
The hurry.
I just installed Obsidian Notes, because it's got offline sync. I'm not a fan of the fact that it's closed. But so what? It doesn't affect me. When a FOSS alternative comes out, the cost to switch will be low. In fact, technically I already can just open the files with a text editor.
Vivaldi and Chrome are closed. Brave is more open. But… I don't like Brave's choices all that much. And it's not like it creates significant lock-in, aside from
a few Chrome-specific web platform features, and even that is more of a lock-out than a lock-in, as Mozilla and Brave explicitly reject them, and they are available in the FOSS Chromium.
Eventually we will get a really good FOSS browser. Eventually we will get an amazing FOSS note taking solution. And I won't be able to build *anything* as effectively if I spend my time fussing with half baked toy apps and missing features.
Programmers often say “YAGNI”, for “You Aren't Gonna Need It”. But I don't believe it. More accurate would be “You don't need that right now”.
Plan for it if possible. But there's no reason to build it, or make a switch, till you are ready to devote the time it deserves, or you have a legitimate need for it.
It's always going to be easier to do later, when the tech improves.
I have been rather unhappy with all the existing NVR software out there. It generally needs some
crazy text-file-based config; it almost always, for reasons unknown, must be run in Docker;
many packages require manual admin (thanks to completely unnecessary use of real databases); and they are typically limited to *just* CCTV, not taking advantage of the fact that the problem domain is similar to VJ video walls, QR readers, and the like.
Many don't even have low latency streaming!
And worst of all, they often use more CPU than one might like, because they encode, decode, and re-encode the video. I wanted something way more hands-off that made use of on-camera encoding.
You can take a look at the project here! https://github.com/EternityForest/KaithemAutomation
Just go to the web UI, make an NVRChannel device, set permissions on it, fill in your RTSP URL, and add Beholder from the modules library. Beholder finds all your NVRChannels and gives you a nice easy UI with a lot of what you see in the usual NVR apps.
My first step in starting a new project is always to see how I can avoid starting a new project.
I looked into Frigate, Shinobi, BlueCherry, AgentDVR (excellent, but NOT FOSS!!!), ZoneMinder, Moonfire, OS-NVR, etc.
None of these were what I wanted. Unfortunately for my sanity, I had a new personal project idea.
I knew I was going to make this a plugin for my existing Kaithem Automation system, for maximum reuse,
but I had zero clue how to do the streaming.
I spent a lot of time looking into WebRTC, but it turned out to just be too much of a nightmare to work with. I briefly tried HLS, but the latency was too high. The project stalled entirely until I found something interesting.
Video over WebSockets! But how was I supposed to do that? What was I supposed to stream? Video files have packets and framing, you can't just start anywhere.
GStreamer is a framework for media processing. It's node-based, so you never touch the content; you just set up processing pipelines. Almost anything you want to do has a node (called an 'element').
It's basically the only media framework of its kind. I use it all the time.
Unfortunately, pipelines can refuse to start and it is not always obvious what element is constipating the whole line, or why it would do so, and some elements need obscure settings and routing. It can be rather complicated, with elements having dynamic inputs and outputs appearing at runtime.
But, it succeeds at turning the deeply mathematical, low-level challenge of dealing with media into just your basic everyday “coding” task. Normally you don't even need to worry about syncing audio and video or any of that. It mostly is pretty good at what it does.
Turns out there's a really simple solution to framing streams of video: MPEG-2 transport streams. Every packet is 188 bytes long. As long as you do things in 188-byte chunks you can start anywhere. It's perfect!
Even better, mpegts.js supports it using Media Source Extensions (MSE)! The web player is done!
Furthermore, HLS uses .ts files linked by a playlist, and a TS file is just a bunch of those 188-byte blocks (it does have to start with a special packet type, but GStreamer handles that).
This means both live and prerecorded playback are essentially solved.
I use the hlssink element in GStreamer for recording, and a filesink with a named pipe for the live feed (I run all this in a background process, so the appsink element that is actually made for this seemed less than ideal), which I read in my server (in 188-byte chunks of course) and send out over my WebSockets.
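Something like the sketch below is the general shape of it. This isn't the project's actual code; the camera URL, pipe path, and segment settings are made-up placeholders.

```python
# A rough sketch: keep the camera's own H.264, mux it to MPEG-TS, and tee it
# to both an HLS recorder and a named pipe for the live WebSocket feed.
# Paths, URLs, and settings here are illustrative only.
import os

import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

PIPE = "/dev/shm/cam1.ts"
if not os.path.exists(PIPE):
    os.mkfifo(PIPE)

pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://camera.example/stream ! rtph264depay ! h264parse "
    "! mpegtsmux ! tee name=t "
    "t. ! queue ! hlssink location=/dev/shm/rec/seg%05d.ts "
    "playlist-location=/dev/shm/rec/index.m3u8 "
    f"t. ! queue ! filesink location={PIPE}"
)
pipeline.set_state(Gst.State.PLAYING)

def ts_chunks(path=PIPE, n=188 * 32):
    """Read the named pipe in 188-byte-aligned chunks for the WebSocket side."""
    with open(path, "rb") as f:
        while True:
            buf = f.read(n)
            if not buf:
                return
            yield buf
```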
Apparently, iPhones don't do MSE, and can't play h264 via WebSocket. I solved this the same way so many
other devs do, by pretending iProducts don't exist. Not ideal, but… It's FOSS, if someone wants an iDevice friendly streaming mode, they can figure it out themselves, or pay someone to do so.
Also, H.264 and MP4 have multiple profiles, and not all of them are supported by MSE. You will get incredibly unhelpful error messages if you do anything wrong here.
One big problem is motion detection. Since I want this to run multiple HD cameras on a Raspberry Pi,
I can't decode every frame.
To solve this I use GStreamer's identity element to drop delta frames. Most cameras allow configuring the keyframe interval, and to use this system, you have to set it to something reasonable.
I don't touch the video stream at all, except for the keyframes that can be decoded independently, which should be set to happen every 0.5-2 seconds.
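A pipeline fragment along these lines is roughly what I mean. It's illustrative rather than the project's real wiring, and drop-buffer-flags needs a reasonably recent GStreamer:

```python
# Hypothetical analysis branch: anything flagged as a delta unit is discarded,
# so only keyframes ever reach the decoder and the Python side. Element
# choices and settings here are my own guesses.
keyframe_branch = (
    "h264parse ! identity drop-buffer-flags=delta-unit "  # keyframes only
    "! avdec_h264 ! videoconvert ! video/x-raw,format=RGB "
    "! appsink name=keyframes max-buffers=2 drop=true"
)
```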
I examine just these for motion. But this creates a real response time issue.
To solve this, I record constantly into a RAM disk, in the form of TS segments. When a recording starts, I already have the few seconds preceding the motion event. Response time is less of an issue when you can capture events that happen *before* the motion.
Still, it does decrease efficiency to be unable to use larger keyframe intervals without missing short events. I'll probably look into other solutions eventually.
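The pre-record buffer itself doesn't need to be anything fancy. A toy version of the idea, with made-up paths and a made-up segment count:

```python
# Keep only the newest few TS segments on the RAM disk, and copy them into a
# new recording when something fires, so the recording starts *before* the
# trigger. Paths and the segment count are placeholders, not real values.
import glob
import os
import shutil

SEG_DIR = "/dev/shm/cam1_segments"
KEEP = 6  # a few seconds of video, depending on segment length

def prune_segments():
    """Throw away everything but the newest KEEP segments."""
    for old in sorted(glob.glob(os.path.join(SEG_DIR, "*.ts")))[:-KEEP]:
        os.remove(old)

def start_recording(dest_dir):
    """Seed a new recording with the buffered pre-trigger segments."""
    os.makedirs(dest_dir, exist_ok=True)
    for seg in sorted(glob.glob(os.path.join(SEG_DIR, "*.ts"))):
        shutil.copy(seg, dest_dir)
```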
While I was at it, I also added QR code reading that can be optionally enabled.
GStreamer's motion detection wasn't working. It seems to be designed for full-rate video, and it performs very poorly on 0.5fps video.
To solve this I used Pillow, and an algorithm with a 1-frame memory.
First I take the absolute difference between frames, and erode it with a MinFilter to get rid of tiny noise pixels.
Next I take the average value of this difference, and go a little higher. This is a threshold value. In theory this should reject minor lighting changes that are uniform across the whole frame, and widely spread out noise. A smarter threshold may be needed to really reject fast changing lighting.
Next I take the RMS value of the whole frame after applying the threshold. This algorithm prioritizes large changes, and closely grouped changed pixels. It reliably detects people even in poor lighting.
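Put together, it's only a few lines of Pillow. Here's a minimal sketch of the algorithm as described; the function name and the 1.2/12.0 constants are placeholders, not the actual code:

```python
# prev and curr are two consecutive decoded keyframes.
from PIL import Image, ImageChops, ImageFilter, ImageStat

def detect_motion(prev: Image.Image, curr: Image.Image, trigger_rms: float = 12.0) -> bool:
    # Absolute per-pixel difference between the two keyframes, in grayscale.
    diff = ImageChops.difference(prev.convert("L"), curr.convert("L"))

    # Erode with a MinFilter so isolated noise pixels vanish.
    diff = diff.filter(ImageFilter.MinFilter(3))

    # Threshold a little above the mean difference, so lighting changes that
    # are uniform across the whole frame are mostly rejected.
    cutoff = ImageStat.Stat(diff).mean[0] * 1.2
    diff = diff.point(lambda px: px if px > cutoff else 0)

    # The RMS of what survives favors large, closely grouped changes.
    return ImageStat.Stat(diff).rms[0] > trigger_rms
```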
I quickly learned that passing cars tripped this. All the time. I really did need object detection.
I knew nothing of machine learning before this, but I knew pretrained models exist, and that people seem to like tensorflow, kinda.
After the usual trying stuff that doesn't work, I settled on EfficientDet-Lite (is MobileDet better?).
I exported it from the AutoML repos, and eventually got it working. Turns out integer tflite models can be a bit slow on x86, and I wanted this to work on both RasPi and desktop, so I went for a floating point model.
These models basically all seem to fall into two categories. People/faces, and COCO-trained. The COCO dataset has 80 classes, including people, cars, handbags, phones, and many other common objects. Good enough! I don't think I have any hardware that could train a new model anyway.
I can do the deep learning inference in about 0.3s, but there is no reason to burn more CPU than needed, plus, false positives are still an issue. So, I only run detection every 10-30 seconds, unless I detect motion.
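Actually running the model isn't much code. A rough sketch, assuming tflite_runtime and a float EfficientDet-Lite export; the model path, the lack of input normalization, and the output handling are assumptions that depend on your particular export:

```python
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

interpreter = Interpreter(model_path="efficientdet_lite.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
in_h, in_w = int(inp["shape"][1]), int(inp["shape"][2])

def detect_objects(frame: Image.Image):
    # Resize the keyframe to the model's expected input size.
    img = np.asarray(frame.convert("RGB").resize((in_w, in_h)), dtype=np.float32)
    # Some float exports expect pixels scaled to [0, 1] or [-1, 1]; adjust here.
    interpreter.set_tensor(inp["index"], img[np.newaxis, ...])
    interpreter.invoke()
    # Usually boxes, classes, scores, and a count, but inspect
    # get_output_details() for your particular export.
    return [interpreter.get_tensor(o["index"]) for o in interpreter.get_output_details()]
```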
Sadly, I can't detect people across the street with the model I'm using. Where I live, everyone seems to be rather spy-friendly (which I am very happy about) on account of all the porch pirates, and local groups are always asking “Anyone have cameras on XXX street?”.
Finally, I wanted this to be usable for art installations. Applying effects to a live video was important. This was easy to solve. I just used Pixi.JS! All the effects are done in-browser on the display side.
It's all still beta, but it *works*! I'm testing it now, fixing bugs as they come up, and it's already
pretty usable.
The disadvantage? I didn't really build all that much of it. This uses about a dozen dependencies. Aside from the UI, and the motion detection algorithm…. there's not much original here. And I'll be honest, I don't have a clue how most of it works. I just pieced it together from existing open code and slapped a UI on.
It's fairly performant, but there's nothing lightweight or elegant about it. I have no idea if it would run on non-Debian systems, and it definitely wouldn't work on Windows.
In the future, almost all of it *should* be usable as a standalone library outside of Kaithem, but getting there required adding experimental features to libraries that really ought to be separate projects, and all of that needs to be documented and finalized.
A lot of code cleanup is needed, and I'm honestly a little scared of the community reaction to my dependencies list. I still need to add camera control for the PTZ.
But, as it turns out, getting to a usable point only takes about three weeks of coding, once you find all the pieces. I was expecting these apps to be a lot harder, but it's pretty reasonable…. as long as you don't do anything yourself!
I don't like bash. The only time I use it is for fixed lists of
commands with almost no logic. Here's a vague draft of what I'd do differently, subject to change of course.
People rm -rf stuff they would rather not, all the time. I don't
do this, because I rarely rm anything. GUI file managers are a lot less likely to
result in a screwup when tired.
Destructive commands should know whether they are being run in a script.
They should prompt you if run interactively. If run in a script, they should
have their normal behavior, but you should have the option to enable prompts.
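As a mockup of the idea (in Python rather than the hypothetical shell), the interactive/script distinction is basically just a tty check plus an opt-in flag:

```python
import sys

def confirm_destructive(action: str, force_prompt: bool = False) -> bool:
    # Prompt only at a terminal; scripts keep the normal silent behavior
    # unless they explicitly opt back into prompting.
    if not sys.stdin.isatty() and not force_prompt:
        return True
    answer = input(f"Really {action}? [y/N] ")
    return answer.strip().lower() == "y"
```

So the same destructive command would ask at a prompt, but run as it always has inside a deploy script.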
It's common to run into issues with word splitting. Shell should support
command(foo, bar) syntax. In fact, parens should be reserved for named tuple objects.
Want to do ls? Fine, simple enough.
Want to do dd? Sorry, you gotta do dd(input="src", output="dest"), because dd should *only*
accept named tuples.
Within these, unquoted strings should not be a thing. This should be a sign that you are doing
something resembling “real programming”, not “a quick hacky script” and now your code needs to act like it.
In fact, anything in them should work just like a typical scripting language.
(1+1, 4) should evaluate to (2, 4).
Commands should be markable as object aware, or not. Object aware commands should take frames of
MessagePack on their stdio.
Byte streams are far too freeform and invite tons of incompatible unusual formats.
I am not interested in spawning 100000 processes to do something with a file. Instead, we should be able to pass objects to servers.
Want to convert one jpg?
loadfile petunias.jpg | convert webp | savefile out.webp
Want to convert 5000?
server convert webp cnv
for i in listfiles{
    loadfile i | req cnv | savefile(i+".webp")
}
What happens? The shell passes the file object to convert. Convert
handles things in FIFO order. All servers must do so. Commands must also generate a done
frame after processing one “request”. This done frame is not displayed directly.
When that happens, the shell treats whatever came before the “done” as the response,
which is piped to savefile.
Loadfile knows to generate a done frame, like a grocery conveyor belt separator, when it
has sent all the chunks of the file. Req knows to buffer everything and wait for that done
frame before sending it all as one “request”.
Want to make a new server? Just write a command that can accept multiple input frames before EOF.
To make things easier, any command that has not already sent a done frame, should auto-send one
when it exits so the programmer doesn't have to think about it.
There should be two kinds of “Get input” command, and one should wait for the done frame.
And servers should auto restart if they exit normally, so that any command can be used as one.
Just wait for a done frame, do stuff, return your data, and exit.
If your command is not object aware, it can be assumed to be a legacy byte streaming application. We can just send it any bytes frames we get, discard any others, and convert
done to EOF, since it likely can't handle multiple files.
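To make the frame idea concrete, here's a rough Python mockup of what an object aware “server” loop might look like. The frame format (a MessagePack map with a “type” key) is invented purely for illustration:

```python
import sys

import msgpack

def emit(obj):
    """Write one MessagePack frame to stdout."""
    sys.stdout.buffer.write(msgpack.packb(obj))
    sys.stdout.buffer.flush()

def frames():
    """Yield MessagePack frames from stdin until EOF."""
    yield from msgpack.Unpacker(sys.stdin.buffer, raw=False)

# A trivial "server": buffer frames until a done frame arrives, treat the
# buffer as one request, respond, then emit our own done frame.
request = []
for frame in frames():
    if frame.get("type") == "done":
        emit({"type": "data", "payload": len(request)})
        emit({"type": "done"})
        request = []
    else:
        request.append(frame)
```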
Just a quick post to share my current set of extensions!
kylepaulsen.stack-tabs
Most recently used tab stays on the far left
ymotongpoo.licenser
Use the command palette, type the short name of a license, get a license header.
voldemortensen.rainbow-tags
Give HTML tags colors that match their corresponding closing tags.
dhide.timestamper
Insert a UNIX timestamp
luisfontes19.vscode-swissknife
A ton of random utils like base64 and timestamp conversion and uuidv4
sirtori.indenticator
Highlight the current indent depth