For a while, I was using KDE. The configurability was pretty great.
Now I'm using Cinnamon. It's just a little bit more standard, and has just a little less to tweak.
On a lot of my software projects, I haven't been removing many features, but I sure have been removing non-standard things and reinvented wheels.
I was using FVWM in a custom Pi distro, partly because PIXEL gave me issues in a read-only environment. Moving to FVWM got the project working fast, but I've since resolved those issues. The reason for diverging from the default no longer applies, so I have returned to PIXEL.
Sometimes, the stock solution doesn't work, or we don't have time to learn it, forcing us to quickly hack something together.
But that doesn't mean we should stick with our fragmented, scratch-built stuff forever.
It's easy to fall into the sunk-cost fallacy and keep maintaining custom stuff,
when the standard issue would only take a little effort to switch to.
This site used to be on Hubzilla, and for a long time, I didn't want to switch away,
because of the effort. But as it turned out, going to the more standard DokuWiki was not
too hard, and was a chance to tidy up a bit too.
I hate minimalist software that requires the user to understand and control every detail of the machine. I don't like programming in C. I have no interest in doing the computer's job for it.
But one time I DO like to get rid of code is when it's unusual, custom, one-off code. Why? Because that code has *weight*.
Perceived weight, that feeling of being dragged down by software, doesn't always mean the software uses a lot of CPU. Sometimes the burden is on the user, or the developer, not the computer.
If I write an application, I want every bit of effort to go directly to making that application better, in a direct, user-accessible way. But if I reinvent the wheel, use some guy's brand-new language, or something like that, then a lot of my effort just goes to maintaining the infrastructure.
Every custom thing I build, I have to maintain until I throw it away. If I don't want my life bogged down by config files and broken Arch packages, I have to be careful not to build, and especially not to get dependent on, anything that doesn't provide real value.
Sometimes people do this for educational reasons. But doing this on a real project is like wearing ankle weights when you're just trying to get things done.
Some things pretend to make life easier, but they really don't. If your home automation
solution is going to result in someone getting annoyed when someone accidentally uses the manual
light switch, I want nothing to do with it.
I'm not going to spend a month setting up a solution for a ten-minute task that I only do once a month.
And I'm certainly not going to use some terrible pseudo-assistant doing things like automatically
running backups when I plug in a hard drive, when it has no way to tell whether I even meant to plug in that drive.
Systems like that are about as helpful as someone giving me the world's most boring math textbook
for Christmas, complete with a passive-aggressive note. It's less of a nice gesture, and more of an extra obligation.
One programming principle we rarely see outside of UI is the Principle of Least Surprise.
If users don't want to be surprised by unpredictable and non-standard behavior, why would programmers?
These days, when looking for ways to improve something, especially a small personal project,
I ask “What have I been doing differently from everyone else, and is it time to stop?”.
I don't believe that perfection is achieved when there is nothing left to take away. Usually,
taking things away just puts them somewhere else, generally in the user's lap.
Nobody wants code to be more complicated than it needs to be, but simplicity is ultimately just
a heuristic, and it isn't the only metric that matters.
Points of divergence, like a custom function that could
have just used the standard library, or some unusual bit of hardware,
can create problems.
They impede interoperability. They generally add something
to maintain. They add something to learn just for this project that may not apply elsewhere.
In fact, it almost certainly won't apply elsewhere, because when you get in the habit of building
just the perfect solution to every task, you're probably going to start from scratch again next time.
When I get annoyed at this kind of thing, I often find myself saying “Just right is all wrong”, meaning that if something is perfectly suited to the application in every way, with nothing that isn't needed and everything that is, it's probably going to have issues in the real world,
where the application is less well defined.
Every tool you use has a certain amount of “Necessary Divergence”. When you add this tool,
how much custom, application specific stuff do you need to stack on top to make it useful?
Does the application find your printer automatically, or is there a config file?
Do the default settings work, or is tweaking required? If you need to change something,
is that change a single, isolated, automatable unit, like a config file dropped into a directory,
or is it stored in a database, requiring manual use of GUI tools or detailed knowledge of an API?
I believe this need for high-effort customization is orthogonal to, and worse than, both
complexity and tight coupling. Instead of something complex that *might* be a hassle,
this high level of divergence directly expresses the fact that it *is* a hassle.
Software doesn't have to diverge from the norms to have weight. Anything “heavy” the software
requires of you can eventually feel like a burden, making you wish the thing never existed.
Weight can be a SaaS monthly fee. Weight can be a requirement to run on modern hardware, or to have a quality internet connection.
Weight can be long build times or heavy setup, and at a certain point, complexity itself becomes a weight.
Sometimes weight is vendor lock-in. Sometimes it can be a lack of configurability or scriptability
that keeps it from fitting the application.
Sometimes, it's a negative environmental consequence, like the massive energy use of Bitcoin.
There are many things software can demand of us. And it had better be worth it!
Sometimes, a bit of custom work can be lighter than the existing solutions. But not that often.
And worse, if you don't bother learning the industry standard, how will you even know if your custom work is truly saving time?
Why should we praise software for being “light” if it drags our life down in other ways?
I've had about enough of heavy software. Not in terms of CPU or RAM, but in terms of hours.
I don't care what the weight is. Whether I'm wasting my time reading manuals or waiting for it to load, it's all the same: wasted time. (Although time spent reading the manual is slightly worse, because I can't be doing something else.)
It's all a problem that I don't want. I want tech to work like it does in the movies.
It should just work, every time. It should be obvious how to use it. It should be fast.
It can be as big and complex as it wants, as long as it's got unit tests and nobody
has to rewrite it every three months.
In a Marvel movie, tech is part of the story. Sometimes a key element, or even most of the
actual plot, but it never overshadows the characters.
I'm sure Fitz and Simmons spend plenty of long hours in the lab debugging just like we do,
but when they're done, and they take the whole thing into the field, *it just works*.
Unfortunately, not all modern tech has that payoff, especially DIY stuff.
Some is just an endless maintenance treadmill, wondering why the lights don't turn on in your custom home automation, or why the transparency isn't right in your riced Linux setup.
And sometimes, this kind of weight is even worse than buggy software.
90s computers crashed ALL the time. And yet, we probably spent less time on
them, and got more productivity and enjoyment out of them than we do now.
Sometimes they didn't work, but we survived, and, except for the kind of epic crashes
that are thankfully uncommon now, maintaining our computers wasn't like a second part-time job, at least for most of us.
I'm still going to be sticking to free software whenever I can. But I'm going to
spend even less time than before customizing and tweaking and building my own experimental stuff.
And I'm going to try even harder to make things work with existing standards, rather than
go off the beaten path.
Because I want to build fantastic apps, or just not build them at all. And even if the app itself is great, in my mind, just the fact of being custom is a downside that needs to be carefully weighed. Sometimes custom is the way to go, but only after careful thought.
We deserve to have lives outside of programming. If that means I'm not going to learn the details of runit or FreeBSD this year, or even that I have to pay for a few bits of software, or
even *gasp* get up to turn on a light, or do that most dreadful and shame-filled chore called “Making my own music playlist instead of building a robot to tell me what to listen to”, I think that's probably worth it.
One time I needed a random number generator. It was for the PIC10, which has something like a few dozen bytes of RAM. I didn't need security, just speed and reasonably long periods.
After much testing, I came up with one that passes most statistical tests. Emphasis on “most”. It's not a very good RNG, but a few people seem to be using it, so I figured I'd better migrate it here.
It has an undefined period size. Its state loops back to the start at different times depending on what the start is. So far I don't think there have been any issues, but there could be.
Are there very short cycles? Maybe! Likely not too short, though, as there is a fixed counter that is guaranteed not to repeat for 256 steps, and IIRC the testing I did with randomly chosen seeds usually found periods in the hundreds of millions.
This thing is pure trial and error. There's no logic to it, and I wouldn't even begin to understand how to formally analyze it. If you need good fast numbers, use XORShift, or maybe blake2b hash a counter or something if you need even better.
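For comparison, here's roughly what I mean by xorshift: a minimal sketch of Marsaglia's plain 32-bit xorshift (the 13/17/5 shift triple). It's just a reference to a well-known generator, not part of XABC, and the example seed below is arbitrary.

#include <stdint.h>

/* Marsaglia's 32-bit xorshift, shift triple 13/17/5.
   Period is 2^32 - 1; the state just needs to be seeded to any nonzero value. */
uint32_t xorshift32(uint32_t *state)
{
    uint32_t s = *state;
    s ^= s << 13;
    s ^= s >> 17;
    s ^= s << 5;
    *state = s;
    return s;
}

/* Usage: uint32_t st = 0x12345678; uint32_t r = xorshift32(&st); */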
If you need four bytes of state and very few operations, all single 8-bit instructions, and you don't care if there's a chance that it loops endlessly in a very short cycle, and you don't need security, maybe you need XABC! It's named for the 4 bytes of state it carries, in 4 separate variables.
Just don't ask me how it works. I have no clue. It's blazing fast though!
uint8_t x, a, b, c;

uint8_t randRNG8() // Originally unsigned char randomize().
{
    x++;                      // x is incremented every round and is not affected by any other variable
    a = (a ^ c ^ x);          // note the mix of addition and XOR
    b = (b + a);              // and the use of very few instructions
    c = ((c + (b >> 1)) ^ a); // the right shift is to ensure that high-order bits from b
    return (c);               // can affect the low-order bits of the other variables
}
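If you want to poke at the period claims yourself, here's a rough sketch of the kind of brute-force test I'm talking about, written as ordinary desktop C rather than PIC code, with the generator repeated so the snippet compiles on its own. The seed values are arbitrary examples; the loop always finishes because the state update is invertible, so the starting state eventually comes back around, and always after a multiple of 256 steps, since x is a plain 8-bit counter.

#include <stdint.h>
#include <stdio.h>

/* Same generator as above, repeated so this test compiles on its own. */
uint8_t x, a, b, c;

uint8_t randRNG8(void)
{
    x++;
    a = (a ^ c ^ x);
    b = (b + a);
    c = ((c + (b >> 1)) ^ a);
    return (c);
}

int main(void)
{
    /* Arbitrary example seed; nothing special about these values. */
    x = 0; a = 31; b = 17; c = 5;
    uint8_t x0 = x, a0 = a, b0 = b, c0 = c;

    /* Count steps until the full 4-byte state repeats. The whole state space is
       only 2^32, so this always terminates, though it may take a little while. */
    uint64_t period = 0;
    do {
        (void)randRNG8();
        period++;
    } while (!(x == x0 && a == a0 && b == b0 && c == c0));

    printf("Period for this seed: %llu\n", (unsigned long long)period);
    return 0;
}

Different seeds land on different cycles, which is exactly the undefined-period issue mentioned above.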
There you have it! You can spend 5+ years on a FOSS automation framework, and a week on a terrible RNG, and find the terrible RNG is actually more useful to more people than most of your more professional projects!
It seems that someone has implemented it in assembly code here: https://rosettacode.org/wiki/Evolutionary_algorithm
Of all the decentralized blog solutions, Hubzilla seems the most promising. I wish the project the best. I may return at some point. It's seriously cool.
But I just can't even. At least not right now. Why? I'm glad (nobody at all) asked!
Some of this may have been fixed. Some may not be an issue at all, for those with
more time to explore. But here's how I saw things, as a casual user.
It's a “click random stuff until you maybe get somewhere” design. And not all that well documented.
In fact, some things aren't documented at all. Syncing between hubs using one option syncs “a couple months of posts” last I checked. What does that mean? It's hardly reassuring when you want to know your content will be preserved.
Setup time? DokuWiki: an hour, maybe? Including a script to export my Hubzilla content.
Hubzilla: hours. Multiple different tutorials needed. I had trouble making an account because of email issues.
I just can't trust something like that. With no one-click setup, I can't feel comfortable that this is a solid product that I can reliably reinstall, in a guaranteed way, at any time. It's one of the reasons I avoid Arch!
Self-hosting isn't ideal, either: I don't feel like I could run it on a Raspberry Pi all that well.
It's not exactly easy to reskin. It's not clear without a deep dive how you make an app. It all feels very “enthusiasts only”.
It wasn't clear how to change the templates (such as for the randomly generated text at the top of this page!) without a lot of hassle.
It isn't *quite* P2P. It's still very much based on federated hubs and servers. It's not like there's an app you can download that still works when the internet goes down, a la Jami.
There are blog posts. There are articles. There are web pages. There are links between them. (At least when I was using it,) some contexts supported relative links to some of these; others did not, and you had to use absolute links, breaking the replication feature. It seemed that rendering did not work exactly the same between the various ways of publishing content.
Some applications are supported on some hubs, but not others.
There are too many ways of doing things, and no way to know which to use until you try them all and realize that none of them is a very good fit.
What I *wanted* was organized web pages, and for the main page to show recent pages, and DokuWiki makes that easy.
On Hubzilla, the easy option I found was to use blog posts on the stream and categorize them with tags. But then I ran into issues with the tag listing not being displayed how I wanted, and eventually I just decided to move on rather than put more time into configuring.
URLs for blog posts have big nasty identifiers. Of course, pages or articles might be different, but… Then you run into confusion with sync, relative links, and the like.
When you put it all together… I decided I'd rather just use DokuWiki. It works, it's trusted, it's themable, and there's no fuss. It doesn't discourage me from writing content with the sheer number of clicks its UI needs.
I certainly learned a lot. And one of the main things I learned was the value of “just works”. If there were a high performance, blockchain free, fully P2P blog platform that could be installed with one command and had an Android app, I'd be all for it, regardless of complexity.
But Hubzilla has a long way to go before it can replace Facebook. Or
perhaps someday we will have a P2P product that works like DokuWiki!
Until then, the KISS principle might have a place after all. A tiny one, because it's a bit overrated, but a place nonetheless.
Your job is not to write code. Nobody will hire you because their company discovered a huge market for lines of code. Almost nobody will download an app and say “Wow, what great code!!!”.
If it doesn't work in the real world, and doesn't meet requirements, it's not good code, no matter how many WorseIsBetter excuses you make. Once you actually learn to respect end-users, it gets a lot easier.
Even buggy code with a good UI and error handling can be better than clean code that expects to live in a perfect world. I 100% will not hesitate to increase software complexity if that's what it takes to handle user errors and unreliable networks. And I don't even care if a few bugs slip in as a result, as long as the whole system, taking into account all the external issues, gets more reliable.
And the next level after that is that clever code is trash code. Does your use of Tiermann's First-Order Polymeric Transform (not a real thing) make anything faster? Will it help me port this later? Does it enable some cool new feature? Does it reduce bugs?
If not, I don't care. If you've tweaked and tuned and debugged and got everything to be just perfect, down to ten lines with a recursive higher order function, I literally will not care.
If you did it at home and blogged about it, I would think it was cool (as long as you didn't pretend that kind of thing is going to carry over to real-life projects), but if I have to deal with it, I'll be annoyed I had to wait for you to write it, then annoyed that I had to read it.
You probably can't do better than a standard library. And if you can, you probably don't have time. And if you do, you'll still need to show that it has real benefits, because, as previously mentioned, it's not about the code, and the community behind the standards is worth more than a slight bit of extra code quality.
Even copy and pasting saves time, and most of the time, the thing you're copying is very obvious. Why bother reinventing? If this is an educational drill… When do the real projects start?
And maybe most importantly, nobody is going to use your amazing new framework unless you can seriously support it. Maybe it's better than anything that came before, and will inspire a few. But a lot of the people who would care have their *own* amazing new framework. And the practically minded won't look beyond the word “new”, and will read it as “untested trash full of undocumented behavior that the author learned to work around and isn't going to fix”.
In interpersonal growth there's the idea of “listening to understand” vs “listening to reply”. Be honest with yourself about what others are really going to be doing. You can fart the world's best fart, but the other fartists will prefer their own.
Look at things from a high level. What do the trusted solutions that people love do? What are people complaining about? Do the successful projects use original code or libraries? What kinds of changes are likely to create a nightmare of bugs?
Unless you're formally proving things, the high-level decisions of coding aren't math or science, they're art. Theory and blog posts don't replace understanding the application and providing value to users.
Which reminds me, programming blogs all say the same 10 oversimplified things, usually about the virtues of oversimplifying things. If you're reading this, you should probably be reading about a quarter the amount of programming content you actually do.
Do you know how to program? Great. Keep doing that and you'll get better, till you decide to change careers.
You don't need to sacrifice your whole life at a glowing rectangular pixel shrine, doing code katas all weekend. You certainly don't need to do competitive coding. And if you do choose to do those, remember that your ultra clever one liners and obscure patterns are just one more nonstandard thing for the maintainers to learn.
And finally, if you gain enough respect that people care about your opinion, you almost certainly WILL have to defend the projects you work on from people who want to make the whole codebase their personal splashy fun times pool of Haskell and Lisp metaprogramming, microservices, and custom build tools, and they might not even know Haskell yet(!). That someone might even be you, and that project might be a website about coin collecting.
Think really hard about what the actual benefits are. Maybe there's something to gain by having everyone learn a new language on the clock. Or maybe, this is a two-person job and you shouldn't make real paid projects into testing grounds, when they should be very simple and easy.
Don't invent more work just because you're bored. Get the task done, and go learn Haskell on your own time, or suggest your boss set aside a few hours for everyone to learn whatever it is. But make sure you never wind up with a project full of super cool tech that only one person understands, just because you couldn't say no.
The difference between actual advice from Marie Kondo and random people vaguely inspired by her really is astounding.
I haven't read her stuff, though I might have to now to understand what this is all about.
But from what I can tell, the knockoffs are often really pushy and act like anyone who doesn't like empty space more than stuff, or who keeps decorations or trinkets, might as well be a pissjug hoarder.
I'd even say they sometimes encourage landfill filling by creating such a rush to get rid of stuff that sometimes taking it to a thrift store gets forgotten.
But I really haven't seen any of the pushiness at all from her. *Someone* is out there saying we should only have 30 books, but not her.
As I see it, the aggressive version of less is more is behind a LOT of bad ideas, and a lot of neutral ideas that are purely a matter of preference that get promoted as being for everyone.
I still have pretty much no interest in “functionalist design” or “spend on experiences” aka restaurants, or learning Vim and Emacs.
But Kondo isn't shoving these down everyone's throats, and often doesn't even seem to be promoting anything even vaguely similar to the stuff we've all been complaining about.
I'm sure she has plenty of advice I wouldn't like or agree with, but then again I'm sure any random person in a crowd would have just as much of it if they wrote a book.
She said that if the image of getting rid of books makes you angry, that should tell you how passionate you are about books.
So Marie Kondo, thanks a lot for showing the world how many people really do like their stuff, and have no interest in tossing it. And thanks for encouraging lots of people who don't actually like their stuff to give it a new life with someone who will enjoy it.
And thanks for showing everyone, in a less political than usual example, how nonsense rumors and biased thinking get to us all.
Let's all try to do a little less gossip and a little more investigation, because gossip is how you lose friends, and who doesn't love a good game of amateur detective?
I posted one time in the r/90s subreddit, on a thread about how EVERYONE had a magnifying glass in their drawer, even if they didn't need one or know anyone who did.
Someone wondered why, and I suggested that maybe it was a symbol. It meant you're always ready to find something hidden, that there was always something more even in the places you thought you understood.
“And so it is the Herald's task the hidden truth to win,
To see behind the face without and find the face within.”