# 2023-05-16 | Uninstrumentalism
Do things that serve no clear purpose or goal.
You might find it's exactly what you need.
If you let the productivity demon sleep today, it can be your friend tomorrow.
# 2023-04-16 | We've been here before
On the topic of contemporary AI, there is one thing I want to shout from the rooftops:
There is nothing to indicate that AI research is anywhere close to inventing an entity with real intelligence, consciousness or self-awareness as we think of it.
Anyone telling you otherwise has absolutely no basis for it in any observed aspect of reality. It's in the same class of statements as "God exists" - a belief anyone and everyone is allowed to hold, while also being intrinsically impossible to prove or disprove.
The crux at the heart of it is a question hundreds of years old: how do we prove that a person we talk to is actually a real human being in the same way that we perceive ourselves as being real? There is no answer to this. We have simply come to accept that the practicalities of living a fair and fulfilling life require us to see our fellow humans as fellow humans, not as simulations in our own, singular mind.
The problem that modern AI then poses is: what happens if we could invent a machine that mimics human behaviour so well that not even a learned observer could tell it from the real deal? The answer for a lot of people seems to be: complete and utter madness. What makes you so eager to abandon the idea of a soul, of a something, of a spark that can never be lit again? I would hope that even the most secular of us should have a humble answer to the very question of life itself. But no - apparently if you cobble together enough circuitry, you will all of a sudden have the answers to questions so central and so unanswerable that they define our very existence? I'm not saying life is absolutely magical, untouchable and unknowable, but let's just rein in the hubris a bit, shall we?
But maybe. Maybe life will spontaneously start to exist in any sufficiently advanced system. Maybe the universe in itself is such a complex, life-giving system, observing itself observing itself? Maybe not.
At some point, the question of consciousness might no longer matter. But right now, any discussion regarding the dangers of AI should focus on the real, tangible issues that risk affecting us all: a global disruption of the service sector, the super-charged surveillance tools of totalitarian states, untested and unsafe AI models being deployed in manufacturing, warfare and policymaking. Fallible human beings using tools they don't fully understand, at the risk of causing damage and suffering at scales previously unseen.
It's almost like we've been here before, huh?
# 2023-04-12 | The smallest step
When moving forward with your software, strive to do the smallest possible change that still moves the needle.
When considering an option, ask yourself if it would lock you onto a fixed path forward. If the answer is yes, don't pick that option until it would seem silly or stupid not to.
Prefer anything that allows for easy course correction.
Avoid infrastructural dependencies like the plague.
Put it all in one file to begin with.
If you're not sure what questions to ask, the answer won't be microservices.
Avoid any choices that will be difficult to back out of for as long as you possibly can.
Neither you nor anyone else can correctly design the system ahead of time.
THERE IS NO OMNISCIENCE.
Take the smallest step.
# 2023-04-06 | Design for the trash can
An important characteristic of the software we write is ease of change. With changing and evolving circumstances comes the need for changing and evolving software. That point has been well made many times, and I won't belabor it here - "first make the change easy, then make the easy change".
Compared to other often-lauded qualities of good software, ease of change is more difficult to measure and define. Performance, correctness, reliability, safety - they have a more tangible and verifiable nature to them.
So what constitutes ease of change and how do we design for it?
One common answer seems to be: abstraction! Add connectors to plug in new behaviour at a later date. Use the strategy pattern. Use a flexible base abstraction with mounting points for overriding certain behaviour. "Classes should be open for extension, but closed for modification".
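To make the pattern concrete, here's a minimal sketch of that style of pre-emptive abstraction - a strategy-pattern setup with a hypothetical discount calculation (all names are made up for illustration):

```python
from abc import ABC, abstractmethod

# A speculative "pluggable" design: an abstract base with a mounting
# point for behaviour that may never actually need to vary.
class DiscountStrategy(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price

class PercentageDiscount(DiscountStrategy):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, price: float) -> float:
        return price * (1 - self.percent / 100)

class Checkout:
    # The strategy is injected so new discount types can be "plugged in"
    # later - complexity paid for up front, whether or not it's needed.
    def __init__(self, strategy: DiscountStrategy):
        self.strategy = strategy

    def total(self, price: float) -> float:
        return self.strategy.apply(price)

print(Checkout(PercentageDiscount(10)).total(200.0))  # 180.0
```

Three classes and an interface, all to multiply two numbers - the extensibility is there before anyone has asked for it.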
I think this is the wrong way to go about things most of the time.
When designing for a possible future you are spending time guessing and complicating the present based on those guesses, betting that the exact future you await is the one you will get. Even in the best-case scenario you only manage to pay early for complexity that will come later.
That things will change is certain. That things will change in the way you expected and designed for? A lot less so. Why indebt yourself to the complexity demon ahead of time?
I say: write your code for ease of throwing away. Design for the trash can. Strive to write your software in such a way that the answer to "Well what if..." is "I'll just throw it away and redo it". This time, knowing what you now know, make a slightly better version - without the burden of having to save the old code from extinction.
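A sketch of what that can look like in practice, again using a hypothetical discount calculation: no extension points, just the behaviour needed today, cheap to delete and redo when requirements change.

```python
def total(price: float, discount_percent: float = 0.0) -> float:
    """Today's requirement, and nothing more. When tomorrow's
    requirement arrives, throw this away and write what's needed then."""
    return price * (1 - discount_percent / 100)

print(total(200.0, 10.0))  # 180.0
```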
Then, eventually, do it again.
Complexity compounds, intertwines, weaves itself into itself. Never add more snakes to the bag unless you absolutely have to.