It doesn’t escape my notice that artificial intelligence, with its neural networks, machine learning, and ability to consume vast amounts of material and find the interconnections, patterns, and insights within it, is a kind of ‘mass collaboration’.
On the other hand, it can be a kind of black box, mass-turbation engine, because it’s a single entity talking to itself rather than a multitude of perturbations - unless you consider adversarial networks as a type of debate amongst machines.
This week I attended several AI conferences and pitches from founders. AI is in everything at the moment because we are in the middle of another Silicon Valley hype cycle. These seem to come every 2-3 years. Let’s take a moment to mourn the passing of Web 3, Metaverse, Blockchain, Cloud Computing, Edge Computing, E-Commerce (as a thing), Mobile, Java games. I am forgetting all the others but feel free to remind me.
For what it’s worth, here are a few of my observations.
I haven’t seen anything ‘intelligent’, nor do I expect to in my lifetime. This doesn’t mean that I won’t see something ‘incredible’. I just don’t expect to meet a new man-made form of intelligence. I expect that someday there will be one, but I’m not holding my breath. Claiming that something is intelligent now sets the wrong expectations and is unhelpful to the development process. It becomes a scapegoat when things go wrong. Too many articles are devoted to a dystopian, SkyNet scenario where AI turns us all into slaves. In the meantime, AI does a shitty job of sorting my email. Let’s all have a reality check, here.
We should be worried about the ethical uses of AI not because it’s going to become a super intelligence that will enslave us but because shitty AI is going to affect our lives in really negative ways. Corporations will use AI to make decisions about our financial transactions, creditworthiness, trustworthiness, candidacy, eligibility, etc. Governments will use AI to determine whether or not we receive assistance, services or welfare. And when these systems go wrong, they have life-or-death consequences. The reality is more prosaic than what is in Mr Musk’s imagination. Let’s do a better job of rolling out applications into the real world that do less harm.
AI is an impressive prediction engine. Feed it carefully curated data and a well-crafted model and it does a good job of learning to answer the questions you want answered. This is pretty amazing at times. But it is predictive behavior and it has its limits. Until machines understand more about the context of their decisions and where they factor into the food chain (e.g. some kind of self-realization), they are going to be slaves to prediction. Let’s not get carried away by anthropomorphizing them too much. We do this enough with our pets as it is (and that doesn’t bother me because it makes my life better not worse).
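To make that concrete, here is a toy sketch of what I mean by a prediction engine (my own illustration, using scikit-learn and made-up example emails; it doesn’t come from any real product):

```python
# A toy "prediction engine": it learns to answer exactly one question,
# namely whether an email looks important, based purely on word patterns.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up training examples, labelled important (1) or not (0).
emails = [
    "Quarterly invoice attached, please pay by Friday",
    "WIN a FREE cruise, click now!!!",
    "Board meeting moved to Tuesday at 10am",
    "Hot singles in your area",
]
labels = [1, 0, 1, 0]

# Fit a classifier: it maps word statistics to labels, nothing else.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# It will answer the question it was trained on...
print(model.predict(["Invoice overdue, final notice"]))  # probably [1]

# ...but it has no idea what an invoice is, why it matters, or where its
# answer sits in the food chain. Prediction, not understanding.
print(model.predict(["Congratulations, your FREE cruise invoice is ready"]))
```

The model does exactly what it was asked and nothing more, which is the point.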
AI is not necessarily going to cause an energy crunch. I am a fan of fission and fusion but I wouldn’t go so far as to claim that we are going to be multiplying our data centers at a terrific speed because of AI and should therefore start buying stocks in uranium. By all means, go ahead and buy uranium for other reasons. We need fission in the short-term to meet our green energy goals. AI is demanding a lot of energy right now because it’s been rushed to market by Mr Altman and others to capitalize on the hype cycle. The fact that it’s such an energy hog is a bug and not a feature. I’ve seen more advanced forms of AI being developed that actually run faster, use far less energy, and are more economical with their data models and code. So, let’s not get carried away here, either. We are where we are right now because it’s early days and people want to be right rather than do right.
AI will be embedded in everything and will become commoditized, in the same way e-commerce did. During the e-commerce hype cycle it was ‘a thing’ and everyone wanted to invest in the next e-commerce company, but then every company became an e-commerce company, and e-commerce is now something I can integrate into my business with low code or no code thanks to Shopify, Wix and others. There will be no ‘AI company’ in the future, either. Some will be better at implementing it than others and will probably go and tout that for a while, but even Amazon, which is arguably one of the best e-commerce companies out there, doesn’t talk about e-commerce in its marketing materials. Instead, it talks about being a really convenient shopping experience with excellent logistics and all of that. I don’t see them (or Netflix, for that matter) advertising their latest recommendation engine algorithm.
Blockchain is a more impressive innovation than AI. AI has been around for over sixty years. I even studied it myself in the 80s, coded in Prolog, and dreamt of decision trees. Even then, it was touted as the next big thing. But we understood back then what it was capable of. We just needed a few things to fall into place, like massive gobs of data and computing power. There is nothing in AI today that wasn’t foreseen before. Blockchain is different because nobody had any idea that digital code could someday be made unique, precious or scarce. Nobody had any idea that we could eventually run unhackable code on an unstoppable computer (like Ethereum). This was beyond our imagination. And now that it’s here, it has led to more explosive innovation in the last ten years than AI has done in sixty. And if you couple AI with smart contracts running on-chain then you have an extremely interesting and powerful combination the likes of which we have never seen before. This is far, far more exciting to me.
If you have a business running AI on Ethereum then please contact me!