Hello there, everyone!
I wish you all a splendid 2025!
However, let's cut to the chase.
As the title suggests, I am questioning a lot of the things I have been doing, and have been passionate about, for a long time.
Maybe some of you already knew, but I study AI. I am in the third and final year of my bachelor's and have been truly obsessed with the field ever since I started it.
When I started the bachelor's, I discovered that I found the thought process behind programming very relaxing. It had something to it that my other hobbies at the time, which mainly consisted of video games, could not replicate. I decided to self-study a lot on my own initiative: I set out to program better than most of my peers, and also made sure to teach myself how AI (mainly machine learning) works, which led me in-depth into related fields like statistics and probability theory, linear algebra, and now calculus. I came to realize that one of the main reasons I liked AI was that it could make one very independent. I always fantasized about the idea of creating your own expert system/personal assistant. How cool and revolutionary would that be?
This was the status quo for me for roughly the entire bachelor's, until about a month ago. I keep relatively close tabs on the news, and honestly, I am becoming increasingly worried when I think about what the consequences could be of implementing AI in the way that most interested or involved companies will be implementing it. Recently, because I am looking for internships, I was talking to an actual expert in the field. He mentioned that he expects that WITHIN THREE BLOODY YEARS, roughly 25% of all European jobs and roughly a third of all American jobs will have disappeared, replaced by AI. I did not expect that shit would hit the fan this quickly, but who am I really to question those predictions when that expert runs circles around me in the fields I am supposed to be specializing in?
Recently I also had a conversation with another company that offers internship positions. I was much less than ecstatic about this one. They basically told me how they want to use generative AI (the kind of AI that creates things like images, text, and nowadays even video and audio) to replace their drawing/photography divisions, which mainly consisted of some old grannies who had been doing the job for 20 years.
They want to cut costs. They claim there is too little money available; no need to even think about raising people's salaries in line with inflation. Yet while they say that, they conveniently forget to mention that the CEOs of the companies in this sector simultaneously have the money to buy another villa, just to ditch it after a couple of days for some god-forsaken reason.
You can call me stupid and short-sighted for not having been worried about this before. I guess there is some fairness in that. I never liked how AI was pitched as replacing the dull and soulless jobs while in reality it seems to be most aggressively displacing people in creative industries, like the artists who have been complaining about AI taking their work away for much longer than I have. I thought this development would take much longer because of big tech's obsession with big data, which I fundamentally believe is far from enough for creating artificial intelligence that can truly reason well in general and perform at a level close to that of humans.
But I guess I was mistaken about how quickly those firms would work around that, by placing their bets on what is a relatively small piece, but what I believe is the final piece of the puzzle: reinforcement learning, a subfield of AI concerned with agents that learn from experience instead of just data (a way of learning that is most similar to how humans learn). Now all the pieces of the puzzle are there. The hard part is behind us (or rather, behind those tech companies). The rise of (at least somewhat) competent AI systems is only a matter of (very little) time.
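To make concrete what I mean by "learning from experience instead of just data": here is a tiny toy example I put together (the environment, states, and rewards are entirely made up for illustration, nothing like the real large-scale systems). The agent starts out knowing nothing, stumbles around, and gradually learns from the rewards it experiences which action is good in each state:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Toy "chain" environment: states 0..4, agent starts at state 0.
# Action 0 = step left, action 1 = step right.
# Reaching state 4 gives reward +1 and ends the episode.
N_STATES, ACTIONS = 5, (0, 1)
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

# Q[(state, action)] = the agent's current estimate of how good that action is there
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(300):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        # (ties between equal Q-values are broken randomly)
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))
        s2, r, done = step(s, a)
        # Q-learning update: nudge the estimate toward the experienced
        # reward plus the discounted value of the best next action
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy: the preferred action in each non-terminal state
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

After a few hundred episodes of pure trial and error, the agent ends up preferring "right" everywhere, i.e. it has learned the behavior purely from experienced rewards, with no dataset in sight. That, scaled up enormously, is the piece of the puzzle I mean.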
If you take this information and combine it with the knowledge that, in the western half of the world, wealth is getting more concentrated than ever, the middle class is slowly but surely being eroded, and home ownership is becoming more a dream than a realistic prospect, I can't help but feel absolutely horrible about the idea that I will be aiding a force that, if it comes to full fruition, will exaggerate this already enormous and growing imbalance between the haves and have-nots to a level we have never witnessed before in human history. Let me show you precisely why I feel so bleak.
The reason AI will only (extremely significantly) increase inequality, and probably dislocate society at a fundamentally deep level, is that in order to build the huge, famous AI models that will replace people, there are very serious barriers to overcome. The first one is money. In order to train such an AI model, you will need a lot, and I mean A LOT, of electricity. To get an idea of the magnitude I am talking about: OpenAI's GPT-4 model reportedly has roughly 1.8 TRILLION parameters. Each parameter (which is essentially a number that has some say in what the output of the model looks like) has to be set and re-adjusted again and again during a training run that can take weeks. And of course, to train those models, you need insanely good hardware. This all comes with a price tag too, of course.
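In case "a parameter gets re-adjusted again and again" sounds abstract: here is the whole idea boiled down to a single made-up parameter (the data and learning rate are toy values I picked for illustration). Real training repeats exactly this kind of nudge, just across trillions of parameters at once, which is where the electricity and hardware bills come from:

```python
# Toy illustration of "training a parameter": fit w in the model y = w * x
# to data generated with true w = 3, by repeatedly nudging w against the
# gradient of the squared error.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (x, y) pairs with y = 3x
w = 0.0    # the single "parameter", arbitrarily initialized
lr = 0.01  # learning rate: how big each re-adjustment is

for _ in range(1000):
    # gradient of the mean squared error 0.5*(w*x - y)^2 w.r.t. w, averaged over the data
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the "re-adjustment": move w a tiny bit to reduce the error

print(round(w, 3))  # converges to ~3.0
```

One number, a thousand tiny corrections, and it settles on the right value. Multiply that by 1.8 trillion numbers and weeks of compute and you see why only a handful of firms can afford to play.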
This barrier I still consider relatively small. The barrier that I think will be the real culprit in locking anyone but the big (tech) firms out of access to these tools is the need for data. All the models you see today need an enormous amount of data to be trained. And guess who have been busy harvesting your and my data without our consent? That's right, Google and Facebook. The two companies we can absolutely trust blindly to never do harm and to always be fair and transparent in their actions and policies, right?
In short, I think that thanks to AI we should prepare for a very bleak future if we allow things to get this far. The capability to use AI in really meaningful ways will be restricted to those who effectively hold power, and those who don't will be outcompeted by the aforementioned AI models. This technology will dislocate society beyond anything we have seen before, and only god knows if and how we will be able to solve the problem at hand.
This realization has made it very difficult for me to stay passionate about what I do. Whereas I could easily spend all my spare time self-studying, I now feel completely indifferent towards it; and when I try, I get this feeling of guilt, resentment, and confusion. I don't know what to do. I feel like a parent who has seen their kid grow up to become an absolute monster.
I could use a fresh perspective on this rather complicated matter.
Feel free to respond in whatever way you want.
Have a good night and stay safe.