In true Jacob fashion I’ve been doom-dabbling in the greater scope of the information age, and currently my intrigue is directed mostly at predictive AI and my behaviour: its efficacy, and my concerns around free will, destiny and the relationship therein. I should assert my definitions here: this new-age destiny is not necessarily preordained by some higher power, but is predestined by my genetics and experience, and is predictable through machine learning of my cohort. The best example to simplify this is how it’s become clear we no longer need cookies that track people in order to predict purchasing decisions and target marketing – phones don’t need to listen to know you want to buy a fridge; they’re actively predicting it.
single exposure, from the car
I think it’s easy for us to underestimate the power of algorithms because there is no end to that power – it’s simply limited by data and processing. So, basically, Google could know that I’m about to buy a fridge from signals like my search history, my power usage, infra-red heat detection of my fridge (and the food I pull out of it) via my devices (most new phones have IR sensors), or data from my robo-vac identifying through vSLAM and lidar that my fridge is old – maybe it’s shaking a bunch. In this analogy it’s a fridge, so really it’s fine, but consider a robot analysing expression, body heat, heart rate, social interactions, the ordering of delivery food every day, the lack of exercise: it could know that, based on some other person before me who exhibited the same behaviours, I am spiralling and perhaps more vulnerable to certain methods of influence.
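To make that concrete, here’s a toy sketch of how a bunch of individually weak signals can be combined into one confident prediction – the signal names, weights and bias are all invented for illustration, not anything Google actually uses; the mechanism is just plain logistic regression.

```python
import math

# Hypothetical weights learned from a cohort; every name here is
# illustrative, not a real tracking signal.
WEIGHTS = {
    "searched_fridges": 2.1,
    "fridge_age_years": 0.3,
    "compressor_vibration": 1.4,
    "power_draw_anomaly": 0.9,
}
BIAS = -4.0

def purchase_likelihood(signals: dict) -> float:
    """Combine weak signals into one probability via a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in signals.items())
    return 1 / (1 + math.exp(-z))

# No single signal is conclusive, but together they push the score way up.
print(round(purchase_likelihood(
    {"searched_fridges": 1, "fridge_age_years": 12,
     "compressor_vibration": 1, "power_draw_anomaly": 1}), 2))  # → 0.98
```

The point isn’t the maths – it’s that each input on its own looks harmless, and the confidence only appears once they’re aggregated.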
damn fires – Moogerah
When it comes to predicting one person, this can’t be done simply through learning of that person; the real insight comes from aggregating the cohort and creating buckets of similarities, such as browser fingerprinting and categorisation through tools like FLoC. For example, I’m now 33, I do the things I do, I behave and feel the way I do, and I’m sure that in my cohort there are 34-year-olds who, at 33, exhibited many behaviours similar to mine. With their patterns logged, if I exhibit those patterns when I’m 34, the models have a pretty decent idea that I will continue to follow the patterns exhibited by that person one year older than me, thirty years older than me, one hour older than me. Phones don’t listen (well, they do, just not in the way we think), and the reason big-data mongrels like Amazon and Google don’t shut that fallacy down is that it’s easier for us humans to swallow that concept than it is to know that we are predictable, that free will is a rogue concept and that I am very much like a lot of other people – my sense of uniqueness is mostly sustained by my lack of exposure to others and the general emotional unavailability of humans.
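The cohort trick above can be sketched in a few lines – this is a deliberately naive nearest-neighbour toy, with made-up people and behaviours; real systems use vastly richer features and distance metrics, but the shape of the idea is the same: find the person who looked most like you at your age, and borrow their next move.

```python
# Toy cohort-based prediction: find the person whose logged behaviour
# at my age most resembles mine, then predict I'll do what they did next.
# All names and behaviours here are invented.

def similarity(a: set, b: set) -> int:
    # Count shared behaviours; real systems use far richer metrics.
    return len(a & b)

cohort = {
    # person: (behaviours at 33, behaviour observed at 34)
    "A": ({"surf_ski", "doom_scroll", "delivery_food"}, "bought_new_board"),
    "B": ({"gym", "meal_prep", "early_riser"}, "ran_marathon"),
}

me_at_33 = {"surf_ski", "doom_scroll", "van_trips"}

# Pick the nearest neighbour and inherit their pattern.
nearest = max(cohort, key=lambda p: similarity(cohort[p][0], me_at_33))
print(cohort[nearest][1])  # → bought_new_board
```

Scale the cohort to millions of logged lives and the "prediction" stops feeling like a guess.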
Moreton Bay – I was paddling the surf ski when this rolled over; it got a lil hectic
My concern here is that if a robot knows what I’m about to do, I don’t want to do it – being forced into normalcy is a problem for me, because I don’t like crowds (on land or in the mind). Sure, a robot simply knowing what I’m going to do isn’t a problem, but it knowing I can be ushered, through my display of susceptibilities similar to others before me, is the real problem. And I’m not talking about getting an ad for a fridge because my fridge is about to die (that is fkn awesome, imo); I’m concerned about being used as a tool to rally and empower the goals of some sketchy corporation or government. In my eyes it’s a paradox for an ethical corp/gov to use these predictive models, because the very nature of them is unethical, so really there is no chance for this modelling and prediction to be used for good.
Cylinder Beach undulations
Unless, of course, we can embed true altruism into these models – but given these models are built by humans, I’m not as confident as I’d like to be. What I mean is that these algorithms are only as good as the data they consume, and the data comes from humans, and most of us are fkd up; therefore the robot is fucked up. UNLESS we can provide the algorithm with only altruistic data, which means someone making a decision around what is altruistic, which is hard… unless you’re the Chinese government and are tracking and scoring people on their morality and civility – for example, they could just take the top 5% of good people and train models on those people. I’m glad I don’t live in those conditions, but I do believe the rest of the world will benefit from their social-credit experiment, and I think ethical models and data will be the single most valuable resource (if not already).
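That "top 5%" filtering step is trivial to express, which is part of what makes it unsettling – the hard and dangerous part is who writes the score. A minimal sketch, with invented names and civility scores:

```python
# Sketch of the "train only on the top 5%" idea: rank people by a
# hypothetical civility score and keep the top slice as training data.
# Names and scores are invented for illustration.
scored = [("p1", 42), ("p2", 91), ("p3", 67), ("p4", 88), ("p5", 99),
          ("p6", 73), ("p7", 55), ("p8", 81), ("p9", 60), ("p10", 95)]

scored.sort(key=lambda p: p[1], reverse=True)
cutoff = max(1, len(scored) * 5 // 100)  # top 5%, at least one person
training_set = [name for name, _ in scored[:cutoff]]
print(training_set)  # → ['p5']
```

Three lines of logic; the entire ethical burden lives in the numbers fed into `scored`.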
Dillon Stephens, my fave to cruise with (and shoot)
Free will, to me and I assume most, in this sense, means that I have some level of control over my direction in life – as free from control as possible. Even my fence-sitting on a topic or decision is a choice in itself, and I don’t want a robot to identify my indecision and influence me in one direction, because then the decision around which direction I go is being made by a model designed by, most likely, some kind of government-regulated tech giant – and I don’t think the tech industry is at all founded on good ethics. Even when I see things that are supposed to be ethical advancements, it seems they never are. For example, Google saying they’re going to stop all non-approved third-party cookies in Chrome and on Android sounds great, right? No more tracking, apparently. Actually, this is a HUGE issue, because it means data can no longer flow out of Google without their approval or sale, giving them a data monopoly over the users of their platform. I’ve given up on privacy – it’s an illusion – but if only Google have access to me, their power is unbridled. I’d rather lots of people have my data than one organisation, because in no aspect of existence is monopolisation a good thing.