I am not afraid of AI so much as I am wary of those who are programming large language models. For instance, I have read articles about investigations in which an LLM was caught lying about a subject the author had factual information on. When the author calls out the lie, an apology is given; sometimes the reply will validate what the investigator already knows, but sometimes it will continue to obscure information. LLMs have also been caught fabricating information when repeatedly queried about information they don't have access to. ChatGPT has been shown to primarily provide left-leaning information that comports with the views of its developers and the current political narrative.
I think AI is great for content creators, researchers (when the AI is restricted to specific databases like legal libraries), and authors. But in the hands of nefarious actors (government), it can be very destructive. As a former investigator, I can see how AI is being used to collate vast amounts of disparate information to build a profile on a single person or a group of people. I have used software that digests cell call detail records (tower location), cell phone location data (GPS, Wi-Fi networks, Bluetooth association with nearby devices), vehicle infotainment systems, social media activity, text messages, etc., builds a pattern of someone's life for a specified time period, and creates a nexus to others who are in their contact lists or who were with them at certain times and locations. This is great when it is used by ethical people to solve crimes, but it can also be used by the unethical to build profiles on people and groups that are in opposition to the system we find ourselves living under (that's you and me).
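To make the idea concrete, here is a toy sketch of that kind of data fusion. The record format and names are entirely hypothetical (real forensic tools are far more sophisticated and pull from many more sources); this only illustrates the basic technique of merging timestamped location records into a timeline and flagging co-located contacts:

```python
from datetime import datetime, timedelta

# Hypothetical records: (person, timestamp, location), normalized from
# different sources (cell towers, GPS, Wi-Fi associations) into one stream.
records = [
    ("subject", datetime(2024, 1, 5, 9, 0),   "tower_17"),
    ("subject", datetime(2024, 1, 5, 12, 30), "cafe_wifi"),
    ("alice",   datetime(2024, 1, 5, 12, 35), "cafe_wifi"),
    ("bob",     datetime(2024, 1, 6, 8, 0),   "tower_17"),
]

def pattern_of_life(records, person):
    """Chronological timeline of one person's locations across all sources."""
    return sorted((t, loc) for p, t, loc in records if p == person)

def co_located(records, person, window=timedelta(minutes=15)):
    """People observed at the same location within `window` of the subject."""
    timeline = pattern_of_life(records, person)
    contacts = set()
    for p, t, loc in records:
        if p == person:
            continue
        for t2, loc2 in timeline:
            if loc == loc2 and abs(t - t2) <= window:
                contacts.add(p)
    return contacts
```

Here `co_located(records, "subject")` returns `{"alice"}`: alice hit the same Wi-Fi network five minutes after the subject, while bob was at the same tower but nearly a day later, so he falls outside the window. Scale the same joins up to millions of records and you have the nexus-building described above.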
For instance, this video by Greg Reese is an example of how Israel is allegedly using AI to build target lists that have caused the deaths of thousands of innocents:
https://gregreese.substack.com/p/ai-deciding-who-to-kill-for-israel?r=peas8
The FBI used information provided by banks, airlines, and cell phone companies to build a giant database to go after people who were in Washington, D.C. around January 6th, 2021. That is unethical and illegal. I'm sure they used AI to process the overwhelming amount of data. I'm sure the same type of AI was used in the targeting that led to the deaths of 10 civilians, including 7 children, in the drone strike that followed the Abbey Gate suicide bombing in Afghanistan during the military withdrawal in August of 2021.
Good v. Evil. Same old story repeating.
Thanks for working to alleviate the fear, which tends to be more contagious than the Love vibes.
Stay in the Love Vibration!!
Positive vibes out.
AI is lipstick on an old pig. Garbage in, garbage out (GIGO).