Bonus Episode 5.5 - Will AI Take Your Job?
Maybe, but I am not that concerned. I am more concerned about it taking all of your money.
Artificial Intelligence is a topic about which endless newsletters, articles, books, and think-pieces can be written, have been written, will be written, and are likely in the process of being written (likely some written by an AI tool itself).[1] Trying to limit the topic to “investing and AI” doesn’t really shrink the corpus very much. One reason, of course, is that everyone wants to write about the topic du jour: “How AI is going to change everything!” – which in the investment/finance world is typically shorthand for getting fabulously wealthy with no risk. In the dot-com boom of the late 90s, the “internet” was going to change everything (for older readers, it was “the computer” that was going to change everything). Of course, in many senses, the advent of both the personal computer and the internet did change things, sometimes dramatically (and many people did get fabulously wealthy); but lots of things didn’t change. With respect to investing and economics, the most famous quote about this is from Nobel Laureate Robert Solow: “You can see the computer age everywhere but in the productivity statistics.” This has been termed the “productivity paradox” (or the “Solow paradox”) – i.e. the computer and the internet didn’t increase white-collar productivity as forecast (or perhaps “as threatened,” since that forecast frequently went hand-in-hand with warnings about white-collar layoffs and unemployment – i.e. fear rather than greed).
Now that is a fascinating topic for academic economists, for sure, and I’m sure the explanations for this phenomenon are complex and multi-factorial, but like any good opinion piece writer, I’ll focus on one potential explanation for the productivity paradox that resonates with me (or, maybe, fits my desired narrative). Then, I’ll draw analogies between that theory and another similar argument I want to advance! See, this writing thing isn’t that hard. I guess that’s why AI will soon replace us. My preferred theory is that some productivity gains have occurred, but more slowly than (and not as many as) forecast, because necessary organizational changes have been stymied (either intentionally or foolishly) by middle management and bureaucracy.[2] But regardless of exactly how important normal human deficiencies (like poor management, biased decision-making, inefficient bureaucracy, and overall aversion to change) are to the productivity paradox, I think that what we did (or did not) learn from the advent of the computer and the advent of the internet will be repeated (and still probably not learned) with the advent of AI. To wit, humans will continue to be humans, and technology has not been able to surpass that fact, nor will it, even if middle management eventually gets their AI girlfriends (by the way, I love how everyone is concerned that “the lonely men will get AI girlfriends”; I suppose everyone just presumes the women aren’t interested in AI boyfriends, so there is no concern about them dating AIs? You can watch Joe Rogan and Bernie Sanders worrying about that here – I haven’t watched the entire thing, but I understand they discuss AI a fair bit).
Musical Interlude:

Accordingly, if AI can’t protect us from ourselves (or humans from humanity), the most important part of investing in the age of AI will be avoiding the normal human frailties, which, to be fair, will take slightly different forms or be enhanced by new AI tools. But I am not going to waste this newsletter warning you not to FOMO into “new” AI companies which are going to change everything! See Tesla, Palantir, UPST, NVDA (although NVDA is more of a “buy the shovels” trade, which I appreciate a bit more). Nor am I going to warn you about the reverse – e.g. don’t sell all your stocks and embark on a wild bender to enjoy the last few years before the singularity, when you won’t need any retirement assets because either (i) Bostrom’s concern of being turned into paper clips has come to pass or (ii) Kurzweil’s dream of a utopian period of abundance is upon us. But my idea to write an AI bonus newsletter originated with a Christmas letter from an English professor-friend of mine, with a lengthy musing on the effect of AI on her and her students. And I thought – well, jeez – concerns about AI are really getting pervasive if they are starting to sneak into people’s Christmas cards. She mentioned concerns about the effect of AI on her students (and English generally), but she was also able to find some optimism by recognizing that[3] – at least in the creative fields where Generative AI is threatening to replace all writers, artists, and musicians – we might see that truly individual, personal creative works become even more valuable as they stand out from the AI-generated ocean of content. No doubt, dear reader, you have astutely recognized that I don’t use AI to draft these newsletters (or I prompted the AI to liberally sprinkle it with grammatical and spelling errors). Is that lesson more broadly relevant to investing, though? – i.e. will future value be found only in uniquely human creativity and individual (artisanal?) creations, and so might we be better off eschewing index funds and diving deep into the modern art world? Definitely not. Although Hunter Biden seemed to make some serious money as a newcomer to the art world (and, hmm… maybe if Meghan wants to get out that old easel?!)[4]
Joking aside, the index fund investor may not need to dive into modern art, but they should certainly not ignore the financial risks posed by AI capabilities. Specifically, they should familiarize themselves with how Generative AI tools will revolutionize financial and investing scams through content creation, personalization, and individual targeting. After all, financial scams and frauds are huge business, and, most importantly, they are frequently devastating to people’s savings, investment portfolios, or even (in the worst cases) their entire retirement…whether that is because they invested in (or invested with) a fraud (e.g. Bernie Madoff), or because they have actually been scammed out of their life savings through any variety of sophisticated social engineering approaches (e.g. intercepted wire transfers, romance scams, text messages about overdue bills).
So to back up, the AI tools making the most waves recently are LLMs (large language models) and other similar content generation models.[5] These AI tools ingest enormous quantities of writing (or music, or art, or photos) and “create” new versions from those compilations. Now that is a wildly simplified explanation, which I am using because my general point is merely that more and better content is now easier and cheaper to create. This has reverberated in many industries (particularly the creative ones, as you know), with the worst of this AI-created content referred to as “slop” (Last Week Tonight with John Oliver just had a great episode on this!).
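If you want a concrete feel for that “ingest and generate” idea, here is a toy sketch in Python – a word-level bigram model, which is to a real LLM roughly what a paper airplane is to a 747. The corpus is made up, and the only point is the shape of the loop: count what follows what, then sample.

```python
import random
from collections import defaultdict

# Toy "ingest and generate": count which word tends to follow which in a
# (tiny, made-up) corpus, then sample new text from those counts. Real LLMs
# are vastly more sophisticated, but the basic loop is the same idea.
corpus = (
    "the market will change everything and the market will reward "
    "the patient investor and punish the greedy investor"
).split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)  # "ingest" the training text

word = random.choice(corpus)  # pick a starting word
output = [word]
for _ in range(12):
    candidates = next_words.get(word)
    if not candidates:  # dead end: no observed continuation
        break
    word = random.choice(candidates)  # sample a plausible next word
    output.append(word)

print(" ".join(output))
```

That’s it – scale the corpus up to “most of the internet” and the counting up to trillions of parameters, and you have the (again, wildly simplified) gist.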
My focus in this newsletter is on how these AI tools are going to make frauds and scams so much easier to create (and to fall for). The old standby email of a Nigerian prince needing money transferred to him (so that he can unlock his stolen wealth, which he would then share with you) was typically the same email to every recipient (so the scammer only got the absolute dumbest people to respond).[6] But, using AI tools, scammers will have the ability to send personalized emails at enormous scale; their AI tool can make countless tweaks to the email to either target the recipients more specifically or employ an elaborate series of A/B tests to see which messages work best (with the cost of compute the only constraint).[7] Similarly, if you click on something in that message (or text), you will arrive at a polished and relevant website (created by a separate AI tool) with the correct logos, images, and all the accoutrements of a legit website[8] (e.g. the Nigerian prince email will include a completely fabricated order from the “Nigerian Supreme Court” to show you he is entitled to his fortune and link to a “Leading Nigerian Newspaper” with details on the case!). Again, the compute cost is the only constraint – and that has been dropping very fast for AI content creation.
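How cheap is “dropping very fast”? Here is a back-of-envelope sketch; the prices are my own illustrative assumptions, not quotes from any actual vendor.

```python
# Back-of-envelope math on "the compute cost is the only constraint."
# Both numbers below are illustrative assumptions, not real vendor pricing.
price_per_million_tokens = 0.50   # assumed cost ($) for a cheap generation model
tokens_per_email = 500            # a short, personalized scam email

cost_per_email = price_per_million_tokens * tokens_per_email / 1_000_000
campaign_size = 1_000_000         # one million individually tailored emails

print(f"Cost per personalized email: ${cost_per_email:.6f}")
print(f"Cost for {campaign_size:,} emails: ${cost_per_email * campaign_size:,.2f}")
# => $0.000250 each, $250.00 for the whole million -- pocket change if even
#    a handful of recipients take the bait.
```

Even if my assumed price is off by an order of magnitude in either direction, the conclusion holds: a million personalized pitches costs less than a decent steak dinner.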
Of course, this content creation will go far beyond emails and will likely make more sophisticated scams even more difficult to discover – particularly property and investment scams. I can imagine websites, videos of CEOs appearing on CNBC and/or other “news shows,” fake bios of CFOs, videos of board-member presentations, and/or videos of actual “products” being churned out, each time more professional-looking and convincing to the potential investor. And this shouldn’t be a surprise, because these types of scams (particularly for property investors) have been around for a long time. It’s a huge part of Florida’s history[9] (including Charles Ponzi himself), and these scams still work on old people, apparently, since they continue to move to Florida. And, of course, some people will see through these scams (the “haters”), but the scammer doesn’t need to fool everyone, just enough investors with enough money to pay their monthly AWS bill. Even without AI assisting him, Trevor Milton was able to dupe thousands and thousands of investors and analysts into believing that his truck company, Nikola,[10] had a working prototype. His trick? Just roll it down a hill. A little creative camera work only, no AI needed! With AI tools able to create (or enhance) videos, the “proof” you see with your own eyes will be ever more misleading.
Similarly, Theranos defrauded thousands of people, including various “sophisticated” venture capital funds, by purporting to revolutionize blood testing. Again, they didn’t even claim they were using AI to help make these analyses; just wearing a black turtleneck and diligently copying Steve Jobs’ vibes worked to get Elizabeth Holmes onto the cover of Forbes! Imagine if Theranos had incorporated various AI subterfuges into their product (or claimed that an AI was how they were achieving their testing metrics). Actually, hold that thought – you don’t have to imagine it; you can see it today with Haemanthus, another blood-testing firm, which is raising money right now claiming that their AI tool will improve all medical testing. The founder of that company, you might wonder? Well, Billy Evans, the husband of Elizabeth Holmes (and, I presume, an unindicted co-conspirator in his wife’s fraud). He is very focused on explaining that this company is not Theranos 2.0 – so, yeah, what could go wrong? Also, who the fuck is investing in that company? Do they need a financial advisor? Tell ’em to call me.
But it’s relatively simple to avoid buying Theranos and Nikola – buy index funds,[11] don’t FOMO, and don’t dabble too much in alternatives (which I was able to avoid since I wasn’t wealthy enough to be invited to invest in Theranos; more to come in the next newsletter on whether alternatives (Private Equity, mainly) should be available to individuals in their 401(k)s – you can guess my answer already, I am sure). But religiously buying index funds won’t solve the problem of the frauds and scams which directly target you (or your parents or your dumbest relatives); namely, the emails, the texts, the calls, the social engineering. And rest assured, the explosion of this high-quality and personalized AI content will be coming for your dumbest relatives shortly (if it’s not there now). Even now, the “semi-personalized,” mass-marketed scams are of a higher quality with better targeting; a family member of mine (not dumb!) found out about that when they fell prey to a scam text about overdue E-ZPass fines (you might have seen these too – I know I have, probably because I still have an East Coast phone number). No huge loss (except pride). That scam worked because the family member (i) does use E-ZPass routinely and (ii) recently got a new credit card. Thus, they were ready to assume the old credit card attached to E-ZPass had expired and needed to be updated.
Better and cheaper content creation, when paired with better personalized targeting, will make this type of story even more prevalent going forward. And AI “bots” are already great at scraping data from the web (that’s how the AI models have been trained, generally speaking), so the next step of automatically creating profiles of individual targets by combining the personal information available online (both illicit and public) is undoubtedly underway. If scammers can deploy a few different AI models to (i) scrape individualized data, (ii) analyze it and create individualized profiles, and then (iii) deploy customized AI-generated content to reach out to these individuals – well, I think you can see the danger. Conceptually it is as simple as compiling data from public breaches: e.g. if “jeanlegrosbill1950@hotmail” was exposed in a breach from Company X, Hospital Y, and Public Utility Z, then an AI will quickly surmise that the owner is likely an elderly Canadian male, living in Vancouver (but probably from Montreal), who speaks French and is a huge hockey fan. And as the cost of compute goes down, cross-references will get easier (and consequences worse) as the AI bot scrapes local newspapers to (i) identify his real name and (ii) determine from obituaries that “Jean” just lost his wife, who just happened to be an heiress to a family fortune. Cross-references in the United States could target people who have lots of disposable income by checking public donation records to political parties.[12]
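On the (slightly) brighter side, you can at least see your own exposure to this kind of cross-referencing: the HaveIBeenPwned service offers an API for checking which publicly known breaches an email address appears in. A minimal sketch, assuming you have an API key from haveibeenpwned.com – the key and email below are placeholders.

```python
import requests  # pip install requests

# Check which known public breaches an email address appears in, via the
# HaveIBeenPwned v3 API. The API key and email below are placeholders.
API_KEY = "your-hibp-api-key"  # requires a (paid) key from haveibeenpwned.com
EMAIL = "jeanlegrosbill1950@hotmail.com"  # the hypothetical target from above

resp = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{EMAIL}",
    headers={"hibp-api-key": API_KEY, "user-agent": "breach-check-example"},
    timeout=10,
)
if resp.status_code == 404:
    print("No known breaches for this address.")
else:
    resp.raise_for_status()
    breaches = [b["Name"] for b in resp.json()]
    print(f"Found in {len(breaches)} breach(es): {', '.join(breaches)}")
    # Every breach name is a data point a scammer could cross-reference --
    # exactly the profiling described above. Know your own exposure.
```

Each breach on that list is one of the “Company X, Hospital Y, Public Utility Z” data points from the hypothetical above.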
And yes – if you are thinking that this sounds just like what huge marketing departments have been doing already, true! But of course, if big corporations have better marketing copy, better targeting, and better return on their marketing dollar, the bank or investment accounts of their targeted customers aren’t (typically) going to be wiped out. Sure, someone ends up with a car they don’t need, a cruise they shouldn’t go on, beard styling gel that won’t help them with online dating anyway, or a special tour package to see all the Montreal Canadiens games west of the Rockies. But it is something! Scammers who will soon be deploying this sort of AI have nothing to offer, except, of course, the promise of a relationship with a young woman who is, coincidentally, also from Montreal and a huge hockey fan, and would love to visit Vancouver. But, you know, money is a little tight, so maybe “Jean” could send her some money for the plane ticket and hotel? Of course, as the content generation abilities of AI make impersonating visuals/audio much easier, particularly over a WhatsApp call or a Zoom video conference, “Jean” might actually chat with this “woman” on Zoom before the flight![13] If you are thinking that these scams are already out there (perhaps because you read this NYT article; or this one, or this one), you are right – but it’s not clear how much AI is being deployed currently. Right now, I suspect it has been limited because they’ve needed humans (convincing and heartless) to run a lot of the scam; but once you can let AI take the reins of impersonating people over the phone/video call, the scale of these types of frauds grows tremendously.
A different family member (also not dumb!) fell prey to scammers impersonating an airline (going so far as to make a credit card payment over the phone). That family member was subjected to significant consternation, but crucially they ended up (i) not losing any money and (ii) not missing their international flight home. Why? Because when the scammers contacted the airline trying to impersonate the family member (after successfully impersonating the airline to the family member), the airline became suspicious due to ethnic stereotypes – a perceived mismatch between the scammer’s accent, the traveler’s surname, and the scheduled flights.[14] If the scammers had access to a Generative AI tool which could seamlessly replicate “appropriate accents” for their targets, it is likely the airline would never have been alerted, and the entire scam might not have been discovered until the family member showed up at the airport to find no flight, despite a $3,000 charge on their credit card.
So perhaps this has all been a little scary. (Or you have read all this thinking – “ok, ok, but those people are idiots and I am smart! I would never fall for this nonsense!” – and that is even more terrifying, for your heirs at least.) But I am not trying to fear-monger (or not more than needed to be useful), and there are many ways to minimize your susceptibility (or the susceptibility of your parents and relatives). First, and most importantly, understand (and tell them about) the increased risk of AI-enhanced scams and frauds. Second, make sure that you (and they) always try to confirm financial instructions and decisions in multiple ways (e.g. go directly to the bank’s official website, or call back at a publicly listed phone number). Third, make sure that significant expenditures are routed, to the extent possible, through trusted intermediaries.[15] This is perhaps the most important thing, as it will allow you to off-load responsibility for detecting sophisticated frauds and scams to others whose (i) interests are aligned with yours and, crucially, (ii) who have the ability to develop mechanisms to stay ahead of scammers and frauds. You are probably already using credit cards which have fraud protection (like those texts which say, “are you traveling now and did you authorize a purchase in Italy for $225, press 1 for yes and 2 for no”), but if you need to wire a substantial payment, perhaps go into the bank in person, explain the situation, and show them the wire request. If you do everything on your own (maybe because you think crypto is the next big thing), then you might end up on your own! Attorneys, financial advisors, and others in the financial industry frequently have an explicit duty to their clients to avoid these sorts of scams (and insurance for when/if they fail). But even if they don’t, all of the banks and attorneys and financial advisors want your business (and you need to still have all your money to be good business for them 😊), and they also don’t want to be written up in the NYT for having facilitated some large-scale fraud.[16]
Using trusted intermediaries doesn’t eliminate risk, obviously (again, see Bernie Madoff), and many Ponzi schemes, in fact, depend on your overreliance on friends and family and their recommendations. But, nevertheless, in an attempt to find a slightly optimistic slant to the fact that we need to be more skeptical of areas where advanced AI tools can easily dupe us – perhaps we will see more face-to-face interactions start to return, thus improving our society by forcing us to recognize each other’s humanity? (No, no, that’s clearly ridiculous!) But I will try to do more things in person or with people for significant expenditures and purchases – i.e. banking with and signing documents at the local credit union, meeting artisans and artists in person, maybe starting to use travel agents again as I get older? It should go without saying, but don’t “date” people who you have never met in person! If you want an AI girlfriend, at least be intentional about it and just pay the monthly fee upfront. I mean, I assume it will be a subscription-based model, so the company can take away your AI girlfriend as soon as you lose your job (or even if just your credit score declines). Perhaps you will be able to substitute a cheaper model? You know, you’ll get the AI girlfriend with ads,[17] so she suggests particular restaurants you should order from for date night, online classes you can take to improve your job prospects (if they exist?), plus hair plugs and botox injections for your next interview (it will be all buy now, pay later with Klarna, of course). Best case scenario, I suppose they give you a few months of your current AI girlfriend (at a discount), just like the cable company does whenever you call up and threaten to cancel? So it’s a Brave New World, I guess, where the “ruling oligarchy[’s]…lust for power can be just as completely satisfied by suggesting people into loving their servitude, as by flogging and kicking them into obedience.”[18] But there I go again with the pessimism and dystopian fears! Maybe for a good optimistic finale we should go to a musical coda.
Musical Coda:
You can usually turn to Paul Simon (or, if in real trouble, Simon and Garfunkel) for an uplifting or upbeat song. But my selection is from Paul Simon’s best album, Graceland, and, although shocking to me (apparently because I am old now), it’s not the most popular “Boy in the Bubble” song on YouTube, as a song with the same name exists, sung by someone named Alec Benjamin. No link to that one because it is fucking terrible, despite Alec Benjamin’s Wikipedia page listing Paul Simon as an inspiration.


[1] I thought about adding a variety of links to various different think pieces, but realized that was a ridiculous task that added no value – i.e. a perfect example of a task I should outsource to an AI (or rather to you, and you to Google). But I did decide to test an AI’s ability to write 2-page summaries with the prompt: Explain in 2 pages why AI content creation and sophisticated LLMs may lead to more financial fraud and higher quality scams and what people can do to protect themselves against these scams. If you aren’t familiar with how well these perform, you can read the Microsoft Copilot one here; note that I created this only after I wrote my bonus newsletter 😊!
[2] Am I biased because I sat through too many pointless meetings at a “leading tech company”? Surely not – how dare you suggest that!
[3] By the way, I am using an em-dash here purposefully, although apparently it is a hallmark of AI writing, in that the LLMs always use the em-dash correctly and individuals rarely spend the time to try and get the correct dash 😊!
[4] It was always surprising to me that there was not more focus by Republicans on Hunter Biden selling his “art” while his father was president. That was horrifying, and deeply corrupt. Personally, I would probably slot it in somewhere as slightly worse than a Trump family mobile phone release, but not nearly as bad as a Trump and Melania pair of crypto coins (and crypto dinner!).
[5] We don’t have what is called AGI – artificial general intelligence – which would be a computer that could do anything you ask at a level matching (or, if it were a superintelligence, surpassing) human ability. Computers can, in fact, outperform humans at lots of discrete tasks (chess being a famous one in the annals of AI research), but people don’t typically call those chess computers “AI.” But LLMs, because they answer in natural language (instead of chess notation), are certainly perceived to be closer to AGI. And now we have other generative AI tools which create art, videos, music, etc.
[6] Historically a tried-and-true method of scam artists, as it allowed for self-selection by the targets. Sprinkle your approaches with typos and poor grammar, because you want only the dumbest people to respond (since they are so much more susceptible to scams than the people who would be turned off by the typos and poor grammar). But now this might be less important, if the AI-enhanced scams can work on smarter and more conscientious people – scam emails targeting you, dear reader (a sophisticated, intelligent, erudite denizen of the internet), might not have any errors at all. Maybe they will flatter you by incorporating various literature references (instead of dumb sports metaphors or old Montreal Canadiens references)?
[7] I suspect that personalized/individualized emails will also have a higher likelihood of making it through spam filters (although there are lots of factors that go into spam filters, I know, and I am sure Google and others are aware of this risk).
[8] This is why you should avoid clicking on links. It is safest to go to large companies’ websites directly and then log in to take actions (e.g. a link in a text or email might appear to go to the legit website, but will have subtle distinctions that you won’t notice if the AI has created a good enough copy of the legit website).
[9] From whence comes the phrase “sell you some swampland in Florida.”
[10] Amazing that Milton’s company was literally using the first name of a famous inventor, since that inventor’s last name was already being used by a more successful EV company which actually was making cars. That always struck me as a huge red flag – although perhaps his plan was less a scam on the public and more an attempt to provoke Elon into buying Nikola before they had to actually produce anything? In any event, he is still wildly wealthy (and not in jail) due to a Trump pardon – which was definitely NOT related to his $1.8M donation to Trump’s campaign, or the fact that his attorney was Brad Bondi, the brother of the current Attorney General – and which was extensive enough to avoid restitution obligations of $168M.
[11] Although unfortunately you are still exposed to things like MSTR (a volatility machine) and TSLA, which are in several big index funds, despite their very, very strong meme-like stock tendencies.
[12] Interestingly, I guess that the economics already work for snail mail sent directly to property owners (names scraped from public records, obviously) asking them if they need to sell their property right away for cash! Especially since that has the hallmarks of an adverse selection problem [LINK] for the buyer – i.e. I’d be concerned that the only people who want to sell quickly for cash are ones who know about significant termite/flood/mold or environmental contamination issues. But I guess these purchasers have to try and weed those people out and prey on the truly, truly desperate.
[13] Has AI already passed the Turing Test? A somewhat debated topic, mainly because people keep trying to move the goalposts for the Turing Test (which is more of a thought experiment in any event, in my mind), but I think it is somewhat accepted that the big LLMs can “pass” it.
[14] The airline then contacted the family member to confirm/double-check the purported changes and was able to sort out the matter.
[15] Ideally you should figure out a mechanism to alert you (or your siblings), or a financial advisor, or a bank, or a trusted friend to any changes in relatives’ spending habits (though this can be difficult for obvious reasons).
[16] As you undoubtedly know, I am not shy about pointing out that, in fact, financial and investment advisors (or investment companies) are a pretty huge cause of fraud (as Madoff made abundantly clear, as did FTX, albeit more controversially). So there is no easy way to avoid all frauds – but interestingly enough, in both the Madoff case and the FTX case, many people ultimately lost substantial funds, but crucially, not all of their invested money. In the Madoff case, that was because of efforts to spread losses and seek recompense from the financial institutions that worked with Madoff and from early investors (as well as the decision not to give investors any credit for their fictitious account statements); in the FTX case, it was due to pursuing the deep-pocketed financial institutions involved and (of course) a bounce-back in certain asset prices.
[17] I hesitate to recommend this episode of Black Mirror, as it is quite well done and, accordingly, deeply, deeply disturbing. But just in case all this summer sun is making you yearn for some depressing TV (or the East Coast heat wave has you locked inside), you should check out Season 7, Episode 1, “Common People.” Not linking to it, but available on Netflix and Amazon.
[18] From a fascinating letter from Aldous Huxley to George Orwell, written after Huxley had read 1984, comparing their two imagined dystopian futures. Also – cool trivia fact (I just learned!): Huxley taught Orwell French at Eton. And I guess if my parents had sent me to Eton, instead of Robert E. Lee High School, perhaps I could have written my generation’s 1984 – instead of a finance newsletter.