
American View: How I Learned to Stop Worrying and (Just Barely) Accept ML Technology


I sat down to write this week’s column Sunday afternoon, less than a day after returning from this year’s Security Human Risk Summit in Las Vegas. Despite bringing home a notebook full of cool program ideas, staring at yet another blank new Microsoft Word document left me exasperated. After twelve years of crafting columns for the American View by-line, I still find it daunting trying to come up with a timely, entertaining, useful, and new topic every week… especially when I’ve got a backlog of laundry to do, groceries to put up, and appointments to schedule for the new workweek sitting just outside my office door. Ugggggh ... 


Fortunately, I remembered that I’d emailed myself a decent column idea from the summit. After an hour of clearing out backlogged new messages, I found it. As I’d suspected, it was a great premise drawn from two lectures I’d sat in on about the potential uses (and limitations) of machine learning platforms in our mission space. Despite taking opposite tacks at the outset, both speakers eventually came to roughly the same conclusion. 

 

First, Kerry Tomlinson from Ampere News delivered a great cautionary segment titled How ChatGPT can both help and hurt cybersecurity and awareness programs. She deftly explored the inherent weaknesses of a technology that’s completely devoid of self-awareness and prone to dangerous hallucinations. She walked us through specific prompts she’d entered and showed us how the tool’s outputs had consistently contradicted its own premises … when it wasn’t outright lying. 


A few lectures and coffee breaks later, Horatiu Petrescu of Aura Information Security gave an upbeat presentation about his own research titled Combining ChatGPT and Fogg Behaviour Model to Design Your Program. I was quite sceptical of Horatiu’s conclusions until we got to the Q&A session; insightful queries from the audience allowed Horatiu to clarify that his advice to let ML tools “do the heavy lifting” for one’s program content goals was meant in terms of generating ideas, not finished products. Supplement and stimulant, not replacement.


This position synched up well with Kerry’s: ML tools, she warned, are great predictors of text based on the content models they’ve studied, but the tools themselves have no awareness of what they’re “saying.” They don’t understand anything; they only mimic the styles and phrasing of what others have written, based on whatever libraries the tools were trained on. Use ML tools with extreme caution, she said. Triple-check everything they give you … because an ML tool’s output could well be complete dreck. 

Cow manure can, at the very least, be useful as fertilizer. That’s what inspired the conclusion we’re heading for.

Based on these analyses, you’d be justified in assuming that security awareness practitioners like us would treat ML tools – like ChatGPT – as if they were radioactive. Counting on an amoral and irresponsible program to generate content sounds like suicide for one’s personal and institutional credibility. That’s how I’ve been viewing ML tools ever since they first got popular. 


Along those lines, my pal Fran from Candour Agency in Norwich and I were chatting before the summit began about seemingly dystopian use cases for pseudo-AI and ML technologies. A few days before I left for Vegas, she sent me this great snippet in the hopes it might inspire an American View piece: 
“The consensus for an uptake in AI tech as a whole, holds a generally negative sentiment. Just 19% of financiers responding to Advanced’s survey felt that the workplace would be positively transformed by robots and AI-based technology, despite 40% also stating their organisation’s leadership is currently prioritising technology investment, linking to the fears held by many regarding job security in the world of ever-developing automation and AI adoption.”


Fran’s quote activated my confirmation bias like a full-body stomp on a land mine. I’ve stolidly been in the camp that opposes the use of ML tools as any sort of replacement for human creative labour. Being advised that most of the tech world might feel as wary about ML tech as I do was gratifying. Emotionally, it suggested that I was right. Intellectually, though, that line about “leadership” looking for ways to jump on the Hot New Tech™ got me thinking about how we might – if so directed – find ways to get some utility out of such tools. 
As both Kerry and Horatiu warned us at the Summit, ML tools don’t know what they’re writing.

 

They have a dangerous tendency to confidently declare “truths” that are the exact opposite of published analysis. They also create wholly fabricated quotes, citations, and “evidence” to justify their nonsense positions. As such, nothing coming out of an ML tool can be trusted to be correct. Everything that an ML tool generates must be treated with extreme scepticism. 

Pro Tip: Enter every ML output review asking “You seriously expect me to believe that?”

From a practical standpoint, that means that everything coming out of an ML tool must be cross-checked for balderdash by a knowledgeable human. How exactly is a technology that can’t be trusted supposed to replace or even augment a human worker, then? 


I think Kerry and Horatiu had the right of it when they recommended that ML tools be used specifically for overcoming writer’s block. Just because the darned things might be talking bollocks doesn’t mean that they don’t add some value … One of the greatest irritations I face every weekend is finding that obscure spark of inspiration that gets me off the blank new Word document and into a story hook that I can actually write. 


Fair’s fair: I can imagine ChatGPT (or whatever comes next in the ML field) being useful for generating bloody stupid statements of fake “facts” that I can tear apart and correct. Even when the output is blatantly, insultingly wrong, the outrage of being bot-splained to will probably prove more than sufficient motivation to get off high-centre and start typing something … That on its own will definitely be worth the extra labour, since it’ll save me anywhere from thirty to ninety minutes each week of time spent staring blankly at a monitor. 


Not ideal, sure, but useful. What more can you really ask for from an immature new technology? 
