FICTA 2024 brought together experts to discuss AI’s transformative potential and the challenges ahead. My keynote, “AI: the spearhead of progress, not quite there yet”, covered the current state of AI, its applications across various industries, and the ethical considerations crucial to its future. The conference underscored the importance of interdisciplinary collaboration and featured esteemed speakers who provided valuable insights into AI and technology.
Introduction
Last month, I had the distinct honour of being asked to deliver the keynote address at the prestigious Frontiers of Intelligent Computing: Theory and Applications (FICTA 2024) conference. While I couldn’t attend in person due to an ankle injury, I recorded a video as a stand-in to ensure I could still contribute to this significant event. I was invited to speak by Professor Preeti Patel of London Metropolitan University after her colleague, Associate Professor Qicheng Yu, attended my talk at CyberASAP Pathfinder in Birmingham (January 2024).
The theme of my presentation was “AI: the spearhead of progress, not quite there yet.” As Head of Technology and Engineering at Cyber Tzar, I have been deeply involved in the intersection of AI and cybersecurity, and I was thrilled to share my insights on the current state and future potential of AI technologies.
Video on Loom: “AI, the spearhead of progress. Not there yet.”
Transcript of “AI, the spearhead of progress. Not there yet.”
Good morning, esteemed professors, students, and AI enthusiasts.
It’s an honour to be asked to speak at London Metropolitan University’s AI and Advanced Technologies Conference. Today, I’m excited to discuss a topic that lies at the heart of modern innovation and future possibilities: AI, the spearhead of progress. Not there yet.
AI has been the subject of intense debate, admiration, and speculation. On one hand, we have Marc Andreessen, a renowned tech entrepreneur, who refers to AI as the spearhead of progress in his Techno-Optimist Manifesto. On the other hand, Ray Kurzweil, a leading futurist, reminds us in his interview with Joe Rogan that we are not there yet. These two perspectives encapsulate the dual nature of AI’s journey: its incredible potential, and the challenges that lie ahead.
To understand the present and future of AI, we must look at its past. AI’s journey began in the 1950s with pioneers like Alan Turing, who posed the question: can machines think? The creation of the first neural network models and the Dartmouth workshop of 1956 marked the birth of AI as a field of study. Over the decades, AI has seen periods of intense research activity, known as AI summers, and periods of reduced funding and interest, known as AI winters.
In the 1980s, the advent of expert systems brought AI into commercial applications, though limitations soon became apparent. The late 1990s and early 2000s saw a resurgence of interest, with advancements in machine learning and the availability of large data sets. The development of deep learning in the 2010s, exemplified by breakthroughs such as AlphaGo’s victory over a human Go champion, has propelled AI into the mainstream.
This historical context highlights the cyclical nature of AI development and the continuous quest for innovation. Whilst at the Home Office, we used machine learning to understand threat data. AI promises to revolutionise every aspect of our lives, from healthcare and education to transportation and entertainment, and to redefine how we interact with the world.
Andreessen’s vision of AI as the spearhead of progress is not far-fetched. Consider AI’s role in medical diagnostics, where machine learning algorithms can detect some diseases earlier and, in certain cases, more accurately than human clinicians. In education, personalised learning systems adapt to each student’s needs. To delve deeper into healthcare, imagine an AI system that can analyse vast amounts of medical data in seconds, providing doctors with insights that were previously impossible. This could lead to earlier detection of diseases like cancer, potentially saving millions of lives. AI can also assist in drug discovery, significantly reducing the time it takes to bring new medications to market.
In education, AI-driven platforms are transforming the learning experience. Adaptive learning systems can tailor education to each student’s strengths and weaknesses, making learning more efficient and enjoyable. Students who struggle with certain subjects can receive personalised tutoring, while those who excel can be challenged with advanced material. Andreessen’s Techno-Optimist Manifesto reflects a deep-seated belief in technology’s ability to solve humanity’s greatest challenges. He argues that AI, combined with other advanced technologies, can lead us into a new era of prosperity.
This optimism is not unwarranted. History shows us that technological advancements have consistently driven economic growth, improved living standards, and extended human capabilities. Consider the Industrial Revolution, born in my hometown of Birmingham, which transformed economies and societies. Today, we stand on the brink of another revolution driven by AI. The societal benefits are immense, from increased productivity and efficiency to entirely new industries and job opportunities. AI can help us address some of the most pressing issues of our time, such as climate change, resource scarcity, and global health challenges.
We are witnessing AI breakthroughs that were once the stuff of science fiction. Autonomous vehicles, natural language processing, and AI-driven drug discovery are just a few examples. Companies like DeepMind, OpenAI, and Google are at the forefront of these innovations, pushing the boundaries of what AI can deliver. These advancements are not only impressive but also indicative of AI’s potential to be the spearhead of progress. Autonomous vehicles, for instance, have the potential to reduce traffic accidents, decrease congestion, and provide mobility to those who cannot drive. Natural language processing enables AI to understand and generate human language, facilitating more natural interactions between humans and machines. AI-driven drug discovery accelerates the development of new treatments, potentially saving countless lives.
In the financial sector, AI algorithms are used for fraud detection, analysing transaction patterns to identify suspicious activities. This has significantly reduced instances of fraud and saved billions of dollars. In agriculture, AI-powered drones and sensors monitor crop health, optimise irrigation, and predict yields, leading to more efficient and sustainable farming practices. In the retail industry, AI enhances customer experience through personalised recommendations and chatbots that provide instant customer support. In manufacturing, AI-driven predictive maintenance systems monitor machinery in real-time, predicting failures before they occur and reducing downtime.
These case studies illustrate AI’s transformative impact across various industries, driving efficiency and innovation. However, Ray Kurzweil’s perspective that we are not there yet serves as a crucial reminder that, despite the incredible strides we’ve made, AI is still in its infancy in many ways. Current AI systems are powerful but narrow in scope. They excel at specific tasks but lack the general intelligence and adaptability of the human mind. Furthermore, issues of bias, ethical considerations, and the need for massive amounts of data remain significant challenges.
Bias in AI is particularly pressing. AI systems learn from data, and if that data contains biases, the AI can perpetuate and even amplify those biases. This leads to unfair outcomes in areas such as hiring, lending, and law enforcement. Addressing bias requires careful consideration of the data used to train AI systems and ongoing monitoring to ensure fairness and equity. Ethical considerations are also paramount. As AI becomes more integrated into our lives, we must ensure that it is developed and used in ways that respect human rights and dignity. This includes protecting privacy, ensuring accountability, and preventing misuse. Developing ethical guidelines and frameworks is an essential part of responsible development.
As we advance AI, we must address ethical and societal concerns. The deployment of AI systems raises questions about privacy, job displacement, and the potential for misuse. It is imperative that we develop AI in a way that is transparent, accountable, and aligned with human values. The involvement of diverse stakeholders, including ethicists, policymakers, and the general public, is essential in shaping the future of AI, and that includes all of us. Privacy is a major concern, as AI systems often require access to large amounts of personal data. Ensuring that this data is collected, stored, and used responsibly is critical to maintaining public trust. Job displacement is another significant issue. While AI can create new job opportunities, it can also render certain roles obsolete. Preparing the workforce for this transition through education is crucial.
AI has often been depicted as a potential threat in popular culture. Movies like “Demon Seed” and “The Terminator,” and books like “I Have No Mouth, and I Must Scream,” have portrayed AI as dangerous and malevolent. While these stories are fiction, they reflect deep-seated fears about losing control over powerful technologies. These fears, though understandable, should not overshadow the potential benefits of AI. It is our responsibility to ensure that AI is developed and used in ways that are safe and beneficial. This involves rigorous testing, transparent development processes, and robust ethical guidelines. By addressing these concerns head-on, we can build public trust and ensure that AI is seen as a tool for good.
Popular culture has played a significant role in shaping public perception of AI. Films like “Ex Machina” and “Blade Runner” explore the ethical and existential implications of creating intelligent machines. While these narratives often emphasise the dangers of AI, they also highlight the importance of ethical considerations and the need for responsible development. These cultural depictions serve as a reminder of the potential consequences of unchecked technological advancement. They encourage us to reflect on the kind of future we want to create and the role that AI should play in it. By engaging with these stories, we can better understand public concerns and address them in meaningful ways.
The regulatory landscape for AI is evolving rapidly. Governments around the world are recognising the need for frameworks that ensure the responsible development and deployment of AI. In the US, the National Institute of Standards and Technology (NIST) has published an AI Risk Management Framework to provide guidance on the safe and ethical use of AI. Similarly, the European Union’s AI Act aims to establish a comprehensive regulatory framework for AI within the EU. In the UK, the Centre for Data Ethics and Innovation (CDEI) is leading efforts to develop guidelines and best practices for AI ethics. These initiatives are crucial for fostering a regulatory environment that encourages innovation while protecting individual rights and societal values. By developing clear regulations and standards, we can ensure that AI is used responsibly and ethically.
Ethical AI development is not just about compliance with regulations; it’s about embedding ethical values into the entire lifecycle of AI systems, from design to deployment. This involves diverse teams that bring different perspectives, continuous monitoring for biases and unintended consequences, and a commitment to transparency and accountability. It’s about ensuring that AI serves humanity. Explainability is a key component of ethical AI. AI systems, particularly those based on deep learning, can be complex and difficult to understand. Ensuring that AI decisions can be explained and understood by humans is crucial for accountability and trust. This means developing methods to make AI systems more transparent, so that users can understand how decisions are made.
Marc Andreessen emphasises that AI is ultimately a tool designed to help us, not an enemy to be feared. AI’s purpose is to augment human capabilities, making us more efficient and effective. It’s a partner in progress, enabling us to tackle complex problems that were previously insurmountable. AI is tireless, can analyse vast amounts of data rapidly, and provides insights that drive innovation. Andreessen’s vision is one where AI serves as an extension of human intelligence. It’s a vision of collaboration between humans and machines, where AI enhances our abilities rather than replacing us. This is not a future where humans are obsolete, but one where we are empowered to achieve more than we ever thought possible.
As Head of Technology and Engineering at Cyber Tzar, a cyber risk management platform, I have witnessed first-hand the transformative power of AI. At Cyber Tzar, we utilise AI to enhance cybersecurity and protect organisations from threats. Generative AI is a powerful tool in our arsenal. It helps us protect against domain takeover and hijacking by finding rogue domain names and testing domains for brand and content similarity. This proactive approach ensures that we can identify and mitigate threats before they cause harm. In addition, generative AI assists us in finding company data lost across the internet. It generates synthetic data for breach and data exfiltration research, helping us understand potential vulnerabilities and how they can be exploited. This enables us to stay ahead of cybercriminals and protect our clients’ sensitive information.
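To make the rogue-domain idea a little more concrete, the sketch below shows the general shape of one such check: flagging observed domains that sit within a small edit distance of a brand name. It is a minimal illustration rather than the actual Cyber Tzar implementation, and the sample domains and distance threshold are assumptions made purely for the example.

```python
# Purely illustrative: flag domains within a small edit distance of a brand name.
# A sketch of the general idea, not the Cyber Tzar implementation; the sample
# domains and the threshold below are hypothetical.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed with a rolling dynamic-programming row."""
    prev = list(range(len(b) + 1))
    for i, ch_a in enumerate(a, start=1):
        curr = [i]
        for j, ch_b in enumerate(b, start=1):
            cost = 0 if ch_a == ch_b else 1
            curr.append(min(prev[j] + 1,          # delete from a
                            curr[j - 1] + 1,      # insert into a
                            prev[j - 1] + cost))  # substitute
        prev = curr
    return prev[-1]


def flag_lookalikes(brand: str, domains: list[str], max_distance: int = 2) -> list[str]:
    """Return domains whose first label is a near miss for the brand name."""
    hits = []
    for domain in domains:
        label = domain.split(".")[0]
        distance = edit_distance(brand, label)
        if 0 < distance <= max_distance:  # skip exact matches, keep near misses
            hits.append(domain)
    return hits


if __name__ == "__main__":
    observed = ["cybertzar.com", "cyber-tzar.com", "cybertsar.com", "example.org"]
    print(flag_lookalikes("cybertzar", observed))
    # -> ['cyber-tzar.com', 'cybertsar.com']
```

In practice a string-distance test like this would sit alongside the brand and content similarity checks mentioned above, but even this toy version shows why typosquatted registrations are cheap to spot programmatically.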
Machine learning plays a crucial role in our operations as well. It helps us qualify vulnerabilities for potential impact and likelihood by analysing trends in threat intelligence data. This allows us to prioritise our efforts and focus on the most significant threats. By focusing on continuous threat exposure management, we address what is really affecting our clients and customers. Machine learning also identifies key insights, trends, and patterns in the data. This data-driven approach provides us with valuable insights that inform our strategies and enhance our ability to protect our clients. The work we do at Cyber Tzar exemplifies the power of human-AI interaction. AI is not replacing our cybersecurity experts; rather, it is enhancing their capabilities. Our experts use AI tools to analyse data more quickly and accurately, enabling them to make better-informed decisions. This synergy between human expertise and AI technology is what drives our success.
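As a rough illustration of combining likelihood and impact, the sketch below scores hypothetical findings using a CVSS base score as an impact proxy and simple threat-intelligence signals as a likelihood estimate. The fields, weights, and example findings are assumptions for the illustration, not our production model.

```python
# Illustrative only: rank findings by a crude likelihood-times-impact score.
# The fields, weights, and sample findings are assumptions for this sketch,
# not Cyber Tzar's actual model.
from dataclasses import dataclass


@dataclass
class Finding:
    name: str
    cvss_base: float        # 0-10 severity score, used as a proxy for impact
    exploit_observed: bool  # active exploitation seen in threat-intelligence feeds?
    internet_facing: bool   # is the affected asset exposed to the internet?


def priority(f: Finding) -> float:
    """Combine impact with a simple likelihood estimate derived from threat context."""
    likelihood = 0.2                     # baseline chance of exploitation
    if f.exploit_observed:
        likelihood += 0.5
    if f.internet_facing:
        likelihood += 0.3
    return f.cvss_base * likelihood


findings = [
    Finding("Outdated TLS configuration on mail gateway", 5.3, False, True),
    Finding("Remote code execution in public web app", 9.8, True, True),
    Finding("Local privilege escalation on build server", 7.8, False, False),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):5.2f}  {f.name}")
```

Sorting by a score of this kind is the essence of prioritisation: the internet-facing, actively exploited issue rises to the top even though other findings have respectable severity scores on paper.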
Anecdotally, I often find that much of what is presented as AI is actually built on top of other AI. One question I frequently ask people is: do you have your own large language model? Followed by: how many nodes, and what’s the run rate on that? These questions highlight the complexity and scale of modern AI systems, the infrastructure required to support them, and the reality that most AI development uses existing AI platforms to drive innovation. As we look to the future, it is clear that overcoming the challenges of AI requires collaboration. Governments, industries, and academia must work together to develop ethical guidelines, promote transparency, and ensure that AI benefits all of humanity. Public-private partnerships, investment in education, and continuous research are essential to navigating the complex landscape of AI.
It is crucial to engage the public in discussions about AI. Addressing fears and misconceptions through education and open dialogue can build trust and ensure that people understand the benefits and limitations of AI. By working together, we can create a future where AI is truly a force for good. The future of AI is incredibly promising. We can expect to see even more sophisticated AI systems capable of performing tasks that were once thought to be the exclusive domain of humans. Advances in natural language processing, computer vision, and robotics will continue to push the boundaries of what AI can achieve. We will also see AI playing a more significant role in addressing global challenges such as climate change, healthcare, and sustainable development. The integration of AI with other emerging technologies, such as quantum computing and the Internet of Things, will open up new possibilities and drive further innovation. As AI continues to evolve, it will become an even more integral part of our daily lives, enhancing our abilities and improving our quality of life.
As AI transforms various industries, it is essential to prepare the workforce for the changes ahead. Education and training programs must adapt to equip individuals with the skills needed to thrive in an AI-driven world. This includes not only technical skills but also critical thinking and creativity. Lifelong learning will become increasingly important as the pace of technological change accelerates. Educational institutions have a vital role to play in this transition. By incorporating AI and data science into curricula, universities can prepare students for the opportunities and challenges of the future. Collaboration between academia and industry will be crucial in developing relevant and up-to-date educational programs.
However, another point to consider is AI fatigue. With the rapid advancements and hype surrounding AI, there’s a real risk of people switching off. Is AI the answer to everything? No: AI is not a panacea. It’s a tool that, when used correctly, will drive significant progress. To combat AI fatigue, it’s important to set realistic expectations and communicate the potential and limitations of AI clearly. By focusing on practical applications and tangible benefits, we can maintain public interest and support for AI development. Public perception of AI is crucial to its successful adoption. Educating the public about AI and its implications can help dispel myths and build trust. AI literacy programmes, public awareness campaigns, and transparent communication are key. Engaging with communities and involving them in discussions about AI can also help address concerns and ensure that AI development aligns with societal values. By promoting AI literacy, we can empower individuals to make informed decisions about the use and impact of AI in their lives.
Interdisciplinary research is key to unlocking the full potential of AI. Collaboration between computer scientists, ethicists, sociologists, and specialists from other disciplines can provide a more comprehensive understanding of AI’s impact and help develop more holistic solutions. This interdisciplinary approach can lead to innovations that are not only technically advanced but also socially and ethically responsible. Universities and research institutions play a vital role in fostering interdisciplinary research by creating environments that encourage collaboration across disciplines. By doing so, we can drive the development of AI in ways that benefit all of society. AI development is a global effort, with countries around the world investing in AI research and development. International collaborations such as the Global Partnership on Artificial Intelligence (GPAI) aim to promote responsible AI development and address global challenges. These initiatives are crucial for sharing knowledge and resources. By working together on a global scale, we can ensure that AI development is inclusive and benefits people worldwide. International cooperation can also help address issues such as data privacy, security, and ethical considerations, helping to create a more harmonious society.
As we embrace the future, it is essential to maintain a balance between optimism and caution. AI holds tremendous potential to drive progress and improve our lives, but we must also address challenges and ethical considerations that come with it. By fostering a culture of innovation, responsibility, and collaboration, we can ensure that AI serves as a force for good. The future of AI is not predetermined; it is shaped by the choices we make today. By embracing the potential of AI and working together to address its challenges, we can create a future where AI enhances human capabilities and contributes to a better world.
In conclusion, AI represents the spearhead of progress, driving innovation and transforming our world. While we are not there yet, the journey is filled with promise. By addressing these challenges, embracing ethical considerations, and fostering collaboration, we can harness the power of AI to create a brighter future for all. Now is the time to take action. As students, researchers, and professionals, you have the power to shape the future of AI. Embrace the opportunities, address the challenges, and work together to create a world where AI is truly a force for good. Let us be the pioneers of this new age, driving progress and innovation for the benefit of humanity.
Thank you for your attention. It’s been very good to be able to speak to you today, and apologies for not being there in person. I hope you’ve enjoyed the talk. Goodbye.
Wayne Horkan, FICTA 2024, London Metropolitan University
Summary
The FICTA 2024 conference was a remarkable event, showcasing the latest advancements and discussions in AI and technology. My keynote address, “AI: the spearhead of progress, not quite there yet,” explored the dual nature of AI’s journey, its significant impacts across industries, and the ethical challenges that must be addressed. Esteemed speakers like Dr. Jesús Requena Carrión, Prof. Yonghong Peng, Clarke V Simmons, and Aninda Bose provided valuable insights into machine learning, AI innovation, power-grid modernisation, and scientific publishing. The conference underscored the importance of collaboration between academia, industry, and policymakers to harness AI’s potential responsibly and ethically. As we move forward, it’s crucial to engage in open dialogues, promote AI literacy, and develop frameworks that ensure AI benefits all of humanity.
References
Frontiers of Intelligent Computing: Theory and Applications (FICTA) and London Metropolitan University (LMU):
- Professor Preeti Patel
- Associate Professor Qicheng Yu
- Assistant Professor Maitreyee Dey
- https://www.londonmet.ac.uk/research/centres-groups-and-units/ai-and-data-science-research-group/events/12th-international-conference-on-frontiers-of-intelligent-computing
- https://ficta.co.uk/
- https://ficta.co.uk/keynote
- https://ficta.co.uk/assets/doc/shareddoc/FICTA%202024_Program%20Schedule_25%20May%202024.pdf
Speakers at FICTA 2024:
FICTA 2024 featured an impressive lineup of keynote speakers, each bringing unique perspectives and expertise:
Dr. Jesús Requena Carrión – https://www.robotics.qmul.ac.uk/people/jcarrin
Faculty of Science and Engineering, Queen Mary University of London, UK
Topic: Reconceptualizing Machine Learning from a Deployment-First Perspective
Dr. Carrión is Executive Vice Dean at Queen Mary School Hainan and Head of the Data Science and Engineering Research Group. His work focuses on statistical data analysis, machine learning, and biomedical engineering.
Prof. Yonghong Peng – https://www.mmu.ac.uk/staff/profile/professor-yonghong-peng
Professor of Artificial Intelligence, Manchester Metropolitan University (MMU), UK
Topic: Rapid Evolving Landscape of Frontier AI: Innovation and Cyber Risks
Professor Peng’s research spans AI, Data Science, Cyber Security, and Mathematical Modelling. He focuses on advancing AI technologies to improve the data science lifecycle and address emerging cyber risks.
Clarke V Simmons – https://www.linkedin.com/in/clarke-simmons-a215781/
Founder and CEO of Neuville Grid Data, London, UK
Topic: Modernisation of Power-Grid
Simmons is an experienced leader in the energy sector, having started and sold two data-driven businesses in the US. He holds patents in laser physics and electrical monitoring systems.
Aninda Bose – https://www.springer.com/us/authors-editors/aninda-bose/5084
Executive Editor, Springer Nature London (UK)
Topic: Nuances of Scientific Publishing
Bose oversees the publication of scientific books in applied sciences, computational intelligence, and energy. He has a strong background in publishing, art direction, and leadership.
Marc Andreessen’s Techno-Optimist Manifesto:
Ray Kurzweil, his ideas, and his interview with Joe Rogan:
- https://en.wikipedia.org/wiki/Ray_Kurzweil
- https://en.wikipedia.org/wiki/Accelerating_change
- https://en.wikipedia.org/wiki/Technological_singularity
- https://www.youtube.com/watch?v=w4vrOUau2iY&ab_channel=PowerfulJRE
Alan Turing:
Dartmouth Proposal in 1956:
AlphaGo’s victory over a human Go champion:
DeepMind:
OpenAI:
Google AI:
NIST AI Risk Management Framework:
- https://www.nist.gov/itl/ai-risk-management-framework
- https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
European Union’s AI Act:
- https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206
- https://en.wikipedia.org/wiki/Artificial_Intelligence_Act
Centre for Data Ethics and Innovation (CDEI):
Global Partnership on Artificial Intelligence (GPAI):
“Demon Seed” movie:
“The Terminator” movie:
“I Have No Mouth, and I Must Scream” book:
- https://www.goodreads.com/book/show/415459.I_Have_No_Mouth_and_I_Must_Scream
- https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream
“Ex Machina” movie:
“Blade Runner” movie:
Cyber Tzar: