Here at FuturU, we recognise the transformative potential of AI in enhancing our services and supporting our stakeholders. AI can help us streamline processes, improve accuracy, and reduce turnaround times.
But, as a famous uncle of a certain superhero once said, “with great power comes great responsibility” (although, if you’re a comic fan like me, you’ll know the line actually appears as a narrator’s caption in the original comic rather than as Uncle Ben’s dialogue). No matter who said it, it’s very true, and especially so with AI.
It’s crucial to balance this innovation with compliance, adhering to regulatory guidance such as that provided by Ofqual, the Office for Artificial Intelligence, and the Department for Education. This ensures that our AI implementations are fair, valid and secure, and that we maintain public confidence by always keeping human decision-making at the heart of everything we do.
In this article, I’ll outline some of the challenges we’ve overcome with the help of AI, and our plans for the future.
Getting started with AI
Our first big win was applying AI to routine administrative tasks that were taking up valuable time. AI-powered automation tools have streamlined these processes, freeing up our team to focus on more strategic initiatives.
For example, as Head of Quality & Compliance at FuturU, and as someone with dyslexia, I use AI to summarise publicly available updates, policies, and other documents. This allows me to prioritise which ones need a deeper dive. I now use software to turn multiple documents into audio podcasts that discuss all viewpoints, which I listen to while running.
Compared with what we are doing now, though, these examples seem like very basic uses of AI. We have since expanded across all divisions into areas we could only have dreamt about when we first started this journey.
We have now set up dedicated task forces to explore and implement AI solutions, ensuring we stay ahead in this exciting field. Everyone has reviewed their daily processes to identify where AI can support and where it can’t. This has given each team a clear roadmap for developing their use of AI, and they are now testing its limits while learning as they go. Teams are discovering new ways AI can help every day, turning tasks that once took days into ones that take minutes, leaving them free to focus on the all-important human oversight and validation.
Integrating AI across our operations
While we don’t want to go into too much detail on our use cases right now, as they are still only tests, we are actively trialling AI across all our provisions, including our Social Care Learning, End-Point Assessment Organisation, Awarding Organisation, and our Delivery Centre. It is already transforming the way we carry out our operations, making them more efficient and innovative.
AI-driven insights are helping us cut timeframes from months and weeks down to days and hours. Our teams, senior leaders, and the task force are dreaming big and learning fast, all while keeping within regulatory requirements.
Our approach to embedding AI is strategic and inclusive. We aim to integrate AI into every facet of our operations. This involves continuous training for our staff to ensure they are proficient with AI tools, and fostering a culture of innovation and a growth mindset. As anyone who has used AI knows, it’s the prompts and examples you give it that make the difference. We are all learning just as much as the AI is.
Within our Social Care Learning we have been using AI for a while to help build scenarios that embed learning, alongside a wide variety of AI tools for image creation, animation and voiceovers. While using AI in course creation still requires Subject Matter Experts to check the accuracy of the content, it won’t be long before AI does most of the heavy lifting in content creation. We are also confident that learners will soon be able to access learning experiences completely unique to them.
AI is also gearing up to support our learners on our bespoke learning platform. It will shortly help learners identify and navigate learning pathways to achieve their objectives, as well as offering first-line support and guidance to all our users.
Our AI Toolkit
We actively use Copilot AI, and we have also adopted other AI tools in our day-to-day practices. As we continue to grow, we are always looking at new AI tools that can help us, and we also have access to our own AI model that we can utilise.
Unofficially, I’m working to an 80/20 rule: 80% is AI doing the heavy lifting, and 20% is the all-important human oversight, review, and validation. A human will always be present to write the prompts, add the documents for review, and give feedback to improve the output. That is why we tell our teams to treat AI like a human when speaking to it. This model is working well in my own AI use cases.
Recognising the limitations of AI
While AI offers numerous benefits, it is not without its limitations. It’s still early days for AI, and nothing can be taken for granted; a close eye is needed on everything going in and out. Here’s a handful of reasons why:
Potential bias: AI systems can inadvertently introduce bias based on the data they are trained on. It is crucial to ensure that this bias does not carry through into anything we integrate them with.
Human oversight and integrity: While AI can handle a significant portion of the work, human oversight is essential to review AI-generated outputs for fairness and accuracy during creation and afterwards. AI must be used to support, not replace, human judgement in marking and quality assurance. The validity of what we do depends on the reliability and accuracy of assessments, which should involve human experts.
Alignment with regulatory requirements: AI tools must be set up to adhere strictly to Ofqual’s General Conditions of Recognition, ensuring that we always meet their required standards and those of other regulators.
Data privacy and transparency: AI systems must ensure the confidentiality and security of data, including safeguarding assessment materials and personal information. The use of AI in our processes must be transparent to everyone, including students, educators, and employers. Clear communication about how AI is used, and about the role of human oversight, is essential to maintaining confidence.
The future of AI at FuturU: What’s next?
Looking ahead, we envision expanding our AI capabilities into all elements of our business. This depends on AI’s capabilities continuing to grow and on our confidence in what it produces. Our goal is to stay at the forefront of AI innovation, continuously improving our services and making a positive impact on the education and social care sectors.