The key theme from the 2023 Queensland Futures Institute’s Forum – Leading the Adoption of AI – was the broad opportunity to leverage AI tools across many use cases to support innovation and productivity. Organisations should think strategically about the best applications to drive value, but must also consider specific challenges, particularly around culture, data and information use.
The conversation also highlighted the need to maintain a human-centric view of the development and implementation of the technology, both in terms of the culture in which it is used and the impacts it may have in disrupting industries or supporting education outcomes.
GREG BROWN, Practice Manager, Antares Solutions
JACINDA EULER, Principal, Brisbane Girls Grammar School
PETER JARRETT, General Manager, Business Aspect
SUMMARY OF PANEL COMMENTS
- The adoption of AI will create cultural and societal shifts in organisations, schools and communities, as it disrupts industries, supports the innovation and productivity of organisations, and supports education outcomes.
- Successfully implementing the right AI model will differ between organisations, depending on use cases, culture and risk management around data security.
- Effectively implementing these tools, as well as the necessary regulation and governance, will require a human-centric approach which avoids impeding innovation.
MODERATOR SUMMARY
Wayne Gerard
- There are many uses and benefits of AI - from better interpreting medical imagery to identify cancer cells, to reducing the time it takes to start and produce reports, to building websites. These opportunities must be considered carefully before we implement AI within our organisations, economy and society.
- We must be thoughtful and progressive around the use of AI, because the winners in the global economy will be those who understand the best ways to use it right now. Like every good business practice, this will require thoughtful planning and execution, and consistent iteration – and fortunately, Queensland is home to many businesses which can achieve this.
COMMENTS FROM THE PANEL
Prof Marek Kowalkiewicz
- I am a Professor and Chair in Digital Economy at QUT, with industry experience at SAP in Silicon Valley, Singapore and Australia, as well as at Microsoft Research in Beijing. AI has been a focus of my work for about 20 years, but the field has changed dramatically in the last 12 months: we are now talking about generative AI and thinking about its impact on businesses.
- As a university employee, I have become massively more efficient by using algorithms to assist on work projects. As an educator and Executive MBA teacher, I encourage the use of AI and generative AI in students’ work. Through these uses, it is clear that we must continuously consider the ethical challenges of artificial intelligence.
Jacinda Euler
- As the Principal of Brisbane Girls Grammar School, with over 1,500 secondary students, it is of great interest to consider the potential impact of AI on education and schools. Schools are a microcosm of broader societal trends, so they can provide a unique perspective on the opportunities around the use of AI.
- The greatest potential benefit is the personalisation of learning in classrooms, at a scale which is not currently achievable for all students across an entire class. We may see tools that assist learning, particularly for neurodiverse students.
- The greatest challenge we currently face is understanding which AI developments, tools or implementations are worth pursuing, so that resources can be devoted to those of the greatest benefit.
Peter Jarrett
- Business Aspect is a wholly owned subsidiary of Data#3, which advises customers on organisational change driven through technology.
- AI provides a strategic competitive advantage across three tiers: consumers, organisations, and then nation states.
- Firstly, we are seeing massive consumer uptake of AI, which will be seen through continued use of ChatGPT over the next few years, as new methods of interacting with AI emerge. This will include voice, vision and other form factors that drive demand and push into enterprise use.
- For organisations, we will see Bring-Your-Own implementation models to facilitate the use of AI, and are already seeing demand from boardrooms and management for this. We are also seeing vendors of existing technology solutions implementing AI into organisations’ technology stacks. Navigating enterprise use of AI will therefore require shifts in business policy, as management considers budget, talent and other constraints.
- On a nation state level, we will potentially see huge shifts over the next three to five years, including discussions around supply chain and chip manufacturing, cross-border data flows, copyright laws, ethics and ESG. The implications of these shifts will flow through to businesses.
- There are a number of key elements to consider around these shifts going forward. Firstly, as with other technological developments, the first wave is not the last. Secondly, there will always be ‘loser’ developments; for AI, this may revolve around the risk of large language models being trained on misinformation, and we need to ensure adequate controls are implemented to mitigate this risk. Thirdly, given the fast rate of development of AI, we will need to learn how to undertake policy, strategy, implementation and change management simultaneously.
Wayne Gerard
- QIC recently undertook a deep dive into AI to better understand the global investment landscape. There are currently hundreds of companies producing products and services around AI, so there is a clear need for organisations to ensure they are able to discern which of these is the most beneficial and relevant. Alternatively, organisations may implement in-house solutions or acquire new business areas to procure this technology.
Greg Brown
- Antares Solutions helps customers understand and utilise AI, which is not a new technology, but rather an evolution. Although the success and general popularity of ChatGPT has highlighted various benefits and use cases, in reality we have seen similar leaps in technology previously.
- Queensland’s business leaders already have the opportunity to take advantage of this technology, as it is accessible in so many forms. However, while we may already be able to use it, we must ensure adequate controls, safeguards and measures are in place around its use, particularly in organisations. Once this is achieved, and the technology is implemented in organisations, it will become a significant value driver.
PANEL QUESTIONS
How is the adoption of AI going to impact our culture and our community?
Jacinda Euler
- Culture is probably one of the most important factors in defining a school community. It’s not just how we behave, but our attitude to learning and our willingness to adopt and introduce change. As part of this, we must be open and responsive to technological changes despite not fully understanding all of their implications.
- Institutions must embrace this change culture as they come to understand the broader benefits of AI and the best pathway forward. Bringing about this culture is the first step to implementation. Once institutions understand the purpose and uses of AI, they can then prioritise what will be required to move forward and implement change. This culture will also guide expectations around the use of the technology.
- It will be important to preserve human connection through this culture even once the technology is implemented.
What are some of the common challenges that you are seeing organisations and communities tackle and think about, and how are they progressing in working through some of those challenges?
Peter Jarrett
- There are five key challenge areas around AI which we must understand and address to take full advantage of the technology.
- Firstly, we must ensure data quality and availability, implementing the right strategies and governance controls to protect this foundation of AI.
- Secondly, we must retain a high availability of talent to define and build AI solutions, which can be achieved through organisational skill development and partnership with universities, research institutions and start-ups.
- Thirdly, we must implement ethical frameworks and guidelines around the use of AI.
- Fourthly, scalability and integration must be a key consideration in the implementation of AI. This may include assessing existing tech stacks to identify how AI can be supported and scaled out within it. This often takes the form of a pilot implementation project.
- Finally, we must develop financially sustainable business cases in order to invest in this technology. Implementing AI is not necessarily cheap, so running a pilot or prototype to prove its value is critical to supporting further investment within organisations.
How can organisations fully realise and track the benefits from the use of AI, while enabling teams to engage with this emerging technology safely and thoughtfully?
Greg Brown
- It is critical to understand how to implement the right safeguards and controls for each use case to deliver the greatest benefits and efficiency gains of AI. A lot of current research indicates that only around one in six or seven organisations currently using AI are measuring operational metrics such as FTE; those industry leaders are looking at why and where staff productivity increases can be delivered. This is relevant both for internal uses and for external efficiencies through service delivery to customers, and within memberships and partnerships. If this can be achieved in a supervised, monitored and controlled way, we can achieve real efficiency gains.
- We must continue to provide a human connection element to these interactions, and also maintain ethical boundaries through the use of AI in automation. This will see business focusing on connection, which will shape how AI is implemented to best support these interactions while still delivering these efficiencies.
- Although there are many broad use cases of the technology across businesses, in customer interactions and by staff today, it will be important to prioritise implementation in areas where these efficiencies will result in the highest value.
How will AI progress over the next couple of years and how will it impact our economy?
Will it result in job redundancies or enable people to perform their roles better?
Prof Marek Kowalkiewicz
- AI will empower workers and is not exclusively about achieving full automation, but rather, mindful automation that creates a better foundation to achieve more and improve the way we work.
- While we cannot predict the exact outcome of this technological development, we are seeing three behaviours (RACERS) which may dictate how AI impacts organisations and the economy moving forward.
- The first behaviour is revenue automation, where organisations automate their revenue generating activities, rather than just automating tasks and processes.
- The second behaviour is continuous evolution, which is changing the way organisations grow and evolve. This involves moving from future business planning as we currently know it to continuous experimentation with new business models through incremental changes, implementing the most successful changes more broadly.
- The final behaviour is relationship saturation, implementing technologies such as chatbots in sales or customer assistance. This elevates the traditional B2B or B2C business models to Business-to-AI-business/consumer.
- These behaviours will become more visible over the next 5-10 years as organisations adopt AI.
What practical regulation, governance or policy interventions might need to be put in place around AI?
Peter Jarrett
- Consumers’ adoption of the technology will outpace policy development. However, all levels of government are currently working out the right frameworks and guidelines needed to apply this technology.
- A key consideration in this process is to take a humanistic perspective on how the technology might support people throughout their work environment or within their organisational culture. This is the first step to uncovering where guardrails may be needed.
- Implementing controls without understanding this perspective risks overly burdensome regulation, ultimately impeding innovation.
Wayne Gerard
- A critical aspect of practically implementing new solutions is applying the right controls without impeding innovation, while still enabling people to experiment and share the lessons learned. Past technological developments, such as the uptake of the Software as a Service model, changed business and revenue models when they were implemented. AI will have a similar impact on business models and will require organisations to better understand how to practically adopt the technology to leverage its benefits.
What will we see around the adoption of AI in the next 6-12 months, or over a longer frame of reference of 3-5 years?
Jacinda Euler
- Compared to corporate organisations, schools are better placed to pause and proactively consider the greater benefits of AI and the potential restrictions which may need to be imposed. While there is potential for the technology to be leveraged to support learning and education outcomes, there is also potential for misuse. As such, it may be managed similarly to how social media use is governed in schools.
- If this is correctly managed, there is potential to create benefits in the classroom. As such, it will be important to consider the platform with the appropriate frame of reference for education, as the technology will be of great benefit and will be able to be adapted in this setting to mitigate the challenges discussed.
Greg Brown
- Maintaining the right frame of reference is critical. While business plans are often based on a 3-5 year outlook, technology implementation usually only has a 12-18 month view, given the high pace of change.
- There are many use cases of the technology today which will build momentum towards broader use cases in the future. Additionally, bringing innovative minds together by building a centre of excellence or enablement around AI to lead its implementation is critical to building this momentum. This requires a proactive approach and continued investment from organisations.
- Industry must maintain a strong understanding of how AI can most benefit them, while also keeping an agnostic perspective on the diverse range of broader applications of the technology, to ensure they are taking full advantage of the opportunities we will see in the future.
Prof Marek Kowalkiewicz
- AI will continue to disrupt the market in similar ways to previous technological developments. For example, the stock photography industry may be revolutionised in the next 6-12 months as generative AI is able to create images from prompts. Additionally, knowledge aggregators like Quora will be considerably disrupted by AI. These are examples of industries which will be significantly changed in the near future.
- Some AI will need to be trained in order to be relevant for specific industry applications. This may give rise to a new industry of generative AI optimisation, which teaches specific platforms what they need to be effectively used in specific settings.
- Overall, this will see considerable disruption in the next 6-18 months, significantly reshaping and creating entire industries.
Peter Jarrett
- The disruption caused by AI can be overwhelming from a human perspective. This will give rise to a strengthened need for authenticity, discernment, critical thinking and validity.
- Although we’ve seen significant recent developments in AI, the hype around the technology may give rise to some disappointment, as certain expected applications may not come to fruition.
- This ‘froth’ may last for 12-18 months, after which we will start to fully understand and implement genuine use cases and start to see interaction and development in the next generation of AI.
How can organisations support their staff to experiment with AI?
Greg Brown
- Organisations must create a safe environment to enable their people to experiment with developing tools and applications of AI. This needs to mitigate the risk of data breaches and provide focus to employees to ensure they are still doing their job.
- Once this controlled environment has been established, organisations will create a culture which fosters innovation and enables their staff to proactively learn how to use the technology to improve their productivity.
Jacinda Euler
- Organisations should be open to discussing and encouraging the use of AI where it is appropriate for their own context. For example, for educators it is a useful tool for bringing together what they are able to deliver now and what could be delivered in the future.
- Students will always be curious, unfettered by old-fashioned ideals and open to exploration. Schools can therefore leverage this to encourage and support the adoption of these technologies and sustain a culture of learning.
Prof Marek Kowalkiewicz
- Organisations may no longer be able to track the ‘shadow automation’ occurring due to their staff experimenting with these technologies themselves. This has become commonplace given the prevalence of public AI tools over the past 12 months.
- Because of this, organisations must create a culture to encourage and celebrate their staff trying to improve their performance – as those who have done this are innovative and curious.
- For a more considered implementation approach, organisations can adopt a ‘side car’ model, in which employees do their traditional job but also attempt the same tasks with AI. This may add workload in the short term but is a successful approach to careful adoption in the long term.
Given the high resource requirements of developing generative AI platforms, what do organisations need to know to implement the right one?
Peter Jarrett
- The recent Queensland AI Summit described the takers, shapers and makers of AI large language models. Given these can cost $10 billion to build, many of the organisations within Queensland will start by taking what is already available.
- Interestingly, there has been some debate around whether large language models are more US-centric, and therefore whether Australia should consider building its own large language model based on our own data sets to provide data sovereignty.
- The use of our own data represents the ‘shapers’, or those organisations who take existing models and build them up or refine them with their own data sources before implementation (a rough sketch of this pattern follows below). This is a scenario we may see become common, particularly for financial institutions.
- Makers are those organisations leading the development of models, such as OpenAI, which are making the large investments in development. These organisations are likely already developing future applications of these concepts.
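As a very rough illustration of the ‘shaper’ pattern, here is a sketch of my own (not anything presented at the forum) using the open-source Hugging Face transformers and datasets libraries to refine a small open model on a hypothetical in-house text file; the base model, file name and training settings are assumptions for illustration only.

```python
# Illustrative 'shaper' sketch: refine an existing open model on an
# organisation's own documents. The model choice, file path and
# hyperparameters are placeholders, not recommendations from the panel.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

BASE_MODEL = "gpt2"  # small open model used as a stand-in for a 'taken' model
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical in-house corpus: plain text, one passage per line.
data = load_dataset("text", data_files={"train": "internal_docs.txt"})

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True, max_length=512,
                    padding="max_length")
    enc["labels"] = enc["input_ids"].copy()  # causal LM objective: predict the next token
    return enc

train_set = data["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="shaped-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_set,
)
trainer.train()                     # the 'shaping' step
trainer.save_model("shaped-model")  # the refined model stays inside the organisation
```

In practice a ‘shaper’ would also evaluate the refined model and keep the training data within its own security boundary, which is the data sovereignty point raised above.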
How can industry groups collaborate effectively around AI to remain globally competitive?
Prof Marek Kowalkiewicz
- In the early days of enterprise systems, everyone was asking similar questions about how to build their own systems, which usually involved early in-house computers and developers.
- Over time, a small number of organisations started developing and selling these solutions at scale, eventually becoming significant players in the technology industry.
- The development of AI will likely see a similar shift in development, as it is relatively easy to build a large language model that works but extremely difficult and costly to develop models which work well.
- In the next few months, we are likely to see an emergence of new business models which see organisations sharing training data and development.
Peter Jarrett
- Generative AI works on the basis of probability, determining the most likely sequence of words in a sentence in response to a prompt (a toy sketch of this idea follows below). This means it does not always understand context.
- A lot of investment is required to tailor a model perfectly in-house, while there are many public tools which draw on far larger volumes of data than organisations can actually access or provide, so an outsourced solution may be suitable for many organisations.
- Deciding between an in-house or outsourced model depends on the use case, particularly when organisations must safeguard their data or intellectual property. Organisations must be careful to maintain control over certain aspects such as data security, but in some uses may benefit from using a public model.
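To make the probability point above concrete, here is a toy sketch of my own (with invented numbers rather than a real model) showing how a generative model scores candidate next words and emits the most likely continuation, with no built-in check that the continuation is factually appropriate for the context.

```python
# Toy next-word prediction: the 'model' is just a table of invented
# probabilities for one prompt. Real large language models learn these
# probabilities from vast text corpora, but the principle is the same:
# they pick statistically likely words, not verified facts.
next_word_probs = {
    ("the", "contract", "must", "be", "signed", "by"): {
        "the": 0.52,     # most statistically likely continuation
        "both": 0.31,
        "friday": 0.12,  # plausible, but may be wrong for this context
        "zebra": 0.05,
    }
}

def generate_next(prompt):
    """Return the highest-probability next word for a known prompt (greedy decoding)."""
    probs = next_word_probs[tuple(prompt)]
    return max(probs, key=probs.get)

print(generate_next(["the", "contract", "must", "be", "signed", "by"]))  # -> "the"
```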
AUDIENCE QUESTIONS
What’s the role of industry and regulation in protecting intellectual property?
Greg Brown
- Industry and regulation will play a significant role, given that authenticating the source of information becomes an issue when it is sourced online. This gives rise to the need for a supervised model which implements adequate safeguards and policies to guard against publishing unverified, unethical or questionable content.
- This is highly relevant for creative content, particularly around breaching copyright.
- It is critical for industry and regulation to play a role in ensuring information is only published if it is verified and properly referenced, in order to protect intellectual property and avoid potential legal consequences.
Peter Jarrett
- Industry controls are starting to emerge around creative content, such as watermarking in Adobe AI-generated content. This may eventually support monetisation around creative work.
- This was similarly considered in the context of NFTs, with micropayments being used to support creative work.
- Despite the belief that imitation is the greatest form of flattery in art, there is an increasing need for controls to be implemented. Regulators and governments need to catch up and implement these controls in order to uphold copyright laws.
- However, the economics and business models behind the development and use of these tools, as well as societal expectations, will likely also contribute to ensuring fair use.
Jacinda Euler
- The key challenge around the use of these tools is teaching critical thinking. It has never been more important to understand the sources of information and to teach students the importance of credibility, audience awareness, accuracy and discernment.
- Additionally, understanding and addressing biases in content, and how it represents different cultures, genders, and races will remain important, whether or not this technology is utilised.
- The future of education is becoming more complex and will require placing increased emphasis on students’ awareness of context and responsibility.
How can we plan for a future where trust in technology needs continual reassessment, given risks around content degradation and the potential for hallucination in generative AI?
Prof Marek Kowalkiewicz
- Although we have become familiar with the authoritative results and objectivity of technology, generative AI should not be considered in this way, because large language models produce probabilistic, non-deterministic outputs (a brief illustration follows below). Even when a model is trained well and is accurate 80-90% of the time, hallucinations may still occur.
- Users of the technology need to be aware of this and organisations must determine the appropriate use cases for their staff to use these tools. For example, brainstorming, reviewing or refining text, or creating stock photography are applications which do not rely on the accuracy of outputs, so may be areas where this technology is most commonly used and potentially disruptive.
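As a brief illustration of that non-determinism, the sketch below (my own, with made-up weights, not anything from the panel) samples repeatedly from the same fixed distribution over candidate answers: the same ‘prompt’ can yield different answers on different runs, including an occasional wrong one, which is why accuracy-critical uses still need verification.

```python
import random

# Made-up distribution over candidate answers to one prompt. Sampling from it
# (rather than always returning the top answer) is what makes generative
# output stochastic: identical prompts can produce different text.
candidates = ["a correct answer", "a plausible but wrong answer", "an off-topic answer"]
weights = [0.85, 0.10, 0.05]  # roughly the '80-90% accurate' regime mentioned above

for run in range(1, 6):
    answer = random.choices(candidates, weights=weights, k=1)[0]
    print(f"run {run}: {answer}")  # results vary between runs and between executions
```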
Peter Jarrett
- Although there is significant potential for creative uses of AI, we must continue to be able to verify the outputs of these technologies if we are relying on their accuracy.