MONASH DEEPNEURON
Law and Ethics.
DeepNeuron is constantly looking to expand its operations to further our mission:
“Empower students and researchers to use artificial intelligence (AI) and high performance computing (HPC) through ethical, hands-on research projects.”
— Monash DeepNeuron
What is our L&E team's purpose?
Ensuring ethical standards are upheld across current and prospective MDN projects
Educating MDN and the wider community on the ethical, legal and political issues surrounding AI
Pioneering novel research in AI ethics and producing meaningful output
Advocating for a safe and values-driven approach to AI development
Law and Ethics Blogs.
In recent years, we have seen the rapid growth of technology with Artificial Intelligence (AI) at its centre, along with its innovative and alarming implementations in society. On 5 September 2023, Monash DeepNeuron's Law and Ethics Committee collaborated with Monash University and the AiLECS Lab, which is at the forefront of developing AI to assist law enforcement and promote community safety, for the industry event Protecting the Future: Combatting the Online Surge of Child Exploitation.
Although Australia has begun to settle into its 'post COVID-19 era', levels of psychological distress and life satisfaction amongst Australians have yet to return to what they were pre-pandemic. Some have referred to this as the 'shadow mental health pandemic'.
Artificial Intelligence (AI) began stirring controversy in 2022 with the release of Midjourney, a text-to-image generation tool that allows millions of Discord users to create AI-generated art.
[Cover image shared by Twitter user Whyenn (@WhyTheEnn)]
ChatGPT’s launch in November 2022 elicited a wide range of reactions in academic circles. Educational institutions have moved quickly to establish a position on its use in assessments, and Monash University has been no exception.
[image from https://nexus-education.com/blog-posts/unleashing-the-power-of-chat-gpt-in-education/#]
From autonomous vehicles to social media algorithms, the rapid advancements in artificial intelligence (AI) technology have revolutionised many industries - one being healthcare and medicine.
[image from https://iabac.org/blog/the-ethical-implications-of-ai-in-healthcare]
The rise of autonomous vehicles has made exciting waves in the automobile industry. Yet, the increased tendency for Artificial Intelligence (AI) systems to replace human actors has brought a new set of legal and ethical challenges to the table.
[image from https://dda.ndus.edu/ddreview/are-self-driving-cars-safe/]
Consider the following hypothetical: Jane has recently applied for a job as a software engineer at Tech Co. Jane’s application was rejected. Unbeknownst to Jane, Tech Co uses a machine learning system to automate their hiring process. The AI system chose the other applicants, all less experienced and also all male, over Jane.
[image from https://datapopalliance.org/lwl-25-discrimination-in-data-and-artificial-intelligence/]
On 30 July 2021, the Federal Court of Australia decided that AI systems can be inventors. In a world-first determination in Thaler v Commissioner of Patents, the Honourable Justice Beach found that AI systems can be named as the inventor on a patent application under Australian patent law.
[image: iStockphoto]
How does one determine whether an action is good or bad? Most people use some form of moral framework to rationalise the ‘goodness’ of their decisions.
[image: Atlassian]
In today's blog post, we are excited to introduce our latest initiative: a DeepNeuron Law and Ethics (LAE) Committee.