
Tech leader tries to work MAGIC with AI incubators and research collaborations



Humberto Farias has been following the explosion of generative AI very closely.

Farias is the co-founder and chairman of Concepta Technologies, a technology company specializing in software development and programming in the fields of mobile, web, digital transformation and artificial intelligence.

He notes, for example, that Apple is putting generative AI at the center of the lives of hundreds of millions of iPhone users. But with recent data breaches, patient privacy issues and other IT concerns, he worries that health IT teams will be more likely to view AI as a threat than a tool.

The question is: How can health systems protect valuable patient data while still reaping the benefits of generative AI?

Farias has launched the Concepta Machine Learning and General Intelligence Center, or MAGIC, a collaborative research program, virtual incubator and service center for artificial intelligence and advanced technologies.

Healthcare IT News recently spoke with Farias to learn more about MAGIC and understand the concerns he’s heard from healthcare CTOs about implementing AI. He offered practical tips and examples for securely implementing AI and machine learning, and described what he believes should be a key focus for CIOs, CISOs, and other security leaders at hospitals and health systems as AI and machine learning continue to transform healthcare.

Q. Please describe your new organization, MAGIC. What are your goals?

A. Our mission is to push the boundaries of AI research and development while providing practical applications and services that solve real-world problems. At MAGIC, we aim to promote cutting-edge research for both fundamental technologies and applied solutions, support and nurture early-stage AI projects, educate and train AI professionals, provide consulting services, and build collaborative networks.

Some of our initial partnerships include healthcare companies dedicated to improving care for patients, hospitals and clinical teams. They combine assessment, analytics and education, then measure it all to improve care for everyone. Through our partnerships, we deploy AI to help programs run more efficiently and cost-effectively for their teams.

We are ready to partner with large health systems on some of the key issues they face when implementing AI. We have partnered with health systems like AdventHealth on other software technologies and are well-equipped to address the unique regulatory and patient privacy issues facing the healthcare industry.

Q. What are some concerns you’ve heard directly from healthcare CTOs about implementing AI into their business structures?

A. I’ve heard from healthcare CTOs that their primary concern about implementing AI into their business architecture remains privacy and data security. Healthcare executives want to ensure the privacy and security of sensitive patient data, given stringent regulations such as HIPAA and other requirements.

There is also uncertainty about whether AI solutions will be compatible with legacy systems and how they can be integrated, as well as about navigating the complex legal landscape to ensure AI solutions comply with all relevant laws and guidelines.

Implementing AI also carries a cost, and many healthcare CTOs are uncertain about the return on investment the technology can bring. I am always looking to reduce these costs by collaborating with colleagues and ensuring we are not operating in silos, learning from mistakes and leveraging successes from other industry leaders.

At the same time, there is a shortage of skilled personnel to develop, deploy and manage AI systems. Health systems are already strapped for budget and facing cuts, so partnering with an AI research program can help meet this need and accelerate the use of AI across the entire organization.

We are working to educate health systems on how to use AI both for simple purposes, like reducing repetitive administrative tasks, and for large-scale projects that can improve workflow for providers and actual patient care.

Finally, there are always ethical concerns when it comes to AI, and healthcare CTOs want to ensure AI is used ethically, especially in decisions that directly impact patient care. Top concerns in this area are informed consent and data bias.

Patients need to know when AI is being incorporated into their healthcare, and health systems must ensure that the data used to train AI algorithms does not lead to biased decisions that exacerbate disparities in healthcare outcomes across different demographic groups.

Q. What are some practical tips and examples you can share for deploying AI safely and securely, especially when considering sensitive medical data?

A. There are a number of ways healthcare executives can implement AI safely and securely. One of these is data encryption. Sensitive medical data should always be encrypted, both in transit across networks and at rest in records systems, to protect against unauthorized access.
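To make the at-rest encryption point concrete, here is a minimal sketch, not Farias's implementation, using Python's widely used cryptography package; the patient_note value and in-code key handling are illustrative assumptions, since a production system would keep keys in a key management service rather than in source code.

```python
from cryptography.fernet import Fernet

# Illustrative only: in production the key would live in a KMS/HSM,
# never alongside the data or in source code.
key = Fernet.generate_key()
cipher = Fernet(key)

patient_note = b"Hypothetical clinical note for a test patient."
ciphertext = cipher.encrypt(patient_note)   # safe to store at rest
plaintext = cipher.decrypt(ciphertext)      # readable only by key holders
assert plaintext == patient_note
```

Encryption in transit is typically handled at the network layer, for example with TLS, rather than in application code like this.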

Another tip is to implement strong access control mechanisms to ensure that only authorized personnel can access sensitive data. Large healthcare facilities should use multifactor authentication, role-based access control and 24/7 monitoring systems. Conducting regular security audits, with continuous monitoring to detect and respond to potential threats, is another way to ensure safety and security.
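As an illustration of role-based access control, the sketch below uses a hypothetical role-to-permission mapping with deny-by-default checks; a real deployment would pull roles from the organization's identity provider and log every decision for auditing.

```python
# Hypothetical roles and permissions, for illustration only.
ROLE_PERMISSIONS = {
    "physician": {"read_record", "write_record"},
    "billing_clerk": {"read_billing"},
    "data_analyst": {"read_deidentified"},
}

def is_authorized(role: str, action: str) -> bool:
    """Allow an action only if the role explicitly grants it (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("physician", "write_record")
assert not is_authorized("billing_clerk", "read_record")
```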

Regulatory compliance is another way to build trust: implement AI in line with frameworks such as HIPAA and GDPR. Another tip is to develop and adhere to ethical guidelines for AI use, with a focus on fairness, transparency and accountability.

Stanford Health Care, for example, has an ethics board that reviews AI projects for potential ethical issues.

Q. What do you think CIOs, CISOs, and other security leaders at hospitals and health systems should focus on as AI continues to explode in healthcare?

A. The use of AI in healthcare is inevitable, so the primary focus of CIOs, CISOs and other security leaders should continue to be ensuring data privacy and security and protecting patient data from breaches. Ensuring programs comply with regulations is a top priority.

Healthcare leaders should also focus on developing a scalable and secure IT infrastructure that can support AI applications without compromising performance or security. Then, to support this system, provide ongoing training to employees at all levels – from staff to vendors to the C-suite – on the latest AI technologies and security measures to minimize the risk of human error.

To ensure a safety plan is in place, healthcare leaders should develop and maintain a comprehensive risk management strategy that includes regular assessments, incident response plans, and continuous improvement.

Collaboration is also key: encourage IT, security and clinical teams to work together so that AI solutions meet the needs of all stakeholders while maintaining security and compliance standards.

The HIMSS AI in Healthcare Forum is scheduled to take place September 5-6 in Boston. Learn more and register.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: [email protected]
Healthcare IT News is a publication of HIMSS Media.
