Advanced Insights into AI-Powered Diagnostics: An Analytical Project Framework for Healthcare Innovation

Authors

  • Gursahildeep Singh Sidhu, University of North Alabama, United States.
  • Araf Nishan, International American University, Los Angeles, California, United States.
  • Md Humayun Kabir Mehedi, International Islamic University, Chittagong, Bangladesh.
  • Asma Ul Hosna Patowary, Bangladesh University of Professionals, Dhaka, Bangladesh.

Keywords

AI diagnostics, U.S. healthcare, adherence, satisfaction, implementation challenges, training, resource allocation

Abstract

This study examines the factors that shape U.S. healthcare professionals' familiarity with, adherence to, satisfaction with, and perceived challenges of AI diagnostic protocols, and how these vary by job role and healthcare facility type. As AI increasingly supports the diagnostic process, understanding adoption barriers and satisfaction levels across clinical environments is essential for realizing AI's potential in healthcare. A cross-sectional survey of 100 U.S. healthcare professionals from public hospitals, private hospitals, and clinics assessed familiarity with AI protocols, adherence, satisfaction, and implementation challenges. Data were analyzed in SPSS using descriptive statistics, Chi-Square tests, ANOVA, logistic regression, and correlation analysis; open-ended responses about implementation challenges were thematically analyzed to contextualize the quantitative findings. The study revealed a strong positive relationship between familiarity with AI diagnostic protocols and adherence: professionals with more training were more adherent. AI protocols were well received across job roles, although experienced professionals (e.g., pediatricians) were more satisfied with them than residents and interns, while also voicing more concerns about autonomy. Public hospitals reported the highest satisfaction with AI diagnostics, whereas smaller clinics faced more patient-specific challenges and infrastructure limitations. Experienced professionals were more aware of protocol gaps, underscoring that clinician feedback is critical for refining AI diagnostic tools. These results suggest that AI integration in U.S. healthcare requires tailored strategies: role-specific training, facility-based resource allocation, and active clinician participation in AI protocol development.
The study identifies barriers to AI adoption among healthcare professionals at all experience levels, along with ways to address them and to better support professionals across the clinical spectrum, so that AI can be adopted more effectively, equitably, and efficiently in a variety of clinical settings, improving patient outcomes and operational efficiency.
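The abstract names the battery of tests used (Chi-Square, ANOVA, correlation, logistic regression, run in SPSS). As a rough illustration of that analysis pipeline only, the same tests can be sketched in Python with SciPy on synthetic data; every variable name and value below is invented for demonstration and is not the study's data.

```python
# Illustrative sketch of the survey analysis described in the abstract,
# run on SYNTHETIC data. The study itself used SPSS; all responses here
# are randomly generated stand-ins, not the authors' dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100  # sample size reported in the study

# Hypothetical coded survey responses
facility = rng.choice(["public", "private", "clinic"], size=n)
trained = rng.choice([0, 1], size=n)                 # received AI training?
familiarity = rng.integers(1, 6, size=n)             # 1-5 Likert scale
adherence = np.clip(familiarity + rng.integers(-1, 2, size=n), 1, 5)
satisfaction = rng.integers(1, 6, size=n)

# Chi-Square test: is adherence (dichotomized) associated with training?
adherent = (adherence >= 4).astype(int)
table = np.array([[np.sum((trained == t) & (adherent == a))
                   for a in (0, 1)] for t in (0, 1)])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# One-way ANOVA: does satisfaction differ by facility type?
groups = [satisfaction[facility == f] for f in ("public", "private", "clinic")]
f_stat, p_anova = stats.f_oneway(*groups)

# Pearson correlation: familiarity vs. adherence
r, p_corr = stats.pearsonr(familiarity, adherence)

print(f"chi2={chi2:.2f} (p={p_chi:.3f}), "
      f"ANOVA F={f_stat:.2f} (p={p_anova:.3f}), r={r:.2f}")
```

A logistic regression of adherence on familiarity and role (also reported in the abstract) would follow the same pattern with a package such as statsmodels; it is omitted here to keep the sketch dependency-light.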

References

Ahmed, H., & Robinson, T. (2022). The role of training in improving AI adoption in clinical practice. Journal of Clinical Medicine, 11(12), 2891. https://doi.org/10.3390/jcm11122891

Alam, L., & Mueller, S. (2021). Examining the effect of explanation on satisfaction and trust in AI diagnostic systems. BMC Medical Informatics and Decision Making, 21(1), 178.

Aurangzeb, M., Tunio, M., Rehman, Z., & Asif, M. (2021). Influence of administrative expertise on human resources practitioners on the job performance: Mediating role of achievement motivation. International Journal of Management, 12(4), 408–421.

Busnatu, Ș., Niculescu, A. G., Bolocan, A., Petrescu, G. E., Păduraru, D. N., Năstasă, I., ... Martins, H. (2022). Clinical applications of artificial intelligence: An updated overview. Journal of Clinical Medicine, 11(8), 2265.

Choudhury, A., & Asan, O. (2022). Impact of accountability, training, and human factors on the use of artificial intelligence in healthcare: Exploring the perceptions of healthcare practitioners in the US. Human Factors in Healthcare, 2, 100021.

Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98. https://doi.org/10.7861/futurehosp.6-2-94

Di Nardo, F., Chiarello, M., Cavalera, S., Baggiani, C., & Anfossi, L. (2021). Ten years of lateral flow immunoassay technique applications: Trends, challenges and future perspectives. Sensors, 21(15), 5185.

Erdal, H. G., Yilmaz, T. R., & Demir, S. K. (2023). Factors affecting AI adoption in healthcare: A systematic review. BMC Health Services Research, 23(1), 789. https://doi.org/10.1186/s12913-023-09876-5

Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056

Gupta, J., McIntyre, S., & Lee, M. (2022). Addressing concerns of new healthcare providers in AI usage. International Journal of Health Sciences, 15(4), 309–324. https://doi.org/10.26719/ijhs.23.309

Hernandez-Boussard, T., Bozkurt, S., & Ioannidis, J. P. A. (2023). The digital divide in AI adoption: Implications for healthcare disparities. The Lancet Digital Health, 6(3), e123–e130. https://doi.org/10.1016/S2589-7500(23)00234-5

Hussein, R., Smith, A., & Davis, L. (2022). Resource allocation challenges in small healthcare facilities for AI adoption. Journal of Public Health Policy, 42(1), 23–45. https://doi.org/10.1057/s41271-022-00365-8

Johnson, K., & Patel, R. (2022). Public vs. private hospital dynamics in AI satisfaction. AI in Healthcare, 21(2), 134–150.

Jones, L. M., Smith, A. B., & Williams, C. D. (2021). Engaging clinicians in AI ethics: Strategies for effective integration. Journal of Medical Ethics, 50(2), 89–95.

Kasula, B. Y. (2021). Ethical and regulatory considerations in AI-driven healthcare solutions. International Meridian Journal, 3(3), 1–8.

Kim, D., Lee, S., & Park, J. (2022). Analyzing resource impacts on AI integration in public hospitals. Healthcare Technology Journal, 33(6), 545–557. https://doi.org/10.1016/j.htj.2022.06.009

Laï, M. C., Brian, M., & Mamzer, M. F. (2020). Perceptions of artificial intelligence in healthcare: Findings from a qualitative survey study among actors in France. Journal of Translational Medicine, 18, 1–13.

Micocci, M., Borsci, S., Thakerar, V., Walne, S., Manshadi, Y., Edridge, F., ... Hanna, G. B. (2021). Attitudes towards trusting artificial intelligence insights and factors to prevent the passive adherence of GPs: A pilot study. Journal of Clinical Medicine, 10(14), 3101.

Nagendran, M., Chen, Y., Lovejoy, C. A., Gordon, A. C., Komorowski, M., Harvey, H., ... Maruthappu, M. (2020). Artificial intelligence versus clinicians: Systematic review of design, reporting standards, and claims of deep learning studies. BMJ, 368.

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

Peterson, L., Roberts, M., & Thompson, G. (2021). Clinician-driven AI development: A model for sustainable healthcare innovation. Health Innovation Journal, 29(3), 312–329.

Price, W. N., & Cohen, I. G. (2019). Privacy in the age of medical big data. Nature Medicine, 25(1), 37–43. https://doi.org/10.1038/s41591-018-0272-7

Quinn, T. P., Sen, T., & Parker, W. (2020). Public perceptions of artificial intelligence and machine learning in healthcare: A scoping review. BMJ Open, 10(7), e033952. https://doi.org/10.1136/bmjopen-2019-033952

Rajpurkar, P., Irvin, J., Ball, R. L., Zhu, K., Yang, B., Mehta, H., ... Ng, A. Y. (2018). Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Medicine, 15(11), e1002686. https://doi.org/10.1371/journal.pmed.1002686

Rani, S., & Kaleem, M. (2022). Experienced physicians' perspectives on AI as a complementary tool. Journal of Medical Ethics, 48(4), 245–252. https://doi.org/10.1136/medethics-2021-107456

Singh, P., & Thompson, L. (2021). Identifying weaknesses in AI diagnostic tools: The role of clinical experience. Journal of Healthcare Informatics, 19(2), 89–102.

Thomasian, N. M., Eickhoff, C., & Adashi, E. Y. (2021). Advancing health equity with artificial intelligence. Journal of Public Health Policy, 42(4), 602.

Torres, F., et al. (2022). Addressing barriers in AI protocol adherence in U.S. healthcare. American Medical Review, 16(5), 199–210.

Wu, H., Li, Y., & Chen, Z. (2023). Incorporating clinician feedback into AI tool development for improved diagnostics. Journal of Medical Systems, 47(3), 45–58. https://doi.org/10.1007/s10916-023-01845-2

Author Biographies

Gursahildeep Singh Sidhu

Master of Business Administration, University of North Alabama, United States.

Araf Nishan

MBA in Business Analytics, International American University, Los Angeles, California, United States.

Md Humayun Kabir Mehedi

Department of Business Studies, International Islamic University, Chittagong, Bangladesh.

Asma Ul Hosna Patowary

Master in Public Health, Bangladesh University of Professionals, Dhaka, Bangladesh.

Published

2022-06-30

How to Cite

Sidhu, G. S., Nishan, A., Mehedi, M. H. K., & Patowary, A. U. H. (2022). Advanced Insights into AI-Powered Diagnostics: An Analytical Project Framework for Healthcare Innovation. Journal of Business Insight and Innovation, 1(1), 60–75. Retrieved from https://insightfuljournals.com/index.php/JBII/article/view/32