Pandemic Leads Education Professionals to Seek Changes in Their Careers

The COVID-19 pandemic has left the global workforce in a state of flux, with the vast majority of employees — including those in the education sector — looking for changes in their careers and in the workplace.

According to a recent survey of more than 1,000 education professionals, 83 percent want to make some changes in their career, with 89 percent saying their organizations need to do more to listen to their needs. And for 88 percent, the meaning of career success has changed for them. For many (43 percent), that means that "achieving a work-life balance is a bigger factor to achieving success now." For 29 percent, it means having some control (flexibility) over "where and when they work."

The survey, conducted by Oracle and Workplace Intelligence, found that the pandemic has negatively impacted the lives of most education professionals (84 percent) over the last year. Nearly three-quarters (73 percent) felt "stuck" in their careers. Almost a third (30 percent) said their mental health worsened in the last year. An even larger share, 40 percent, said they "felt like they lost control over their career in the past year." And 26 percent said the last year has "left them feeling unmotivated to pursue career goals." Almost half (45 percent) said they'd like to gain new skills, and 35 percent said their employers should offer more opportunities for skill development and professional learning.

Interestingly, large percentages of respondents seem to trust technology, in particular artificial intelligence, to help them in their careers and decision-making. According to the survey, 68 percent of education professionals would trust an AI chatbot to help them with career-related decisions; 52 percent said using AI would help them feel more empowered in their careers; and nearly half (49 percent) said they'd be more likely to stay with an employer that used AI "to support career growth."

More than a third said they'd like to use technology to help them "identify skills they need to develop." And almost a third (30 percent) indicated they "would like technology to help give them specific steps to progress toward career goals."

Results of the full survey, which included 1,010 education professionals and more than 13,000 respondents in other sectors, can be found on the Oracle site.

About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/.

