May 23, 2008

Adult Learning via Technology

In assessing technology’s role in distance education for adults, Gibson (2000) argues that four questions must be considered: what is being accessed, how are programs designed, who is being served, and with what results? At the time of Gibson’s article in 2000, these were valid concerns for study. After eight years and significant advances in technology and the options available for distance learning, the question is whether these questions are still valid and worthy of additional study.

Range of Programs

Gibson lists higher education, business and industry, military and government, health care, and adult and continuing education as the primary groups utilizing distance education. Within the business and industry sector, the use of technology-enabled learning continues to grow. Sugrue & Rivera (2005) reported that the percentage of learning hours delivered via technology increased from 8.8% in 2000 to a projected 32.5% in 2005 (p. 14). Similar industry reports show this trend continuing for the foreseeable future. In higher education, Allen & Seaman (2007) report that from 2002 to 2006 the percentage of online enrollment relative to total enrollment in degree-granting postsecondary institutions grew from 9.7% to 19.8% (p. 5). In 2006, 3.5 million students were taking at least one online course, up from only 1.6 million in 2002 (p. 5). Allen & Seaman also report on the question of credibility:

“…whether online degrees are as good as those granted by face-to-face programs. Overall, only about one-in-five institutions disagrees with the statement that “online degrees have the same level of respect as face-to-face degrees.” About one-half are neutral, and the remaining portion (27 percent) agree with the statement.” (Allen & Seaman, 2007, p. 15)

The question of reach and usage may be interesting to study, but the acceptance and credibility factor would seem, to this author, the more critical one to study.

How Programs Are Designed

The design of distance education has been evolving since the early days of interactive video, then computer-aided instruction, computer-based training, web-based training, and now eLearning. Many early adopters tended to focus either on flashy multimedia “edutainment” programs or on converting PowerPoint presentations and paper-based materials to an online format. Unfortunately, edutainment-type programs were beyond the economic reach of many, and the proliferation of “page turner” eLearning programs helped coin the phrase “bLearning” (as in boring learning). The design of programs, in terms of their engagement level, instructional integrity, and transference of skills and knowledge, is very worthy of continued study. Especially as rapid-development tools proliferate on the market, many are tempted to sacrifice instructional integrity for quick results. Quantifying the impact of programs built on solid adult learning and instructional design principles versus those built with a more limited focus (perhaps on content only) could provide a very beneficial justification for the field of distance education.

Who Is Served

From my experience as a developer, student, and facilitator/professor of eLearning and distance education, I can anecdotally support Gibson’s (2000) claim that the majority of online learners are women. I have also observed that, in higher education in particular, there is often greater racial diversity represented as well. This is a refreshing change compared with many traditional universities. However, given Gibson’s alarming words regarding the risk of being perceived as “less well educated” if one selects distance education over traditional bricks-and-mortar institutions, I think it is imperative that this perception be studied over time. Again, as mentioned earlier in this paper, the acceptance and credibility of online learning would be important to study in order to help further validate it as a viable, and in some cases perhaps even preferable, method of education.

Results

Gibson (2000) raises a number of interesting questions regarding what results may come from greater usage of distance education, particularly from the perspective of potential social impact and change and the democratization of learning access. This is an admirable, and perhaps idealistic, goal. The question, though, is whether this is a matter for scholarly study or more of a political or policy question. In the absence of a definitive answer, perhaps we can all take whatever steps we can within our own spheres of influence toward Gibson’s ideal that all people are “ensured quality of access to and success in lifelong learning” (p. 436).


References
Allen, I. E., & Seaman, J. (2007). Online nation: Five years of growth in online learning. Retrieved May 15, 2008, from http://www.sloan-c.org/publications/survey/pdf/online_nation.pdf

Gibson, C. C. (2000). Distance education for life long learning. In A. Wilson & E. R. Hayes (Eds.), Handbook of adult and continuing education (pp. 423-437). San Francisco: Jossey-Bass.

Sugrue, B., & Rivera, R. (2005). State of the industry. Retrieved May 15, 2008, from http://www.astd.org/NR/rdonlyres/563C2472-1F53-4BEE-8213-7CC19BC532C5/0/ASTD_StateoftheIndustry_2005.pdf


- Robin

Copyright Robin Donnan 2008. All Rights Reserved.
http://www.perfassocinc.com


May 15, 2008

Technology and Life-Long Learning

Technology-enabled learning has become a staple of how adults partake in life-long learning. Government and military organizations were early adopters of technology-enabled learning, particularly via simulations. Workplace eLearning programs aim to provide just-in-time access to key job-related knowledge and skills training. Academic eLearning programs provide greater accessibility to higher education for busy working adults. Against this context, what can we make of the three elements Kasworm & Londoner (2000) argue “influence the design and conduct of technologically-mediated learning systems for adults” (p. 234)?

1. “Who should be responsible and have authority in the design and conduct of the adult learning experience [for technology-based learning programs]?” (p. 234)

Kasworm & Londoner (2000) posit that authority for learning process and design can reside with many different parties: instructional designers, technical support groups and experts, learners, and instructors. They argue that the traditional ISD model of instructional design—when applied to technology-enabled learning—is too inflexible to “target specialized learner needs, create unique classroom dynamics, or introduce new and varied content” (p. 235). Furthermore, Kasworm & Londoner argue that technical experts run the risk of becoming too “enamored by the ‘bells and whistles’ possible in the technologically-mediated learning approach” (p. 235).

In my experience as a designer and developer of eLearning since 1990, I have personally witnessed both of these issues. I have seen instructional designers struggle to adapt their behavioral, and often sequential, mindset to the more fluid and flexible approach that is preferred when developing engaging eLearning. For example, instructional designers new to eLearning often tend to create a “page-turner” program with a certain number of content screens followed by a multiple-choice question. After 10 minutes of this, the learner is no longer engaged, nor are the multiple-choice questions likely to be measuring knowledge acquisition much beyond the knowledge or comprehension level. I have also seen programmers and media developers create beautiful interactive media, but with questionable instructional benefit. For example, what is the true educational value of a simulated 3D office environment with full-motion video inserts of talking heads? Other than requiring significant bandwidth and graphics processing power, how does this type of “edutainment” help the learner truly learn the material any better or at a higher cognitive level? In fact, I often caution clients to beware of eLearning vendors who say their programs are highly interactive when what they really are is highly entertaining and media rich; true learning comes from the learner interacting with the program based on decisions they make, not from passively watching the monitor as a video or animation plays.

So who is responsible and should have authority in the design and conduct of the adult learning experience for technology-based learning programs? Simple. The learner should. It is up to the instructional designers and technology experts to come together to create programs that are engaging and educational and that provide flexible pathing through the material, e.g., test-out options for more advanced learners and remedial review for those who need additional assistance. The learner should therefore be considered throughout the design and development process, and should be an active participant during pilot testing, providing constructive feedback to help improve that program and future ones.

2. “How can practitioners create learning processes and design for these interactions and connections [between learners, instructors, and content] to occur?” (p. 237)

In a related vein, Kasworm & Londoner (2000) caution about “the repetitive nature of technology learning modules, the lack of instructional connections to the adult learner’s background and current communities of practice, and the lack of critical engagement in the content knowledge and skills” (p. 237). One potential solution is the use of blended learning—a mix of technology with instructor-led or other high-touch (rather than high-tech) techniques. Bielawski & Metcalf (2005) expand this definition of blended learning to include “different online learning delivery methods, such as asynchronous and synchronous course delivery [that] can be used to create effective training and development solutions that reflect a sophisticated blending of new e-learning technologies and alternative approaches to instructional design” (p. xvii). In fact, some of the different eLearning categories presented by Bielawski & Metcalf (2005) include facilitated synchronous learning (i.e., live online training), facilitated asynchronous learning (e.g., online learning such as that offered by Walden University), self-paced computer- or web-based training, and collaboration tools (e.g., online chat, web conferencing, and discussion boards).

In my experience, I have used all of these instructional techniques either on their own or as part of a blended learning strategy. In selecting courses that are good candidates for purely technology-delivered methods, I recommend those that have (1) a large audience, (2) stable content, (3) interaction needs that translate well to the selected eLearning approach, and (4) a content area with natural “pull” (i.e., one that learners are motivated to complete). In addition, technical, procedural, and factual content are excellent candidates for eLearning delivery; “soft skills” and application can certainly be addressed via eLearning, but preferably only if the budget allows for a more extensive development effort and the team has access to instructional designers and technology experts experienced in building interactive simulations. Another viable option for this type of learning is a blended approach: for example, present the procedural/factual prerequisite knowledge via self-paced technology and then address the application, analysis, and synthesis components via an instructor-facilitated method (whether face-to-face classroom or facilitated online learning). Collaboration tools (e.g., online chat, web conferencing, and discussion boards) can also be used very effectively (1) before the learning event to establish the learning community and ensure understanding of prerequisite knowledge; (2) after the learning event to reinforce the training and ensure transfer of the new skills and knowledge; and (3) by the team designing and developing the learning itself.
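To make these selection criteria a bit more concrete, here is a minimal sketch in Python of how one might score candidate courses against the four criteria above. The course names, ratings, and equal weighting are purely hypothetical assumptions of mine, not part of any published model; the point is simply that the criteria can be applied systematically rather than by gut feel.

    from dataclasses import dataclass

    @dataclass
    class Course:
        """A candidate course, rated 1 (low) to 5 (high) on each criterion."""
        name: str
        audience_reach: int      # (1) size of the target audience
        content_stability: int   # (2) how stable the content is
        interaction_fit: int     # (3) how well interactions translate to eLearning
        learner_pull: int        # (4) natural motivation to complete the course

    def elearning_score(course: Course) -> float:
        """Average the four ratings; a higher score suggests a better fit
        for purely self-paced, technology-delivered instruction."""
        ratings = [course.audience_reach, course.content_stability,
                   course.interaction_fit, course.learner_pull]
        return sum(ratings) / len(ratings)

    # Hypothetical candidates: a procedural systems course vs. a soft-skills course.
    candidates = [
        Course("New billing system procedures", 5, 4, 5, 3),
        Course("Coaching difficult conversations", 3, 3, 2, 4),
    ]

    for c in sorted(candidates, key=elearning_score, reverse=True):
        print(f"{c.name}: {elearning_score(c):.2f}")

In practice I would expect the ratings (and any weighting) to come from a needs analysis rather than a single developer’s judgment, but even a simple scoring pass like this keeps the selection conversation grounded in the four criteria.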

3. “What should be the place and role of evaluation and critical reflection for instructors, designers, technical support personnel, and learners?” (p. 238)

The last of Kasworm & Londoner’s elements to consider is the evaluation of learning via technology. Kasworm & Londoner (2000) lament that “traditional learning systems often use both formative and summative evaluation to improve a learning event…however, there is limited discussion of models and strategies for effective evaluation within technology-mediated instruction” (p. 238). Perhaps this was the case at the time of their writing, but progress has since been made to ensure evaluation is a critical component of distance learning programs. Allen (2003) recommends the use of rapid prototyping as a method for formative evaluation via successive cycles of design, creation, and evaluation (p. 137). Bielawski & Metcalf (2005) recommend establishing and tracking measurements of both business impact and training efficiency. For example, business impact metrics might include participation, progression, and satisfaction, while training efficiency metrics might include cost and time measures, level 1 evaluations, level 2 tests, and level 3 assessments (per Kirkpatrick’s model) (Bielawski & Metcalf, 2005, p. 145).

In my experience, one can design distance learning that provides learners with multiple opportunities to evaluate their own knowledge and gives them control over their learning experience based on the results of that evaluation. For example, pre-tests can be used to let learners opt out of particular sections or modules; these results can also help direct the learner to the sections of the course that best meet their needs. Pre-test and post-test results can also be compared to provide quantitative evidence of increased learning resulting from completion of the program. Additionally, if the organization links its training to a Learning Management System, these results can be rolled up to analyze trends at the module, course, curriculum, or even demographic or subgroup level, all of which are important data and feedback points to help shape current and future programs.
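As a simple illustration of the pre-test/post-test comparison and LMS-style roll-up described above, the sketch below (Python) computes each learner’s gain and then averages gains by module. The learner names, module names, and scores are invented for illustration; a real Learning Management System would, of course, pull these records from its own database.

    from collections import defaultdict

    # Hypothetical assessment records: (learner, module, pre_score, post_score).
    records = [
        ("learner_a", "Module 1", 55, 85),
        ("learner_b", "Module 1", 70, 90),
        ("learner_a", "Module 2", 40, 80),
        ("learner_b", "Module 2", 65, 75),
    ]

    # Per-learner gain: quantitative evidence of increased learning.
    for learner, module, pre, post in records:
        print(f"{learner} / {module}: gain = {post - pre} points")

    # Roll-up by module, the kind of trend report an LMS might produce.
    gains = defaultdict(list)
    for _, module, pre, post in records:
        gains[module].append(post - pre)

    for module, g in sorted(gains.items()):
        print(f"{module}: average gain = {sum(g) / len(g):.1f} points")

The same grouping idea extends to curriculum- or subgroup-level roll-ups simply by changing the key used for aggregation.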

One last, very important point: in evaluating eLearning programs, remember to give more credence to the feedback received from the learners than to that of the sponsors paying for the program. I have seen too many programs fail because a key stakeholder said, “I know what the learners need and want,” rather than finding out for sure from the learners themselves. Like information technology professionals, designers and developers of distance learning and eLearning programs should always take a user-centered approach.

References

Allen, M. W. (2003). Michael Allen's guide to e-learning: Building interactive, fun, and effective programs for any company. Hoboken, NJ: John Wiley & Sons, Inc.

Bielawski, L. & Metcalf, D. (2005). Blended elearning: Integrating knowledge, performance support, and online learning (2nd ed.). Amherst, MA: HRD Press.

Kasworm, C., & Londoner, C. A. (2000). Adult learning and technology. In A. L. Wilson & E. R. Hayes (Eds.), Handbook of adult and continuing education (pp. 224-241). San Francisco: Jossey-Bass.

- Robin

Copyright Robin Donnan 2008. All Rights Reserved.
http://www.perfassocinc.com


May 7, 2008

Individual Employee Development

In workplace learning and performance improvement, determining the strategy and direction for individual employee development is a critical task. Ideally, the process begins before the employee enters the workplace—with the development of a competency-based development structure—and extends throughout their tenure with that organization. Once within an organization, individual development should include components designed to aid in acculturation, to enhance self-awareness and lay the groundwork for further learning, and to maximize both formal and informal learning methods to facilitate both informational and transformative learning.

Competency-Based Development Structure

Beginning in the 1990s, many organizations learned that to compete in a global marketplace, it was critical to capitalize on the core competencies that differentiate the services and products they offer (Prahalad & Hamel, 1990). Based on my observations as a workplace learning and performance improvement professional, the predominant approach has been to define leadership competencies that are common to all positions in the organization and then to define functional competencies specific to particular job clusters. This approach not only helps ensure consistency in the organizational culture, but also ensures that “individuals… efforts are not so narrowly focused that they cannot recognize the opportunities for blending their functional expertise with those of others in new and interesting ways” (Prahalad & Hamel, 1990, p. 5). As such, any employee development plan should begin with the definition of core leadership and functional competencies. These competencies can then be used as criteria for recruiting, whether for external hires or internal transfers and promotions. Competencies can also be used to help identify the learning and knowledge needs of both incumbents and new employees, and they should serve as criteria for performance assessments and each employee’s development planning.
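To picture how such a structure might hang together, here is a minimal sketch in Python of a competency model with organization-wide leadership competencies plus functional competencies per job cluster, along with a simple gap check that could feed an individual development plan. The competency names and job clusters are invented for illustration and are not drawn from Prahalad & Hamel.

    # Hypothetical competency model: leadership competencies apply to every
    # position; functional competencies are specific to a job cluster.
    LEADERSHIP_COMPETENCIES = ["Communication", "Collaboration", "Customer focus"]

    FUNCTIONAL_COMPETENCIES = {
        "Sales": ["Negotiation", "Account planning"],
        "Engineering": ["Systems design", "Quality assurance"],
    }

    def competencies_for(job_cluster: str) -> list:
        """The full set expected for a position: the shared leadership set
        plus the cluster-specific functional set."""
        return LEADERSHIP_COMPETENCIES + FUNCTIONAL_COMPETENCIES.get(job_cluster, [])

    def development_needs(job_cluster: str, demonstrated: set) -> list:
        """Compare demonstrated competencies against the model to identify
        gaps for an individual development plan."""
        return [c for c in competencies_for(job_cluster) if c not in demonstrated]

    print(development_needs("Sales", {"Communication", "Negotiation"}))
    # -> ['Collaboration', 'Customer focus', 'Account planning']

The value of keeping the model this explicit is that the same competency lists can drive recruiting criteria, performance assessments, and development planning without drifting apart.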

Acculturation and Self-Awareness

Once hired, new employees need to be acculturated to the organization. Klein & Weaver’s (2000) research “revealed that employees attending…orientation training were significantly more socialized on 3 of the 6 socialization content dimensions (goals/values, history, and people) than employees who did not attend the training. Employees attending the orientation training also had significantly higher levels of affective organizational commitment than nonattendees” (Abstract). Additionally, new employee orientation has been shown to have a positive impact on employee retention, with employees “69% more likely to remain with the company after three years if they completed a full orientation program” (as cited in Sims, 2002, p. 6). These findings are an important argument for conducting new employee orientation, considering the thousands of dollars spent on recruiting and hiring each new employee and the cost of recruiting and training replacements if turnover is high.

New hire orientation can be an important opportunity not only for acculturation, but also to lay the groundwork for future work and learning with that organization. For example, learning and personality instruments such as the StrengthsFinder and the Myers-Briggs Type Indicator (Buckingham & Clifton, 2001; Kroeger & Thuesen, 1992; Lawrence, 1979/1982) can be effective tools to enhance self-awareness and thus improve team and learning results (Lankard, 1996). Experience has shown that having the workforce complete the same instrument creates a common language and understanding that employees can use when resolving conflicts, deciding how best to approach others with ideas, and ensuring balanced team composition so that multiple perspectives and abilities are represented.

Formal Learning

With core leadership and functional competencies defined and employees acculturated and self-aware, it is time for on-the-job work experience and workplace learning to play their roles. Curricula should be designed to support the organization’s competency model, and employees need to have a range of development opportunities available to them should they find themselves deficient in a particular competency area. Deficiencies may be identified through self-assessment or assessment of the employee’s work (Lankard, 1996).

Formal learning programs need to be designed with respect to Knowles’ concepts of andragogy, in particular addressing learners’ need to know the rationale/why of what they are learning, supporting the self-concept of the learner, and providing opportunities for learners to share their prior experiences and relate new learning to current challenges (Knowles, Holton, & Swanson, 1973/2005). To ensure workplace learning is transformational as well as informational, it is critical to incorporate action learning, hands-on experience, feedback, and reflection (Burton, 2006; Imel, 1998; Pedler, 1983/1987). Incorporating problem solving on actual work problems or using case studies that simulate common work problems will have the greatest impact and retention of new knowledge and skills (Lankard, 1996; Pedler, 1983/1987). Additionally, performance support such as job aids and access to online resources (e.g., Help systems and knowledge bases) can provide formal support and reinforcement after the learning program.

Informal Learning

Complementing formal learning, informal learning can offer additional informational and transformational benefits. Informal learning can be defined as the individually driven lifelong learning that occurs outside training or a classroom. Lifelong learning, in turn, encompasses many of the meta-learning skills that can be applied to everything one learns over a lifetime, including an individual’s ability to “take responsibility for learning, learn through research, reflect and evaluate, [and] use information and communications technology” (Kerka, 2001). Progressive companies that strive to be knowledge-enabled learning organizations need to pay attention to these transformative elements of learning as well as the more common human capital and informational learning elements.

Some important methods supporting informal learning include communities of practice and the growing number of Web 2.0 collaboration and social software tools. As Rozwell (2008) shared in a presentation earlier this week, social networks are self-forming communities and groups that often form knowledge collectives. These communities and collectives can be an unlimited source of knowledge sharing and creation that make up an organization’s ‘learning ecosystem.’ Some of the most common tools for informal learning include mentoring, coaching, internships, offering access to experts, and providing virtual meeting and collaboration space for communities (e.g., via web conferencing and discussion boards). The benefits of investing in these informal as well as formal methods of learning are to create channels for collaboration, do more with existing resources, and improve communication, teamwork, problem solving, and learning (Rozwell, 2008).

Mentoring in particular can have many positive benefits for employee development, for both the mentor and the protégé. As Hansman (2000) argues, mentoring is “integral to learning in the workplace, to receiving career help, and for developmental and psychosocial support” (p. 494). In addition to developing employees, mentoring can also support efforts to transfer knowledge, bridge gaps between different work groups, and develop high-potential employees (Hansman, 2000). To structure a mentoring program for employee development, it would be important to give mentors and protégés the opportunity to select one another; where that is not possible, group mentoring may be an attractive option. Additionally, executive coaching using external professionals would be the most effective approach for ensuring that high-level employees in the organization also receive the feedback and guidance they need to improve their individual (and therefore the organization’s) performance.

Conclusion

To develop an effective employee development program, it is important to think holistically and act systematically. Aim for development of the whole employee—as not just a receptacle for important informational learning that will enable them to contribute to the organization, but also as a lifelong learner with important underlying core competencies and intelligences that can equally benefit the organization by improving how work gets done. In implementing the employee development program, act systematically so that there are clear links between hiring criteria, performance expectations, workplace learning opportunities, and job responsibilities. Making the growth of employees a priority and showing them a clear path with options customized to meet their needs can help organizations reap the benefit of having a workforce that is more innovative, engaged, and loyal.

References

Buckingham, M. & Clifton, D. (2001). Now discover your strengths. New York: The Free Press.

Burton, J. (2006). Transformative learning: The hidden curriculum of adult life. Work Based Learning in Primary Care, 4(1), 1-5.

Hansman, C. (2000). Formal mentoring programs. In A. Wilson & E. Hayes (Eds.), Handbook of adult and continuing education (pp. 493-507). San Francisco: Jossey-Bass.

Imel, S. (1998). Transformative learning in adulthood (ERIC Digest No. 200). Columbus, OH: ERIC Clearinghouse on Adult Career and Vocational Education. (ERIC Document Reproduction Service No. ED423426)

Kerka, S. (2001). The balancing act of adult life. Retrieved March 7, 2008 from http://www.cete.org/acve/docgen.asp?tbl=digests&ID=114

Klein, H. J., & Weaver, N. A. (2000). The effectiveness of an organizational-level orientation training program in the socialization of new hires [Abstract]. Personnel Psychology, 53(1), 47-66.

Knowles, M. S., Holton, E. F., & Swanson, R. A. (1973/2005). The adult learner (6th ed.). Burlington, MA: Elsevier Butterworth-Heinemann.

Kroeger, O., & Thuesen, J. M. (1992). Type talk at work. New York: Delacorte Press.

Lankard, B. (1996). Acquiring self knowledge for career development. In ERIC Digest No 175. Retrieved April 25, 2008 from http://www.cete.org/acve/docgen.asp?tbl=digests&ID=28

Lawrence, G. (1979/1982). People types and tiger stripes: A practical guide to learning styles (2nd ed.). Gainesville, FL: Center for Applications of Psychological Type.

Pedler, M. (1983/1987). Action learning in practice (3rd ed.). Brookfield, VT: Gower Publishing.

Prahalad, C. K., & Hamel, G. (1990). The core competence of the corporation (HBR OnPoint Article No. 6528). Boston, MA: Harvard Business School Publishing Corporation, Harvard Business Review.

Rozwell, C. (2008, May). Web 2.0 in the learning ecosystem: Challenges and benefits of adoption. Poster session presented at Saba and Gartner's Web 2.0 in the Learning Ecosystem, Webinar.

Sims, D. (2002). Creative new employee orientation programs. New York: McGraw-Hill Professional.

- Robin

Copyright Robin Donnan 2008. All Rights Reserved.
http://www.perfassocinc.com


May 6, 2008

Personal Information Filtering

Dalkir (2005) defines information filtering as the process by which one can “go through an enormous amount of information to find the small portion that is relevant to us” (p. 238). With the vast amount of information we all have access to today, it becomes imperative to develop our own personal information filtering processes. Personally, I use two different approaches: one for work and one for school.

For work, I rely upon consistent electronic file structures to enable easy retrieval of information. This structure is used for files as well as for email (including the use of rules to file emails automatically into the correct folders). I also use color coding of my calendar and any physical file folders to more easily differentiate between clients and projects.

For school, I rely upon EBSCO alerts and an askSam database that I started in my second quarter at Walden (Seaside software, n.d.). To stay abreast of new articles related to my research focus, I’ve set up a number of EBSCO alerts that are automatically emailed to me; I also have them set up on a Firefox page via RSS feeds so that I can quickly preview articles as they arrive. For my research database, I’ve applied a number of the concepts from this program. I first spent time defining the taxonomy and structure for the database; this involved defining the key categories and search terms I anticipated using when retrieving information from my research. With keywords and categories defined, I then began to enter all my journal articles, class notes, and assignments. The most critical thing I’ve learned is to stay disciplined with this entry process, e.g., using the week between quarters to update the database from the previous quarter. Now finishing my sixth quarter, I’m already finding the database to be an incredibly useful tool for quickly locating an article or reference I may recall from just a portion of its title or a quote. It has also been very useful in supporting further research and writing when I need to search the more than 400 entries in my database to see how many related hits I have on a particular topic.
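For readers curious what such a keyword-indexed research database looks like in the abstract, here is a rough sketch in Python of an entry store that can be searched by a fragment of a title, category, or quote. This is not the askSam format; the entries, categories, and notes are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Entry:
        """One item in the research database: an article, class note, or assignment."""
        title: str
        categories: list   # taxonomy terms defined up front
        notes: str = ""    # key quotes, summaries, page references

    # Invented sample entries.
    entries = [
        Entry("Working knowledge", ["knowledge management", "tacit knowledge"],
              "Technology extends the reach and speed of knowledge transfer."),
        Entry("Adult learning and technology", ["distance education", "andragogy"],
              "Authority for design can reside with designers, experts, or learners."),
    ]

    def search(term: str) -> list:
        """Return entries whose title, categories, or notes contain the term."""
        term = term.lower()
        return [e for e in entries
                if term in e.title.lower()
                or term in e.notes.lower()
                or any(term in c.lower() for c in e.categories)]

    for hit in search("knowledge"):
        print(hit.title)

The discipline of defining categories before entering content is what makes this kind of search useful later; the tool itself matters far less than the consistency of the taxonomy.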


References

Dalkir, K. (2005). Knowledge management in theory and practice. Burlington, MA: Butterworth-Heinemann.

Seaside software. (n.d.). askSam. Retrieved May 6, 2008 from http://www.asksam.com/brochure.asp

- Robin

Copyright Robin Donnan 2008. All Rights Reserved.
http://www.perfassocinc.com


May 3, 2008

KM Enablers

The four key enablers of knowledge management are infrastructure, culture, measures, and technology. Culture relates to organizational norms. Infrastructure relates to the “roles, organizational structures, and skills from which individual [KM] projects can benefit” (Davenport & Prusak, 1998, p. 155). Measures relate to being able to provide proof of the benefit of a knowledge management initiative; this can include qualitative evidence gathered from success stories as well as quantitative evidence such as an increase in an organization’s intellectual capital in the form of patents, processes, plans, new products, and so on. Finally, technology relates to the enabling platform upon which many KM initiatives are built.

In a comparison of different authors’ critical success factors for knowledge management, technology infrastructure and willingness to share are the top two items listed (Alazmi & Zairi, 2003). This underlines the importance of two of the four KM enablers: culture and technology.

Culture relates to such knowledge enabling (or inhibiting) factors as willingness to share, support for learning from mistakes, encouragement to share knowledge, allowing time for reflection, and recognition for new knowledge created (Davenport & Prusak, 1998; Kline & Saunders, 1993). McDermott (1999) argues that “the difficulty in most knowledge management effort lies in changing organizational culture and people's work habits. It lies in getting people to take the time to articulate and share the really good stuff. If a group of people don't already share knowledge, don't already have plenty of contact, don't already understand what insights and information will be useful to each other, information technology is not likely to create it" (p. 104).

In considering technology’s role in KM, Davenport & Prusak (1998) argue that “technology’s most valuable role in knowledge management is extending the reach and enhancing the speed of knowledge transfer” (p. 125). At the same time, they warn to not place too much emphasis on technology, citing “an excessive focus on technology [as] the most common pitfall in knowledge management” (p. 173). This sentiment is echoed by Fahey & Prusak (1998) who caution, “although IT is a wonderful facilitator of data and information transmission and distribution, it can never substitute for the rich interactivity, communication, and learning that is inherent in dialogue. Knowledge is primarily a function and consequence of the meeting and interaction of minds. Human intervention remains the only source of knowledge generation" (p. 273).


References

Alazmi, M., & Zairi, M. (2003). Knowledge management critical success factors. Total Quality Management, 14(2), 199-204.

Davenport, T., & Prusak, L. (1998). Working knowledge: How organizations manage what they know. Boston: Harvard Business School Press.

Fahey, L. & Prusak, L. (1998). The eleven deadliest sins of knowledge management. California Management Review, 40(3), 265-276.

Kline, P. & Saunders, B. (1993). Ten steps to a learning organization (2nd ed.). Salt Lake City, UT: Great River Books.

McDermott, R. (1999). Why information technology inspired but cannot deliver knowledge management. California Management Review, 41(4), 103-117.


- Robin

Copyright Robin Donnan 2008. All Rights Reserved.
http://www.perfassocinc.com