[100% Off] Conceptual Foundations of PLS Path Modeling
Duration: 10.0 hours
Learn the concepts of the PLS algorithm, reliability and validity, bootstrapping, and mediation and moderation.
Conceptual Foundations of PLS Path Modeling provides a comprehensive introduction to the most critical foundational concepts of PLS path modeling. Virtually the entire course consists of narrative lectures accompanied by PowerPoint slides and some readings. The course does not teach how to use any particular PLS path modeling software package; it is therefore very useful as a preliminary course to any "hands-on" course that teaches specific PLS path modeling (or related) software (such as SmartPLS 2.0 or 3.0; WarpPLS; the semPLS or plspm packages in R; ADANCO; pls-gui.com; and so on). Participants learn the conceptual basics of the following critical path modeling terms and processes: what PLS path modeling is; formative versus reflective constructs; assessing reliability and validity; bootstrapping and blindfolding; and how to estimate direct, indirect, total, mediating, and moderating effects.
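Since the course keeps bootstrapping at the conceptual level, the core idea it teaches can be sketched in a few lines: resample the cases with replacement, re-estimate the coefficient on each resample, and read a confidence interval off the resulting distribution. The sketch below is an illustrative toy, not any PLS package: an ordinary least-squares slope stands in for a path coefficient, and the data, seed, and sample sizes are invented for the example.

```python
import numpy as np

# Toy data: one exogenous variable x and one outcome y with a true
# "path coefficient" of 0.5 (all values invented for illustration).
rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.8, size=n)

def slope(x, y):
    # OLS slope of y on x after centering: the simplest stand-in for a
    # path coefficient estimated by PLS software.
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / (xc @ xc))

# Bootstrapping: draw cases with replacement and re-estimate the
# coefficient on each resample.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(slope(x[idx], y[idx]))

# Percentile confidence interval from the bootstrap distribution.
lo, hi = np.percentile(boot, [2.5, 97.5])
```

If the interval excludes zero, the path is judged significant, which is how PLS programs present bootstrapped path coefficients as well, except that they re-estimate the whole model on every resample.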
This course is intended for graduate students, faculty, and other researchers who seek explicit and comprehensive explanations of the foundational concepts that underlie PLS path modeling. It addresses basic questions such as: How does the PLS algorithm 'work'? What are the differences between the outer measurement and inner structural models in a path model with latent variables? What are the fundamental distinctions between formative and reflective constructs? What can one determine about direct, indirect, and total effects? About mediating and moderating effects? What do path coefficients, weights, and loadings tell you about the underlying data relationships? What are latent variable 'scores' or values? What do the predictive levels of variance explained in the endogenous latent variables actually mean?
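As a concrete anchor for the direct, indirect, and total effects asked about above: with a single mediator, the indirect effect is the product of the x→m and m→y paths, and in linear models the total effect decomposes exactly into direct plus indirect. Below is a minimal sketch using ordinary least squares rather than PLS; the variable names and coefficients are invented for the example, which illustrates the concept rather than any package covered by the course.

```python
import numpy as np

# Toy mediation model (all numbers invented for illustration):
#   a-path:  x -> m,   b-path: m -> y,   direct effect c': x -> y
rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(scale=0.5, size=n)
y = 0.3 * x + 0.4 * m + rng.normal(scale=0.5, size=n)

def ols(X, y):
    # Least-squares coefficients, with an intercept column prepended
    # and the intercept dropped from the returned coefficients.
    X = np.column_stack([np.ones(len(y)), *X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols([x], m)[0]        # a-path: effect of x on the mediator m
cp, b = ols([x, m], y)    # direct effect c' and b-path, jointly estimated
total = ols([x], y)[0]    # total effect of x on y (mediator omitted)

indirect = a * b
# In linear models the decomposition is exact: total = c' + a*b.
```

The identity `total = cp + indirect` holds exactly for OLS estimates in the same sample, which is why the direct/indirect/total terminology is well defined before any significance testing is done.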