Are DataCamp Assessments Reliable?

SUNDAY, JUNE 20, 2021 • 5 MINS

I recently came across an article on LinkedIn that caught my eye: it explores DataCamp's skill assessments and the validity behind them as they relate to the professional data scientist certification. As someone who obtained the certification back in late April, this hits particularly close to home, because I want to be sure the certification will hold water as more individuals earn it.

The article covers the following measures that the data science team at DataCamp uses to ensure their skill assessments remain credible:

  1. Split-Half Reliability
  2. Test-Retest Reliability
  3. Face Validity

Split-Half Reliability

Split-half reliability is all about ensuring that different subsets of questions yield similar results for users at a given skill level. The last thing you want is two similarly skilled individuals taking the assessment with slightly different questions and getting very different results. It was reassuring to see that this measure scored high, indicating that the assessment is robust and draws similar conclusions even when the questions vary. I really like this, because it means nobody can continuously retake the assessment and earn a high score simply by memorizing the questions and answers rather than truly understanding the data science or programming concepts behind them.
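To make this concrete, here is a minimal sketch of how split-half reliability is often computed (this is the textbook approach, not necessarily DataCamp's exact method): split the items into two random halves, correlate users' scores on each half, and apply the Spearman-Brown correction. The response matrix below is made up for illustration.

```python
import numpy as np

def split_half_reliability(responses: np.ndarray, seed: int = 0) -> float:
    """Spearman-Brown-corrected split-half reliability.

    responses: users x items matrix (1 = correct answer, 0 = incorrect).
    """
    rng = np.random.default_rng(seed)
    items = rng.permutation(responses.shape[1])    # shuffle, then split items in half
    half_a = responses[:, items[: len(items) // 2]].sum(axis=1)
    half_b = responses[:, items[len(items) // 2 :]].sum(axis=1)
    r = np.corrcoef(half_a, half_b)[0, 1]          # correlation between half scores
    return 2 * r / (1 + r)                         # Spearman-Brown correction

# Hypothetical data: 5 users answering the same 6 questions
scores = np.array([
    [1, 1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0, 0],
])
print(f"split-half reliability: {split_half_reliability(scores):.2f}")
```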

Test-Retest Reliability

Test-retest reliability is all about ensuring that when users take an assessment multiple times over a certain period, they get similar results. While this score was not as high as the split-half reliability score, that makes sense. The article notes that interim learning can occur between attempts, since the assessment recommends material to review once you complete it. Speaking from experience, I would take some time to review the testing material before retaking an assessment whose score I wanted to improve. Naturally, this decreases test-retest reliability, because I would come back with stronger skills than if I had done nothing (which is not the point of the measure). Even so, test-retest reliability was still high within the seven-day window DataCamp used, further supporting the notion that DataCamp's assessments are credible. That also makes sense, because I have no doubt there are individuals who do no interim learning and simply retake the assessment after a couple of days.
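For a rough idea of what this measure looks like, here is a minimal sketch with made-up numbers (not DataCamp's data): test-retest reliability is essentially the correlation between users' first and second attempts within the window.

```python
import numpy as np

# Hypothetical scores from the same six users' first and second attempts,
# both taken within a seven-day window.
first_attempt  = np.array([72, 85, 60, 91, 78, 66])
second_attempt = np.array([75, 83, 63, 90, 80, 70])

# Test-retest reliability is the correlation between the two attempts;
# values near 1 mean the assessment gives stable results over time.
r = np.corrcoef(first_attempt, second_attempt)[0, 1]
print(f"test-retest reliability: {r:.2f}")
```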

Face Validity

Face validity is all about ensuring that the scores make sense outside of the assessment (in other words, that they transfer to a role at a company). If users generally agree with their scores after taking the assessment, this measure will be high. From the graph the article showed, it was high and consistent over time, reaffirming that these assessments are the real deal. My first thought was, "Well, people can just claim they do not agree with a bad score, because many humans are naturally optimistic about their skills." After thinking more about it, though, there are probably not many people who would do this, given there is no prize for it. It is not as if you will land a job by disagreeing. You are only lying to yourself.
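Measured this way, face validity boils down to an agreement rate. A minimal sketch with hypothetical survey responses:

```python
import numpy as np

# Hypothetical post-assessment survey: did each user agree that their
# score reflects their actual skill level? (1 = agree, 0 = disagree)
agreement = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1])

# Face validity, in this simplified view, is the share of users who agree.
print(f"agreement rate: {agreement.mean():.0%}")
```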

Final Thoughts

Overall, knowing that DataCamp uses these measures to continually check the credibility of its assessments is very reassuring. Moreover, seeing that the results are high across all three measures makes me even more certain that nobody is rigging the system to land a certificate they are not qualified for. I will admit this was one of my fears after obtaining the certificate, given it had just launched and nobody knew for certain what to expect. Armed with this new knowledge, I am confident that this certificate will remain noteworthy to employers for the foreseeable future, whether or not I ever have to reference this article directly. Great job, DataCamp! As someone who has used your platform consistently since May of 2018, I cannot wait to see what is on the horizon.