Traditional stochastic programs optimize the expected value of some function that depends on the decision variables as well as on random variables that represent the uncertainty in the problem. Such formulations assume that the probability distribution of those random variables is known. In practice, however, the distribution is often unknown or cannot be accurately approximated. One way to address such ambiguity is to work with distributionally robust stochastic programs (DRSPs), which minimize the worst-case expected value with respect to a set of probability distributions. In this presentation we discuss some recent advances in research on DRSPs. In particular, we study how to identify the critical scenarios that result from solving a DRSP. We demonstrate that only some scenarios, not all, may have an "effect" on the optimal value or solution. Computational and analytical results show that identifying these effective scenarios can provide useful insight into the underlying uncertainties of the problem.
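As a minimal illustrative sketch (not taken from the talk), consider a discrete DRSP in which the ambiguity set contains all distributions on a finite scenario set, so the worst-case expectation reduces to the maximum scenario loss. A scenario can then be called "effective" if removing it changes the optimal worst-case value. All scenario values, costs, and function names below are hypothetical assumptions chosen for illustration.

```python
# Toy discrete DRSP: min over x of the worst-case (max over scenarios) loss.
# The ambiguity set here is ALL distributions on the scenario set, so the
# worst-case expected loss equals the maximum scenario loss (an assumption
# made for simplicity; real DRSPs use richer ambiguity sets).

scenarios = [2.0, 5.0, 8.0, 11.0]  # hypothetical demand scenarios

def loss(x, d, under=4.0, over=1.0):
    """Illustrative piecewise-linear under-/over-stocking cost."""
    return under * max(d - x, 0.0) + over * max(x - d, 0.0)

def drsp_value(scens):
    """Approximate min_x max_d loss(x, d) by a grid search over x."""
    grid = [i / 10.0 for i in range(0, 151)]  # x in [0, 15]
    return min(max(loss(x, d) for d in scens) for x in grid)

full = drsp_value(scenarios)

# A scenario is "effective" if deleting it changes the optimal value.
effective = [d for d in scenarios
             if abs(drsp_value([s for s in scenarios if s != d]) - full) > 1e-9]

print(full, effective)
```

In this toy instance only the two extreme scenarios are effective: removing either one lowers the worst-case optimal value, while the interior scenarios can be deleted without affecting it, mirroring the abstract's point that only some scenarios have an effect.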