Numerous applications rely on implication rules, either as models of causal relations among data or as components of their reasoning and inference systems. Although mature and robust implication rule models exist for 'perfect' (e.g., Boolean) scenarios, there is still a need for better models when the data (or system models) are uncertain, ambiguous, vague, or incomplete. Decades of research have produced models for probabilistic and fuzzy systems. However, work on uncertain implication rules within the Dempster-Shafer (DS) theoretical framework remains limited. Given that DS theory provides increased robustness against uncertain or incomplete data, and that DS models can easily be converted into probabilistic and fuzzy models, a DS-based implication rule that is consistent with classical logic would improve inference methods when dealing with uncertainty. We introduce such a DS-based uncertain implication rule. The model satisfies the reflexivity, contrapositivity, and transitivity properties, and is embedded into an uncertain logic reasoning system that is itself consistent with classical logic. When the data are 'perfect' (i.e., free of uncertainty), the implication rule model reduces to the classical implication rule. Furthermore, we introduce an ambiguity measure to track the degeneracy of belief models throughout inference processes. We illustrate the use and behavior of both the uncertain implication rule and the ambiguity measure in a human-robot interaction problem.