Digital audio and video have recently taken center stage in the communications world, highlighting the importance of digital media indexing and information management. A central challenge for the multimedia research community is bridging the semantic gap between the low-level features extracted from audio or video data and the actual semantics of that data. In this paper, we propose a novel framework that works toward reducing this semantic gap. The proposed framework applies the Apriori algorithm and association rule mining to find frequent itemsets in the feature data set and to generate classification rules that classify video shots into different concepts (semantics). We also introduce a novel pre-filtering architecture that reduces the severe imbalance between positive and negative instances in the classifier training step, which helps reduce misclassification errors. Our proposed framework shows promising results in classifying multiple concepts.
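To make the core idea concrete, the following is a minimal, generic sketch of Apriori-based frequent itemset mining followed by rule derivation, in the spirit of the framework described above. It is not the paper's implementation: the transaction encoding (discretized feature symbols such as "motion_high" plus a concept label per shot) and all thresholds are illustrative assumptions.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (frozensets) mapped to their absolute counts."""
    n = len(transactions)
    items = {item for t in transactions for item in t}
    current = {frozenset([i]) for i in items}  # candidate 1-itemsets
    frequent = {}
    k = 1
    while current:
        # Count how many transactions contain each candidate itemset.
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: cnt for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(survivors)
        # Join surviving k-itemsets to form candidate (k+1)-itemsets.
        keys = list(survivors)
        current = {a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1}
        k += 1
    return frequent

def classification_rules(frequent, concept, min_confidence, transactions):
    """Derive rules (feature itemset -> concept) meeting a confidence threshold."""
    rules = []
    for itemset, cnt in frequent.items():
        if concept in itemset and len(itemset) > 1:
            antecedent = itemset - {concept}
            ant_count = sum(1 for t in transactions if antecedent <= t)
            confidence = cnt / ant_count
            if confidence >= min_confidence:
                rules.append((antecedent, concept, confidence))
    return rules

# Hypothetical shot-level transactions: discretized features + concept label.
shots = [
    frozenset({"motion_high", "audio_loud", "sports"}),
    frozenset({"motion_high", "audio_loud", "sports"}),
    frozenset({"motion_high", "audio_quiet", "news"}),
    frozenset({"motion_low", "audio_quiet", "news"}),
]
freq = apriori(shots, min_support=0.5)
rules = classification_rules(freq, "sports", min_confidence=0.6, transactions=shots)
```

A new shot is then classified by matching its feature itemset against the mined rule antecedents, typically taking the concept of the highest-confidence matching rule.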