Due to the popularity and development of social networks and web video sites, we have witnessed an exponential growth in the volume of web videos over the last decade. This creates an urgent demand for efficiently grasping major events. However, insufficient and noisy text information makes it challenging to mine events based on initial keywords and visual features alone. In this paper, we propose an adaptive semantic association rule mining method at the NDK (Near-Duplicate Keyframe) level to enrich the keyword information and to remove words lacking semantic relationships. Moreover, both textual and visual information are employed for event classification, aiming to bridge the gap between NDKs and high-level semantic concepts. Experimental results on large-scale web videos from YouTube demonstrate that our proposed method achieves good performance and outperforms the selected baseline methods.
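To make the association-rule component concrete, the sketch below illustrates classic support/confidence rule mining over keyword sets, the general technique the proposed adaptive method builds on. It is a minimal illustration, not the paper's algorithm: the function name, thresholds, and sample keyword transactions are all hypothetical.

```python
from itertools import combinations
from collections import Counter

def mine_rules(transactions, min_support=0.5, min_confidence=0.6):
    """Mine single-antecedent association rules (A -> B) from keyword sets.

    A rule A -> B is kept when the pair {A, B} is frequent enough
    (support) and B co-occurs with A often enough (confidence).
    This is the textbook formulation, not the paper's adaptive variant.
    """
    n = len(transactions)
    item_counts = Counter()   # occurrences of each keyword
    pair_counts = Counter()   # co-occurrences of keyword pairs
    for t in transactions:
        items = set(t)
        for item in items:
            item_counts[item] += 1
        for a, b in combinations(sorted(items), 2):
            pair_counts[(a, b)] += 1

    rules = []
    for (a, b), c in pair_counts.items():
        support = c / n
        if support < min_support:
            continue
        # consider both rule directions for the frequent pair
        for ante, cons in ((a, b), (b, a)):
            confidence = c / item_counts[ante]
            if confidence >= min_confidence:
                rules.append((ante, cons, support, confidence))
    return rules

# Hypothetical keyword sets attached to four NDK groups:
transactions = [
    {"concert", "music"},
    {"concert", "music"},
    {"concert", "stage"},
    {"music", "band"},
]
rules = mine_rules(transactions)
```

With these sample transactions, the pair {"concert", "music"} appears in 2 of 4 transactions (support 0.5), and each direction has confidence 2/3, so both rules survive the default thresholds while rare pairs such as {"music", "band"} are filtered out.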