Analyses of existing public face datasets have shown that deep learning models (DLMs) grapple with racial and gender biases, raising concerns about algorithmic fairness in facial processing technologies (FPTs). Because these datasets are often composed of celebrities, politicians, and mainly white faces, greater reliance on more diverse face databases has been proposed. However, techniques for generating more representative datasets remain underdeveloped. To address this gap, we use the case of defendant mugshots from the criminal justice system of Miami-Dade County (Florida, U.S.) to develop a novel technique for generating multidimensional race-ethnicity classifications across four groups: Black Hispanic, White Hispanic, Black non-Hispanic, and White non-Hispanic. We perform a series of experiments, fine-tuning seven DLMs on a full sample of 194,393 mugshots with race-ethnicity annotations from court records and on a stratified random subsample of 13,927 mugshots annotated by a group of research assistants. Our methodology treats race as a multidimensional feature, which is particularly important for more diverse face datasets, and uses an averaged (consensus-based) approach to achieve 74.84% accuracy using annotated data representing only 2% of the full dataset. Our approach can help make DLM-based FPTs more inclusive of race and ethnicity subcategories as these technologies are increasingly adopted by organizations, including the criminal justice system.
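The abstract does not specify how the averaged (consensus-based) prediction is computed; one plausible reading is that the per-class probabilities of the fine-tuned models are averaged before taking the highest-scoring class. The sketch below illustrates that reading with made-up probabilities for three hypothetical models (the paper fine-tunes seven); the `CLASSES` ordering, the values in `model_probs`, and the function name `consensus_predict` are assumptions for illustration only.

```python
import numpy as np

# The four race-ethnicity classes used in the paper; the ordering here
# is an assumption made for this illustration.
CLASSES = ["Black Hispanic", "White Hispanic",
           "Black non-Hispanic", "White non-Hispanic"]

# Hypothetical softmax outputs for one mugshot from three fine-tuned
# models (rows); the values are illustrative, not from the paper.
model_probs = np.array([
    [0.10, 0.15, 0.60, 0.15],  # model 1
    [0.05, 0.20, 0.55, 0.20],  # model 2
    [0.15, 0.10, 0.50, 0.25],  # model 3
])

def consensus_predict(probs: np.ndarray) -> str:
    """Average per-model class probabilities across models and
    return the class with the highest mean probability."""
    mean_probs = probs.mean(axis=0)
    return CLASSES[int(mean_probs.argmax())]

print(consensus_predict(model_probs))  # -> Black non-Hispanic
```

Averaging probabilities (rather than majority-voting hard labels) lets models that are uncertain contribute proportionally less to the consensus, which is one common motivation for this style of ensembling.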