Driver Monitoring Systems (DMS) use an in-cabin camera and AI to track the movements of people inside the vehicle and predict driver alertness, protecting both the driver and others on the road. These cameras collect large amounts of data across all lighting conditions and occupant activities, and this data carries a wealth of insight into driver movement. Rather than the typical model-centric or data-centric approach to improving such AI, we propose a label-centric approach: labelling the camera data with inclusive AI constructs to produce a more expansive annotation of the same dataset. We used the DMD multi-modal dataset for driver monitoring scenarios, which comes with movement labels for eight driver actions: texting with the left or right hand, talking, drinking, making a phone call with the left or right hand, reaching to the side of the car, and combing hair. We developed a binary classification CNN model for in-cabin movement and tested it against a version trained with an inclusive-AI labeling scheme on the same dataset, in which the labels were expanded to also track hair flying, a scarf fluttering, hand waving, and rubbing eyes. The results showed that the inclusive AI construct improved model performance without any change to the model algorithm or tuning. We therefore recommend a label-centric approach to labeling data from camera streams such as those in autonomous vehicles: expanding the label construct to be inclusive of all people of all cultures, in all lighting, with all hair, dress, and actions, so that AI model performance improves by capturing more knowledge through being more inclusive of all humans and their actions inside the vehicle.
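The label expansion described above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: the concrete label names are assumptions, and only the action categories themselves come from the text. It shows how frames carrying the inclusive labels, which the original annotation scheme would leave as "no movement", become positive examples for the binary movement classifier.

```python
# Hypothetical label sets; the exact DMD annotation strings are assumptions.
ORIGINAL_ACTIONS = {
    "texting_left", "texting_right", "talking", "drinking",
    "phonecall_left", "phonecall_right", "reach_side", "hair_combing",
}

# Inclusive-AI construct: additional movement labels on the same frames.
INCLUSIVE_ACTIONS = {
    "hair_flying", "scarf_fluttering", "hand_waving", "rubbing_eyes",
}

def to_binary_target(label, inclusive=True):
    """Map a frame's action label to the binary movement class (1 = movement)."""
    actions = ORIGINAL_ACTIONS | INCLUSIVE_ACTIONS if inclusive else ORIGINAL_ACTIONS
    return int(label in actions)

# Under the original construct a fluttering scarf is not a labelled movement;
# under the inclusive construct the same frame is a positive example.
print(to_binary_target("scarf_fluttering", inclusive=False))  # → 0
print(to_binary_target("scarf_fluttering", inclusive=True))   # → 1
```

The same dataset thus yields more positive training signal without any change to the model itself, which is the sense in which the approach is label-centric rather than model-centric or data-centric.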