Knowing when an AI model is unsure about its predictions allows decision-makers to reject those predictions and avoid costly mistakes. In general, one expects a prediction model's performance to improve at the cost of reduced coverage when a reject option is made available (i.e., the model predicts on fewer samples). However, such gains may not be shared by all subpopulations of the data and may even harm some of them. In this talk, we will cover recent developments in trustworthy uncertainty quantification, along with techniques for making selective classification [1] and regression [2] effective for everyone. We will also discuss the use of generative models to inform decision-makers about the regions where AI confidence is high or low. Using the open-source toolkit UQ360 (https://github.com/IBM/UQ360), we will examine a few sample cases in depth to demonstrate how uncertainty relates to other principles of trustworthy AI.
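
To make the coverage/accuracy trade-off behind the reject option concrete, the minimal sketch below uses a plain scikit-learn classifier with a confidence threshold; it is not the UQ360 API, and the synthetic dataset, classifier, and threshold values are illustrative assumptions.

```python
# Illustrative sketch of selective classification via a confidence threshold.
# Raising the threshold rejects more low-confidence samples (lower coverage),
# which typically raises accuracy on the samples that are kept.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)
conf = proba.max(axis=1)        # model confidence per sample
pred = proba.argmax(axis=1)

for tau in [0.0, 0.7, 0.9]:     # reject samples with confidence below tau
    keep = conf >= tau
    coverage = keep.mean()
    accuracy = (pred[keep] == y_te[keep]).mean() if keep.any() else float("nan")
    print(f"threshold={tau:.1f}  coverage={coverage:.2f}  accuracy={accuracy:.3f}")
```

Evaluating the same thresholds separately on each subpopulation shows whether the coverage and accuracy gains are shared across groups, which is exactly the concern the talk addresses.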