Speaker:
Prof. Nguyen Tien Zung (Torus AI & University Paul Sabatier, Toulouse - France)
Time: 2:00 PM, Thursday, May 7, 2026
Venue: Room 507, Building A6
Abstract: In this talk, I will cover the following topics:
- Distorted probabilities: why it is better for an AI to display distorted rather than strictly mathematical probabilities
- Natural loss functions: why the square loss and the log loss (cross entropy) are the two most natural loss functions, why in practice it is often better to use other convex, focal loss functions, and why all convex loss functions are in a sense equivalent up to a reparametrization of probabilities
- Convexity of the ROC curve
- How big is big data? Whatever amount of data you have, it is still quite small, and it is more important to have "dense" data than "big" data.
- The "unknown" class: why it's important to have it in any classifier
- Hierarchical classification: why hierarchy matters
- Exclusive vs inclusive classification
- Partial annotation: data are often only partially annotated; how to deal with this problem?
- Few-shot and zero-shot learning: how to recognize things that one has never seen before?
- Cross validation done right
- How to learn very rare classes?
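To make the loss-function topic concrete, here is a minimal NumPy sketch (an illustration, not material from the talk) of the three losses mentioned in the abstract: the square loss, the log loss (cross entropy), and the focal loss in the form popularized by Lin et al., which down-weights easy, confidently classified examples by a factor (1 - p_t)^gamma. The function names and the gamma default are my own choices.

```python
import numpy as np

def square_loss(p, y):
    # Square (Brier-style) loss on predicted probability p for binary label y.
    return (p - y) ** 2

def log_loss(p, y, eps=1e-12):
    # Cross entropy; clip p away from 0 and 1 to avoid log(0).
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def focal_loss(p, y, gamma=2.0, eps=1e-12):
    # Focal loss: scale the cross entropy of each example by (1 - p_t)^gamma,
    # where p_t is the probability assigned to the true class. With gamma = 0
    # this reduces exactly to the log loss.
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)
    return -((1 - pt) ** gamma) * np.log(pt)
```

For a confident correct prediction (say p = 0.9, y = 1), the focal loss is much smaller than the log loss, which is the point: training signal is concentrated on hard examples.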
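The "cross validation done right" and "very rare classes" topics are related: with naive random folds, a rare class can end up entirely absent from some folds. One standard remedy is stratified fold assignment; the sketch below (my own illustration, not the speaker's method) shuffles indices within each class and deals them round-robin into k folds, so each fold preserves the class proportions as closely as possible.

```python
import numpy as np

def stratified_kfold_indices(y, k=5, seed=0):
    # Assign each sample to one of k folds while preserving class proportions:
    # shuffle the indices of each class, then deal them round-robin into folds.
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    folds = np.empty(len(y), dtype=int)
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)
        rng.shuffle(idx)
        folds[idx] = np.arange(len(idx)) % k
    return folds
```

For example, with 50 majority samples and 10 rare-class samples, every one of the 5 folds receives exactly 2 rare-class samples, whereas a purely random split could leave a fold with none.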