wp.sigmod.org

iclr-blogposts.github.io
The topic of fairness in AI has garnered increasing attention over the last year, most recently with the arrival of the EU's AI Act. Fairness in AI is usually pursued in one of two ways: through counterfactual fairness or through group fairness. These research strands originate from vastly different ideologies. However, with the use of causal graphs, it is possible to show that they are related, and even that satisfying a group fairness measure means satisfying counterfactual fairness.
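
The post makes its argument through causal graphs; as a point of reference for the group-fairness side, here is a minimal sketch (function name and data are illustrative, not from the post) of one common group fairness measure, the demographic parity gap:

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Group fairness: absolute difference in positive-prediction rates
    across groups; 0 means demographic parity holds exactly."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Toy example: predictions for two protected groups (hypothetical data).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.75 vs 0.25 -> gap 0.5
```

Counterfactual fairness, by contrast, asks whether the prediction for an individual would change under an intervention on the protected attribute in the causal graph; the post's claim is that the two notions can be connected.
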
modeling-languages.com
We present a novel robust hashing mechanism for models. Robust hashing algorithms (i.e., hashing algorithms that generate similar outputs from similar input data) are useful as a key building block in intellectual property protection, authenticity assessment, and fast comparison and retrieval solutions.
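
The post's concrete mechanism is its own; purely as an illustration of the "similar inputs, similar outputs" property, here is a SimHash-style sketch over a flattened model weight vector (names, sizes, and the use of SimHash are our assumptions, not the paper's method):

```python
import numpy as np

def robust_hash(params: np.ndarray, n_bits: int = 64, seed: int = 0) -> np.ndarray:
    """SimHash-style robust hash: project the flattened parameter vector
    onto random hyperplanes and keep the sign bits. Nearby vectors share
    most bits, so Hamming distance tracks parameter similarity."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, params.size))
    return (planes @ params.ravel() > 0).astype(np.uint8)

def hamming_distance(h1: np.ndarray, h2: np.ndarray) -> int:
    return int(np.sum(h1 != h2))

# A model and a lightly perturbed copy hash to nearby signatures,
# while an unrelated model lands far away in Hamming space.
weights   = np.random.default_rng(1).standard_normal(10_000)
tampered  = weights + 0.01 * np.random.default_rng(2).standard_normal(10_000)
unrelated = np.random.default_rng(3).standard_normal(10_000)

print(hamming_distance(robust_hash(weights), robust_hash(tampered)))   # small
print(hamming_distance(robust_hash(weights), robust_hash(unrelated)))  # ~32 of 64
```

A cryptographic hash would flip roughly half its bits under the 1% perturbation above; the robustness property is exactly what makes this kind of hash usable for near-duplicate model retrieval.
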
www.unite.ai
Some machine learning models belong to either the generative or the discriminative category. But what is the difference between these two categories of models? What does it mean for a model to be discriminative or generative? The short answer is that generative models are those that model the distribution of the data set itself, returning a […]
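
A minimal scikit-learn sketch of the distinction, on a synthetic dataset (the dataset and model pairing are our illustration): Gaussian Naive Bayes fits the class-conditional distribution of the features, P(x | y), and so is generative, while logistic regression fits only P(y | x) and so is discriminative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Synthetic two-class data (stand-in for any real dataset).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Generative: GaussianNB models P(x | y) and P(y), then applies Bayes' rule.
gen = GaussianNB().fit(X_train, y_train)

# Discriminative: logistic regression models P(y | x) directly.
disc = LogisticRegression().fit(X_train, y_train)

print("generative accuracy:    ", gen.score(X_test, y_test))
print("discriminative accuracy:", disc.score(X_test, y_test))

# Only the generative model can sample new feature vectors: draw from the
# fitted per-class Gaussians (a capability the discriminative model lacks).
cls = 0
sample = np.random.default_rng(0).normal(gen.theta_[cls], np.sqrt(gen.var_[cls]))
print("sampled x for class 0:", sample)
```

Both models classify, but only the generative one carries enough of the data distribution to generate plausible new examples, which is the crux of the distinction the post explains.
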
www.kdnuggets.com
Looking to integrate ChatGPT into your data science workflow? Here's an example along with tips and best practices to get the most out of ChatGPT for data science.
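
As a hedged sketch of what such an integration can look like (the model name, prompt, and task are illustrative assumptions, not taken from the post), using the official openai Python client:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative prompt: ask the model to draft an EDA plan for a dataset.
prompt = (
    "I have a CSV of customer churn data with columns: tenure, "
    "monthly_charges, contract_type, churned. Suggest an exploratory "
    "data analysis plan and the pandas code to start it."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a helpful data science assistant."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```
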