
LCE: The Most Powerful Machine Learning Method? | by Kevin Fauvel, PhD, CFA, CAIA | Nov, 2022



Introducing the new state-of-the-art tree-based method and its potential further advancements

Photo by Petr Slováček on Unsplash.

As shown in “Why Do Tree-Based Models Still Outperform Deep Learning on Typical Tabular Data?” [Grinsztajn et al., 2022], the widely used tree-based models remain the state-of-the-art machine learning methods in many cases. The recently proposed Local Cascade Ensemble (LCE) [Fauvel et al., 2022] combines the strengths of the top-performing tree-based ensemble methods, Random Forest [Breiman, 2001] and eXtreme Gradient Boosting (XGBoost) [Chen and Guestrin, 2016], and integrates a supplementary diversification approach that enables it to generalize better.

This article first introduces LCE and then compares its performance to that of Random Forest and XGBoost on different public datasets. Then, based on user feedback (tens of thousands of package downloads in the first few months after release), this article presents some ideas to further enhance the capabilities of LCE and possibly build the most powerful machine learning method. Finally, it explains how one can participate in the advancement of LCE.

Random Forest and XGBoost rely on bagging and boosting, respectively. Bagging mainly reduces variance, while boosting mainly reduces bias, which makes the two techniques complementary in addressing the bias-variance trade-off faced by machine learning models. LCE therefore combines bagging and boosting, and learns different parts of the training data with a divide-and-conquer strategy to capture relationships that could not otherwise be discovered globally. For more detailed information about LCE, please refer to my article “Random Forest or XGBoost? It is Time to Explore LCE”.
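The combination above can be sketched with plain scikit-learn building blocks; this is only a rough analogue of the idea, not LCE itself, which additionally applies its divide-and-conquer strategy inside each tree:

```python
# Rough analogue of combining bagging and boosting (not LCE itself):
# wrap a boosting model, whose main effect is bias reduction, inside a
# bagging ensemble, whose main effect is variance reduction.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Each bagged member is itself a boosted ensemble, fit on a bootstrap sample.
model = BaggingClassifier(
    GradientBoostingClassifier(n_estimators=50, random_state=0),
    n_estimators=5,
    random_state=0,
)
scores = cross_val_score(model, X, y, cv=3)
print(f"CV accuracy: {scores.mean():.3f}")
```
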

The following subsections detail the methodology employed and the results obtained using the public implementations of the aforementioned algorithms. To keep this article around the average length on Medium, the comparison focuses on the task of classification. Future work could explore the comparison on the regression task.

Evaluation Setting

Datasets The comparison has been made based on 10 public datasets with different characteristics from the UCI Machine Learning Repository [Dua and Graff, 2017]: Avila [De Stefano et al., 2018], Breast Cancer [Bennett and Mangasarian, 1992], Iris [Fisher, 1936], MAGIC Gamma Telescope [Dvořák and Savický, 2007], Nursery [Zupan et al., 1997], Shill Bidding [Alzahrani and Sadaoui, 2020], Shoppers Purchasing Intention [Sakar et al., 2018], Steel Plates Faults [Buscema et al., 2010], Wine [Aeberhard et al., 1992], Wireless Indoor Localization [Rohra et al., 2017].

Table 1 contains the datasets and their descriptions. The datasets were randomly selected among the small and medium-sized categories, where tree-based ensemble methods usually outperform other approaches.

Implementations The following packages have been used on Python 3.10:

  • Local Cascade Ensemble (LCE): package lcensemble in version 0.3.2
  • Random Forest (RF): package scikit-learn in version 1.1.2
  • XGBoost (XGB): package xgboost in version 1.6.2

These implementations are all compatible with scikit-learn, which allows a single pipeline to be used for a consistent comparison (see code below).

Hyperparameter Optimization The classical tree-ensemble hyperparameters (max_depth, n_estimators) are set for all models by grid search with 3-fold cross-validation on the training set. Grid search is performed with the model selection tool GridSearchCV from scikit-learn.

The ranges of values for max_depth and n_estimators are the ones usually adopted for these methods. For a fair comparison, the range of values for max_depth used by LCE for XGBoost (xgb_max_depth) is the same as the one used to evaluate XGBoost ([3, 6, 9], see LCE documentation).

These classical hyperparameters have been chosen for this general comparison. In case of a specific application, it could be interesting to consider another set of hyperparameters. Please refer to the documentation of each method for more information.

Metrics For each model, the accuracy on the test set of each dataset is reported. The average rank and number of wins/ties over all datasets are also shown as summary statistics.

Code The Python code used for the comparison is as follows (formatted with black):
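A minimal sketch of such a pipeline (Random Forest is shown so the example is self-contained; the LCEClassifier from lcensemble and XGBClassifier from xgboost follow the same scikit-learn API and would slot in identically):

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Classical tree-ensemble hyperparameters, tuned by 3-fold
# cross-validation on the training set, as described above.
param_grid = {"max_depth": [3, 6, 9], "n_estimators": [100, 200]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=3)
search.fit(X_train, y_train)

accuracy = accuracy_score(y_test, search.predict(X_test))
print(f"Test accuracy: {accuracy:.3f}")
```
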

Results

Table 1 presents the results obtained based on the methodology previously introduced. The best accuracy for each dataset is denoted in boldface.

Table 1: Accuracy results on 10 datasets from the UCI repository [Dua and Graff, 2017]. Datasets are sorted in ascending order of their sizes. Abbreviations: Class. — number of classes, Dims. — number of dimensions, Sampl. — number of samples.

First, we observe that LCE is a better generalizing predictor as it obtains the best average rank across all datasets (LCE: 1.0, Random Forest: 2.1, XGBoost: 2.0).

Then, we can see that, by combining the strengths of Random Forest and XGBoost while adopting a supplementary diversification approach, LCE outperforms both methods on 4 of the 10 datasets, across both dataset categories (small and medium-sized). On the remaining datasets, LCE matches the prediction performance of the better of Random Forest and XGBoost (3 wins/ties with Random Forest and 4 wins/ties with XGBoost). Therefore, LCE's design allows it to keep the best of both Random Forest and XGBoost across the datasets tested, while its supplementary diversification approach can enable it to outperform both.

As previously introduced, LCE is a high-performing, scalable, and user-friendly machine learning method for the general tasks of classification and regression. In particular, LCE:

  • Enhances the prediction performance of Random Forest and XGBoost by combining their strengths and adopting a complementary diversification approach.
  • Supports parallel processing to ensure scalability.
  • Handles missing data by design.
  • Adopts the scikit-learn API for ease of use.
  • Adheres to scikit-learn conventions to allow interaction with scikit-learn pipelines and model selection tools.
  • Is released in open source and commercially usable — Apache 2.0 license.

However, LCE's capabilities can be further enhanced, and some developments are needed to respond to a variety of user needs.

Next Steps

Based on the first round of user feedback and my own experience developing LCE, it would be valuable for the community to add the following features to LCE:

Modeling flexibility:

  • Base method: LCE uses XGBoost as the base method in each node of a tree. It would be beneficial to let end-users choose the base method through an LCE parameter according to their specific applications, e.g., another boosting method like LightGBM [Ke et al., 2017] or any other machine learning method whose main effect is bias reduction.
  • Loss functions: LCE uses the standard loss functions available in scikit-learn (e.g., cross-entropy, MSE). These standard functions do not fit all user-specific applications, so it would be valuable to let end-users supply a custom loss function (for more information, please refer to the ongoing discussions in the scikit-learn GitHub issue).
  • Multi-task learning: LCE currently adopts traditional single-task learning. However, in many applications (e.g., medical risk evaluation, financial forecasting of multiple indicators), it is desirable to solve multiple related machine learning tasks jointly to enhance generalization performance. Recent work [Ibrahim et al., 2022] addresses this challenge by extending differentiable trees with a regularizer that allows soft parameter sharing across tasks, which could be an interesting option to consider.
  • Streaming: to meet the needs of emerging applications that process massive volumes of evolving data streams (e.g., cyber security, energy management), it would be valuable to develop a streaming version of LCE with strict memory requirements and fast processing times. The adaptive approach of Gomes et al. [2017] could be an interesting starting point in this direction.
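The custom-loss idea above can be sketched in the form gradient-boosting libraries such as XGBoost expect for a custom objective: a callable returning the gradient and Hessian of the loss with respect to the raw predictions. The pseudo-Huber loss below is illustrative and not part of LCE:

```python
import numpy as np

def pseudo_huber(preds, labels, delta=1.0):
    """Gradient and Hessian of the pseudo-Huber loss w.r.t. the predictions.

    Matches the (grad, hess) return shape that libraries such as XGBoost
    accept for a custom objective.
    """
    r = preds - labels
    scale = np.sqrt(1.0 + (r / delta) ** 2)
    grad = r / scale        # first derivative of the loss
    hess = 1.0 / scale**3   # second derivative; always positive, keeping splits stable
    return grad, hess

g, h = pseudo_huber(np.array([0.5, -2.0]), np.zeros(2))
```
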

Faithful Explainability-by-Design: current explainability methods for ensembles are post-hoc methods (e.g., SHAP [Lundberg et al., 2017], Anchors [Ribeiro et al., 2018]), which cannot provide faithful explanations [Rudin, 2019]. Faithfulness is critical as it corresponds to the level of trust an end-user can have in the explanations of model predictions, i.e., the level of relatedness of the explanations to what the model actually computes. Faithfulness has been highlighted by regulatory bodies as a pillar for accountability, responsibility, and transparency of processes including AI components [European Parliament and Council, 2021]. Therefore, it would be valuable to integrate directly in the design of LCE a mechanism that would allow the extraction of explanations supporting its predictions.

Scalability: computational time at both training and inference stages is key for many applications, therefore it would be valuable to:

  • CPU: accelerate current CPU implementation.
  • GPU: add GPU support (e.g., CUDA).
  • Distributed on cloud: add support for distributed training on multiple machines (e.g., AWS, GCE, Azure, and Yarn clusters) and the integration with cloud dataflow systems (e.g., Flink and Spark).

Programming languages: LCE is currently implemented in Python. To fit different user needs, it would be valuable to implement it in other languages (e.g., C++, R, Java, Julia, Scala).

Tutorials: to help more people discover LCE, it would be valuable to design tutorials for various media outlets, targeting different audiences.

How to participate?

There are multiple ways to participate in LCE, whether through school projects for students, research questions for academics, or performance maximization through collaboration for professionals:

  • Add a star to the LCE GitHub repository. It may seem insignificant, but it is key to LCE's referencing and visibility.
  • Answer queries on the issue tracker, investigate bugs, and review other developers’ pull requests to make sure the existing version runs as expected.
  • Extend LCE's capabilities by adding new features like the ones presented in the previous section.

For organizations, it is possible to sponsor the project to cover the expenses needed to develop LCE to the highest standards (e.g., professional services like a robust continuous integration infrastructure, workshop expenses).

This article presents the new tree-based machine learning method LCE and shows that it obtains the best performance on a public benchmark when compared to the current top-performing methods. It then introduces key directions for participation, to further advance LCE and realize its full potential. If you have any questions, you can contact me here.

S. Aeberhard, D. Coomans and O. de Vel. Comparison of Classifiers in High Dimensional Settings. Tech. Rep. no. 92–02, 1992. (License: CC BY 4.0)

A. Alzahrani and S. Sadaoui. Clustering and Labeling Auction Fraud Data. In Data Management, Analytics and Innovation, 269–283, 2020. (License: CC0)

K. Bennett and O. Mangasarian. Robust Linear Programming Discrimination of Two Linearly Inseparable Sets. Optimization Methods and Software, 1: 23–34, 1992. (License: CC0)

L. Breiman. Random Forests. Machine Learning, 45(1):5–32, 2001.

M. Buscema, S. Terzi and W. Tastle. A New Meta-Classifier. Annual Meeting of the North American Fuzzy Information Processing Society, 1–7, 2010. (License: PDDL)

T. Chen and C. Guestrin. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.

D. Dua and C. Graff. UCI Machine Learning Repository, 2017.

J. Dvořák and P. Savický. Softening Splits in Decision Trees Using Simulated Annealing. Proceedings of ICANNGA, 2007. (License: ODbL)

K. Fauvel, E. Fromont, V. Masson, P. Faverdin and A. Termier. XEM: An Explainable-by-Design Ensemble Method for Multivariate Time Series Classification. Data Mining and Knowledge Discovery, 36(3):917–957, 2022.

R. Fisher. The Use of Multiple Measurements in Taxonomic Problems. Annals of Eugenics, 7: 179–188, 1936. (License: CC0)

H. Gomes, A. Bifet, J. Read, J. Barddal, F. Enembreck, B. Pfahringer, G. Holmes and T. Abdessalem. Adaptive Random Forests for Evolving Data Stream Classification. Machine Learning, 106: 1469–1495, 2017.

L. Grinsztajn, E. Oyallon and G. Varoquaux. Why Do Tree-Based Models still Outperform Deep Learning on Typical Tabular Data? In Proceedings of the 36th Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022.

S. Ibrahim, H. Hussein and R. Mazumder. Flexible Modeling and Multitask Learning Using Differentiable Tree Ensembles. In Proceedings of the 28th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2022.

G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye and T. Liu. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Proceedings of the 31st Conference on Neural Information Processing Systems, 2017.

European Parliament and Council. Artificial Intelligence Act. European Union Law, 2021.

S. Lundberg and S. Lee. A Unified Approach to Interpreting Model Predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017.

M. Ribeiro, S. Singh and C. Guestrin. Anchors: High-Precision Model-Agnostic Explanations. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 2018.

J. Rohra, B. Perumal, S. Narayanan, P. Thakur and R. Bhatt. User Localization in an Indoor Environment Using Fuzzy Hybrid of Particle Swarm Optimization and Gravitational Search Algorithm with Neural Networks. In Proceedings of 6th International Conference on Soft Computing for Problem Solving, 2017. (License: CC BY 4.0)

C. Rudin. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 1: 206–215, 2019.

C. Sakar, S. Polat, M. Katircioglu and Y. Kastro. Real-Time Prediction of Online Shoppers’ Purchasing Intention Using Multilayer Perceptron and LSTM Recurrent Neural Networks. Neural Computing and Applications, 6893–6908, 2018. (License: CC0)

C. de Stefano, M. Maniaci, F. Fontanella and A. Scotto di Freca. Reliable Writer Identification in Medieval Manuscripts through Page Layout Features: The ‘Avila’ Bible Case. Engineering Applications of Artificial Intelligence, 72: 99–110, 2018. (License: CC0)

B. Zupan, M. Bohanec, I. Bratko and J. Demsar. Machine Learning by Function Decomposition. In Proceedings of the 14th International Conference on Machine Learning, 1997. (License: CC0)

