URL: https://github.com/topics/bandit-algorithms

bandit-algorithms · GitHub Topics · GitHub



bandit-algorithms

Here are 102 public repositories matching this topic...

SMPyBandits

🔬 Research framework for single- and multi-player 🎰 multi-armed bandit (MAB) algorithms, implementing state-of-the-art algorithms for the single-player setting (UCB, KL-UCB, Thompson Sampling, ...) and the multi-player setting (MusicalChair, MEGA, rhoRand, MCTopM/RandTopM, etc.). Available on PyPI: https://pypi.org/project/SMPyBandits/ with documentation available online.

  • Jupyter Notebook
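Neither repository's code is shown on this page, but UCB, listed above, is the canonical index policy. A minimal standard-library sketch of the classic UCB1 rule (pull each arm once, then pick the arm maximizing empirical mean plus a `sqrt(2 ln t / n_pulls)` bonus); the function names and the Bernoulli test arms are illustrative, not taken from either library:

```python
import math
import random

def ucb1(rewards_fn, n_arms, horizon, seed=0):
    """Run the UCB1 policy for `horizon` rounds and return pull counts.

    rewards_fn(arm, rng) -> float reward in [0, 1].
    """
    rng = random.Random(seed)
    counts = [0] * n_arms       # pulls per arm
    sums = [0.0] * n_arms       # cumulative reward per arm
    for t in range(horizon):
        if t < n_arms:
            arm = t             # initialization: pull each arm once
        else:
            # empirical mean + exploration bonus, larger for rarely pulled arms
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t + 1) / counts[a]))
        counts[arm] += 1
        sums[arm] += rewards_fn(arm, rng)
    return counts

# Illustrative Bernoulli arms with means 0.2, 0.5, 0.8; arm 2 is optimal.
def bernoulli(arm, rng, means=(0.2, 0.5, 0.8)):
    return 1.0 if rng.random() < means[arm] else 0.0

counts = ucb1(bernoulli, n_arms=3, horizon=2000)
```

Over 2000 rounds the optimal arm accumulates the large majority of pulls, while suboptimal arms are pulled only logarithmically often.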

A comprehensive Python library implementing a variety of contextual and non-contextual multi-armed bandit algorithms, including LinUCB, Epsilon-Greedy, Upper Confidence Bound (UCB), Thompson Sampling, KernelUCB, NeuralLinearBandit, and DecisionTreeBandit, designed for reinforcement learning applications.

  • Python
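Thompson Sampling, named in both descriptions above, takes a Bayesian approach instead of a confidence bonus. A minimal sketch for Bernoulli rewards with Beta(1, 1) priors, using only the standard library; the function name and test arms are illustrative and not drawn from this repository:

```python
import random

def thompson_bernoulli(rewards_fn, n_arms, horizon, seed=0):
    """Thompson Sampling for Bernoulli bandits: each round, sample a mean
    from every arm's Beta posterior and pull the arm with the largest sample.

    rewards_fn(arm, rng) -> bool (success / failure).
    """
    rng = random.Random(seed)
    alphas = [1] * n_arms   # Beta posterior successes + 1
    betas = [1] * n_arms    # Beta posterior failures + 1
    counts = [0] * n_arms
    for _ in range(horizon):
        # One posterior draw per arm; randomness drives the exploration.
        samples = [rng.betavariate(alphas[a], betas[a]) for a in range(n_arms)]
        arm = samples.index(max(samples))
        if rewards_fn(arm, rng):
            alphas[arm] += 1
        else:
            betas[arm] += 1
        counts[arm] += 1
    return counts

# Illustrative Bernoulli arms with means 0.2 and 0.8; arm 1 is optimal.
counts = thompson_bernoulli(lambda arm, rng: rng.random() < (0.2, 0.8)[arm],
                            n_arms=2, horizon=2000)
```

Because the posterior of the better arm concentrates quickly, the suboptimal arm's sampled mean rarely wins after the first few hundred rounds.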
