Compare commits

..

6 Commits

Author SHA1 Message Date
Luke Rogers 16210639bc Fixed another issue. 2013-02-07 19:50:00 +13:00
Luke Rogers 8d8260a2f7 Edited Snowy Evening plugin. 2013-02-07 11:50:33 +13:00
Luke Rogers 971e8e4066 Merge branch 'develop' of https://github.com/nasonfish/CloudBot into feature/snowyevening 2013-02-07 11:19:16 +13:00
nasonfish 64974dc355 more work on the snowy plugin. 2013-01-31 08:02:27 -07:00
nasonfish 560f2e7516 cleanup, don't raise the error if you catch it 2013-01-28 00:09:59 -07:00
nasonfish b648706bfd Added a plugin to get data from snowy-evening.com with a regex when a link is sent. 2013-01-28 00:01:58 -07:00
241 changed files with 5032 additions and 13024 deletions


@@ -1,18 +0,0 @@
# CloudBot editor configuration normalization
# Copied from Drupal (GPL)
# @see http://editorconfig.org/
# This is the top-most .editorconfig file; do not search in parent directories.
root = true
# All files.
[*]
end_of_line = LF
indent_style = space
indent_size = 4
# Not in the spec yet:
# @see https://github.com/editorconfig/editorconfig/wiki/EditorConfig-Properties
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

6
.gitignore vendored Normal file → Executable file

@@ -1,6 +1,5 @@
persist
config
config.ssl
gitflow
*.db
*.log
@@ -8,8 +7,7 @@ gitflow
*.pyc
*.orig
.project
.pydevproject
.geany
*.sublime-project
*.sublime-workspace
.idea/
plugins/data/GeoLiteCity.dat
*.sublime-workspace


@@ -1,55 +0,0 @@
# How to contribute
I'd like to encourage you to contribute to the repository.
This should be as easy as possible for you, but there are a few things to consider when contributing.
The following guidelines for contribution should be followed if you want to submit a pull request.
## TL;DR
* Read [Github documentation](http://help.github.com/) and [Pull Request documentation](http://help.github.com/send-pull-requests/)
* Fork the repository
* Edit the files, add new files
* Check the files with [`pep8`](https://pypi.python.org/pypi/pep8), fix any reported errors
* Check that the files work as expected in CloudBot
* Create a new branch with a descriptive name for your feature (optional)
* Commit changes, push to your fork on GitHub
* Create a new pull request, provide a short summary of changes in the title line, with more information in the description field.
* After submitting the pull request, join the IRC channel (irc.esper.net #cloudbot) and paste a link to the pull request so people are aware of it
* After discussion, your pull request will be accepted or rejected.
## How to prepare
* You need a [GitHub account](https://github.com/signup/free)
* Submit an [issue ticket](https://github.com/ClouDev/CloudBot/issues) for your issue if there is none yet.
* Describe the issue and include steps to reproduce if it's a bug.
* Ensure to mention the earliest version that you know is affected.
* If you are able and want to fix this, fork the repository on GitHub
## Make Changes
* In your forked repository, create a topic branch for your upcoming patch. (e.g. `feature--autoplay` or `bugfix--ios-crash`)
* Usually this is based on the develop branch.
* Create a branch based on master; `git branch
fix/develop/my_contribution develop` then checkout the new branch with `git
checkout fix/develop/my_contribution`. Please avoid working directly on the `develop` branch.
* Make sure you stick to the coding style that is used already.
* Make use of the [`.editorconfig`](http://editorconfig.org/) file.
* Make commits of logical units and describe them properly.
* Check for unnecessary whitespace with `git diff --check` before committing.
* Check your changes with [`pep8`](https://pypi.python.org/pypi/pep8). Ignore messages about line length.
## Submit Changes
* Push your changes to a topic branch in your fork of the repository.
* Open a pull request to the original repository and choose the right original branch you want to patch.
_Advanced users may use the [`hub`](https://github.com/defunkt/hub#git-pull-request) gem for that._
* If you have not referenced your issue in the commit messages (which you really should do), please reference and update it with your code changes. But _please do not close the issue yourself_.
_Notice: You can [turn your previously filed issues into a pull-request here](http://issue2pr.herokuapp.com/)._
* Even if you have write access to the repository, do not directly push or merge pull-requests. Let another team member review your pull request and approve.
# Additional Resources
* [General GitHub documentation](http://help.github.com/)
* [GitHub pull request documentation](http://help.github.com/send-pull-requests/)
* [Read the Issue Guidelines by @necolas](https://github.com/necolas/issue-guidelines/blob/master/CONTRIBUTING.md) for more details
* [This CONTRIBUTING.md from here](https://github.com/anselmh/CONTRIBUTING.md)


@@ -1,34 +0,0 @@
Thanks to everyone who has contributed to CloudBot! Come into IRC and ping me if I forgot anyone.
Luke Rogers (lukeroge)
Neersighted
blha303
cybojenix
KsaRedFx
nathanblaney
thenoodle68
nasonfish
urbels
puffrfish
Sepero
TheFiZi
mikeleigh
Spudstabber
frozenMC
frdmn
We are using code from the following projects:
./plugins/mlia.py - https://github.com/infinitylabs/UguuBot
./plugins/horoscope.py - https://github.com/infinitylabs/UguuBot
color section in ./plugins/utility.py - https://github.com/hitzler/homero
Special Thanks:
Rmmh (created skybot!)
lahwran (for his advice and stuff I stole from his skybot fork!)
TheNoodle (for helping with some plugins when I was first starting out)
If any of your code is in here and you don't have credit, I'm sorry. I didn't keep track of a lot of code I added in the early days of the project.
You are all awesome :)

1
DOCUMENTATION Executable file

@@ -0,0 +1 @@
Please see the wiki @ http://git.io/cloudbotircwiki

0
LICENSE Normal file → Executable file

89
README.md Normal file → Executable file

@@ -1,12 +1,25 @@
# CloudBot
# CloudBot/DEV
## About
CloudBot is a Python IRC bot based on [Skybot](http://git.io/skybot) by [rmmh](http://git.io/rmmh).
CloudBot is a Python IRC bot very heavily based on [Skybot](http://git.io/skybot) by [rmmh](http://git.io/rmmh).
### Goals
* Easy to use wrapper
* Intuitive configuration
* Fully controlled from IRC
* Fully compatible with existing skybot plugins
* Easily extendable
* Thorough documentation
* Cross-platform
* Multi-threaded, efficient
* Automatic reloading
* Little boilerplate
## Getting and using CloudBot
### Download
### Download
Get CloudBot at [https://github.com/ClouDev/CloudBot/zipball/develop](https://github.com/ClouDev/CloudBot/zipball/develop "Get CloudBot from Github!").
@@ -14,33 +27,51 @@ Unzip the resulting file, and continue to read this document.
### Install
Before you can run the bot, you need to install a few Python dependencies. LXML is required while Enchant and PyDNS are needed for several plugins.
These can be installed with `pip` (The Python package manager):
Before you can run the bot, you need to install a few Python dependencies. These can be installed with `pip` (The Python package manager):
[sudo] pip install -r requirements.txt
If you use `pip`, you will also need the following packages on Linux or `pip` will fail to install the requirements.
```python, python-dev, libenchant-dev, libenchant1c2a, libxslt-dev, libxml2-dev.```
#### How to install `pip`
curl -O http://python-distribute.org/distribute_setup.py # or download with your browser on windows
python distribute_setup.py
easy_install pip
If you are unable to use pip, there are Windows installers for LXML available for [64 bit](https://pypi.python.org/packages/2.7/l/lxml/lxml-2.3.win-amd64-py2.7.exe) and [32 bit](https://pypi.python.org/packages/2.7/l/lxml/lxml-2.3.win32-py2.7.exe) versions of Python.
### Run
Before you run the bot, rename `config.default` to `config` and edit it with your preferred settings.
Once you have installed the required dependencies, there are two ways you can run the bot:
Once you have installed the required dependencies and renamed the config file, you can run the bot! Make sure you are in the correct folder and run the following command:
#### Launcher
**Note:** Due to some issues with the launcher we recommend you run the bot manually as detailed below.
The launcher will start the bot as a background process, and allow the bot to close and restart itself. This is only supported on unix-like machines (not Windows).
For the launcher to work properly, install `screen`, or `daemon` (daemon is recommended):
`apt-get install screen`
`apt-get install daemon`
Once you have installed either `screen` or `daemon`, run the start command:
`./cloudbot start`
It will generate a default config for you. Once you have edited the config, run it again with the same command:
`./cloudbot start`
This will start up your bot as a background process. To stop it, use `./cloudbot stop`. (Config docs at the [wiki](http://git.io/cloudbotircconfig))
#### Manually
To manually run the bot and get console output, run it with:
`python bot.py`
On Windows you can usually just double-click `bot.py` to start the bot, as long as you have Python installed correctly.
On Windows you can usually just double-click the `bot.py` file to start the bot, as long as you have Python installed correctly.
(note: running the bot without the launcher breaks the start and restart commands)
## Getting help with CloudBot
@@ -52,8 +83,6 @@ To write your own plugins, visit the [Plugin Wiki Page](http://git.io/cloudbotir
More at the [Wiki Main Page](http://git.io/cloudbotircwiki).
(some of the information on the wiki is outdated and needs to be rewritten)
### Support
The developers reside in [#CloudBot](irc://irc.esper.net/cloudbot) on [EsperNet](http://esper.net) and would be glad to help you.
@@ -62,25 +91,31 @@ If you think you have found a bug/have a idea/suggestion, please **open a issue*
### Requirements
CloudBot runs on **Python** *2.7.x*. It is currently developed on **Windows** *8* with **Python** *2.7.5*.
CloudBot runs on **Python** *2.7.x*. It is developed on **Ubuntu** *12.04* with **Python** *2.7.3*.
It **requires the Python module** `lxml`.
The module `Enchant` is needed for the spellcheck plugin.
The module `PyDNS` is needed for SRV record lookup in the mcping plugin.
It **requires the Python module** `lxml`, and `Enchant` is needed for the spellcheck plugin.
**Windows** users: Windows compatibility with some plugins is **broken** (such as ping), but we do intend to add it. Eventually.
The programs `daemon` or `screen` are recommended for the launcher to run optimally.
**Windows** users: Windows compatibility with the launcher and some plugins is **broken** (such as ping), but we do intend to add it.³
## Example CloudBots
You can find a number of example bots in [#CloudBot](irc://irc.esper.net/cloudbot "Connect via IRC to #CloudBot on irc.esper.net").
The developers of CloudBot run two CloudBots on [EsperNet](http://esper.net).
They can both be found in [#CloudBot](irc://irc.esper.net/cloudbot "Connect via IRC to #CloudBot on irc.esper.net").
**mau5bot** is the semi-stable bot, and runs on the latest stable development version of CloudBot. (mau5bot is running on **Ubuntu Server** *12.04* with **Python** *2.7.3*)
**neerbot** is an unstable bot, and runs on the `HEAD` of the `develop` branch. (neerbot is running on **Debian** *Wheezy/Testing* with **Python** *2.7.2*)
## License
CloudBot is **licensed** under the **GPL v3** license. The terms are as follows.
CloudBot
CloudBot/DEV
Copyright © 2011-2013 Luke Rogers
Copyright © 2011-2012 Luke Rogers / ClouDev - <[cloudev.github.com](http://cloudev.github.com)>
CloudBot is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@@ -94,3 +129,7 @@ CloudBot is **licensed** under the **GPL v3** license. The terms are as follows.
You should have received a copy of the GNU General Public License
along with CloudBot. If not, see <http://www.gnu.org/licenses/>.
## Notes
³ eventually


@@ -1,32 +1,49 @@
#!/usr/bin/env python
__author__ = "ClouDev"
__authors__ = ["Lukeroge", "neersighted"]
__copyright__ = "Copyright 2012, ClouDev"
__credits__ = ["thenoodle", "_frozen", "rmmh"]
__license__ = "GPL v3"
__version__ = "DEV"
__maintainer__ = "ClouDev"
__email__ = "cloudev@neersighted.com"
__status__ = "Development"
import os
import Queue
import sys
import time
import re
import platform
sys.path += ['plugins', 'lib'] # add stuff to the sys.path for easy imports
sys.path += ['plugins'] # so 'import hook' works without duplication
sys.path += ['lib']
os.chdir(sys.path[0] or '.') # do stuff relative to the install directory
class Bot(object):
pass
print 'CloudBot DEV <http://git.io/cloudbotirc>'
print 'CloudBot %s (%s) <http://git.io/cloudbotirc>' % (__version__, __status__)
# print debug info
opsys = platform.platform()
python_imp = platform.python_implementation()
python_ver = platform.python_version()
architecture = ' '.join(platform.architecture())
print "Operating System: %s, Python " \
"Version: %s %s, Architecture: %s" \
"" % (opsys, python_imp, python_ver, architecture)
# create new bot object
bot = Bot()
bot.vars = {}
# record start time for the uptime command
bot.start_time = time.time()
print 'Begin Plugin Loading.'
print 'Loading plugins...'
# bootstrap the reloader
eval(compile(open(os.path.join('core', 'reload.py'), 'U').read(),
os.path.join('core', 'reload.py'), 'exec'))
os.path.join('core', 'reload.py'), 'exec'))
reload(init=True)
config()
@@ -39,17 +56,14 @@ bot.conns = {}
try:
for name, conf in bot.config['connections'].iteritems():
# strip all spaces and capitalization from the connection name
name = name.replace(" ", "_")
name = re.sub('[^A-Za-z0-9_]+', '', name)
print 'Connecting to server: %s' % conf['server']
if conf.get('ssl'):
bot.conns[name] = SSLIRC(name, conf['server'], conf['nick'], conf=conf,
port=conf.get('port', 6667), channels=conf['channels'],
ignore_certificate_errors=conf.get('ignore_cert', True))
port=conf.get('port', 6667), channels=conf['channels'],
ignore_certificate_errors=conf.get('ignore_cert', True))
else:
bot.conns[name] = IRC(name, conf['server'], conf['nick'], conf=conf,
port=conf.get('port', 6667), channels=conf['channels'])
port=conf.get('port', 6667), channels=conf['channels'])
except Exception as e:
print 'ERROR: malformed config file', e
sys.exit()

0
disabled_stuff/cloudbot.sh → cloudbot Normal file → Executable file


@@ -1,77 +0,0 @@
{
"connections": {
"hackint": {
"server": "irc.hackint.eu",
"nick": "antibot",
"user": "antibot",
"realname": "CloudBot - http://git.io/cloudbotirc",
"mode": "",
"_nickserv_password": "",
"-nickserv_user": "",
"channels": [
"#ChaosChemnitz",
"#logbot"
],
"invite_join": true,
"auto_rejoin": false,
"command_prefix": "."
}
},
"disabled_plugins": [],
"disabled_commands": [],
"acls": {},
"api_keys": {
"tvdb": "",
"wolframalpha": "",
"lastfm": "",
"rottentomatoes": "",
"soundcloud": "",
"twitter_consumer_key": "",
"twitter_consumer_secret": "",
"twitter_access_token": "",
"twitter_access_secret": "",
"wunderground": "",
"googletranslate": "",
"rdio_key": "",
"rdio_secret": ""
},
"permissions": {
"admins": {
"perms": [
"adminonly",
"addfactoid",
"delfactoid",
"ignore",
"botcontrol",
"permissions_users",
"op"
],
"users": [
"examplea!user@example.com",
"exampleb!user@example.com"
]
},
"moderators": {
"perms": [
"addfactoid",
"delfactoid",
"ignore"
],
"users": [
"stummi!~Stummi@stummi.org"
]
}
},
"plugins": {
"factoids": {
"prefix": false
},
"ignore": {
"ignored": []
}
},
"censored_strings": [
"mypass",
"mysecret"
]
}

57
core/config.py Normal file → Executable file

@@ -7,8 +7,60 @@ def save(conf):
json.dump(conf, open('config', 'w'), sort_keys=True, indent=2)
if not os.path.exists('config'):
print "Please rename 'config.default' to 'config' to set up your bot!"
print "For help, see http://git.io/cloudbotirc"
open('config', 'w').write(inspect.cleandoc(
r'''
{
"connections":
{
"EsperNet":
{
"server": "irc.esper.net",
"nick": "MyNewCloudBot",
"user": "cloudbot",
"realname": "CloudBot - http://git.io/cloudbotirc",
"nickserv_password": "",
"channels": ["#cloudbot"],
"invite_join": true,
"auto_rejoin": false,
"command_prefix": "."
}
},
"disabled_plugins": [],
"disabled_commands": [],
"acls": {},
"api_keys":
{
"geoip": "INSERT API KEY FROM ipinfodb.com HERE",
"tvdb": "INSERT API KEY FROM thetvdb.com HERE",
"bitly_user": "INSERT USERNAME FROM bitly.com HERE",
"bitly_api": "INSERT API KEY FROM bitly.com HERE",
"wolframalpha": "INSERT API KEY FROM wolframalpha.com HERE",
"lastfm": "INSERT API KEY FROM lastfm HERE",
"rottentomatoes": "INSERT API KEY FROM rottentomatoes HERE",
"mc_user": "INSERT minecraft USERNAME HERE",
"mc_pass": "INSERT minecraft PASSWORD HERE"
},
"plugins":
{
"factoids":
{
"prefix": false
},
"ignore":
{
"ignored": []
}
},
"censored_strings":
[
"mypass",
"mysecret"
],
"admins": ["myname@myhost"]
}''') + '\n')
print "Config generated!"
print "Please edit the config now!"
print "For help, see http://git.io/cloudbotircwiki"
print "Thank you for using CloudBot!"
sys.exit()
@@ -25,3 +77,4 @@ def config():
bot._config_mtime = 0
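The tail of this hunk resets `bot._config_mtime`, which points at an mtime-based reload check elsewhere in `core/config.py`: the config is re-read only when the file on disk has changed. A minimal standalone sketch of that pattern, with assumed names (`load_config_if_changed` is illustrative, not CloudBot's actual function):

```python
import json
import os


def load_config_if_changed(bot, path="config"):
    # Reload the JSON config only when the file's mtime has advanced past
    # the last recorded value; returns True when a reload happened.
    mtime = os.stat(path).st_mtime
    if mtime > getattr(bot, "_config_mtime", 0):
        with open(path) as f:
            bot.config = json.load(f)
        bot._config_mtime = mtime
        return True
    return False
```

Polling the mtime on each pass is cheap and avoids re-parsing the JSON on every loop iteration.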

2
core/db.py Normal file → Executable file

@@ -6,7 +6,7 @@ threaddbs = {}
def get_db_connection(conn, name=''):
"""returns an sqlite3 connection to a persistent database"""
"returns an sqlite3 connection to a persistent database"
if not name:
name = '{}.db'.format(conn.name)

92
core/irc.py Normal file → Executable file

@@ -17,18 +17,16 @@ def decode(txt):
def censor(text):
text = text.replace('\n', '').replace('\r', '')
replacement = '[censored]'
if 'censored_strings' in bot.config:
if bot.config['censored_strings']:
words = map(re.escape, bot.config['censored_strings'])
regex = re.compile('({})'.format("|".join(words)))
text = regex.sub(replacement, text)
words = map(re.escape, bot.config['censored_strings'])
regex = re.compile('(%s)' % "|".join(words))
text = regex.sub(replacement, text)
return text
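Both versions of `censor()` above build one alternation regex out of the configured secret strings; the diff only changes how the config lookup is guarded. The shared approach can be sketched standalone (the parameter names here are illustrative — the real bot reads the list from `bot.config['censored_strings']`):

```python
import re


def censor(text, censored_strings, replacement='[censored]'):
    # Strip newlines first so a secret cannot be smuggled across lines,
    # then mask every configured string with the placeholder.
    text = text.replace('\n', '').replace('\r', '')
    if censored_strings:
        # escape each secret so it matches literally, then OR them together
        pattern = re.compile('(%s)' % '|'.join(map(re.escape, censored_strings)))
        text = pattern.sub(replacement, text)
    return text
```

Escaping with `re.escape` matters: a password containing `.` or `*` would otherwise be treated as regex syntax.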
class crlf_tcp(object):
"""Handles tcp connections that consist of utf-8 lines ending with crlf"""
"Handles tcp connections that consist of utf-8 lines ending with crlf"
def __init__(self, host, port, timeout=300):
self.ibuffer = ""
@@ -44,16 +42,7 @@ class crlf_tcp(object):
return socket.socket(socket.AF_INET, socket.TCP_NODELAY)
def run(self):
noerror = 0
while 1:
try:
self.socket.connect((self.host, self.port))
break
except socket.gaierror as e:
time.sleep(5)
except socket.timeout as e:
time.sleep(5)
self.socket.connect((self.host, self.port))
thread.start_new_thread(self.recv_loop, ())
thread.start_new_thread(self.send_loop, ())
@@ -64,25 +53,17 @@ class crlf_tcp(object):
return socket.timeout
def handle_receive_exception(self, error, last_timestamp):
print("Receive exception: %s" % (error))
if time.time() - last_timestamp > self.timeout:
print("Receive timeout. Restart connection.")
self.iqueue.put(StopIteration)
self.socket.close()
return True
return False
def handle_send_exception(self, error):
print("Send exception: %s" % (error))
self.iqueue.put(StopIteration)
self.socket.close()
return True
def recv_loop(self):
last_timestamp = time.time()
while True:
try:
data = self.recv_from_socket(4096)
data = self.recv_from_socket(4096)
self.ibuffer += data
if data:
last_timestamp = time.time()
@@ -96,8 +77,6 @@ class crlf_tcp(object):
if self.handle_receive_exception(e, last_timestamp):
return
continue
except AttributeError:
return
while '\r\n' in self.ibuffer:
line, self.ibuffer = self.ibuffer.split('\r\n', 1)
@@ -105,31 +84,24 @@
def send_loop(self):
while True:
try:
line = self.oqueue.get().splitlines()[0][:500]
if line == StopIteration:
return
print ">>> %r" % line
self.obuffer += line.encode('utf-8', 'replace') + '\r\n'
while self.obuffer:
sent = self.socket.send(self.obuffer)
self.obuffer = self.obuffer[sent:]
line = self.oqueue.get().splitlines()[0][:500]
print ">>> %r" % line
self.obuffer += line.encode('utf-8', 'replace') + '\r\n'
while self.obuffer:
sent = self.socket.send(self.obuffer)
self.obuffer = self.obuffer[sent:]
except socket.error as e:
self.handle_send_exception(e)
return
class crlf_ssl_tcp(crlf_tcp):
"""Handles ssl tcp connetions that consist of utf-8 lines ending with crlf"""
"Handles ssl tcp connetions that consist of utf-8 lines ending with crlf"
def __init__(self, host, port, ignore_cert_errors, timeout=300):
self.ignore_cert_errors = ignore_cert_errors
crlf_tcp.__init__(self, host, port, timeout)
def create_socket(self):
return wrap_socket(crlf_tcp.create_socket(self), server_side=False,
cert_reqs=CERT_NONE if self.ignore_cert_errors else
CERT_REQUIRED)
cert_reqs=CERT_NONE if self.ignore_cert_errors else
CERT_REQUIRED)
def recv_from_socket(self, nbytes):
return self.socket.read(nbytes)
@@ -139,14 +111,10 @@ class crlf_ssl_tcp(crlf_tcp):
def handle_receive_exception(self, error, last_timestamp):
# this is terrible
#if not "timed out" in error.args[0]:
# raise
if not "timed out" in error.args[0]:
raise
return crlf_tcp.handle_receive_exception(self, error, last_timestamp)
def handle_send_exception(self, error):
return crlf_tcp.handle_send_exception(self, error)
irc_prefix_rem = re.compile(r'(.*?) (.*?) (.*)').match
irc_noprefix_rem = re.compile(r'()(.*?) (.*)').match
irc_netmask_rem = re.compile(r':?([^!@]*)!?([^@]*)@?(.*)').match
@@ -154,8 +122,7 @@ irc_param_ref = re.compile(r'(?:^|(?<= ))(:.*|[^ ]+)').findall
class IRC(object):
"""handles the IRC protocol"""
"handles the IRC protocol"
def __init__(self, name, server, nick, port=6667, channels=[], conf={}):
self.name = name
self.channels = channels
@@ -163,8 +130,6 @@ class IRC(object):
self.server = server
self.port = port
self.nick = nick
self.history = {}
self.vars = {}
self.out = Queue.Queue() # responses from the server are placed here
# format: [rawline, prefix, command, params,
@@ -182,8 +147,8 @@ class IRC(object):
self.set_pass(self.conf.get('server_password'))
self.set_nick(self.nick)
self.cmd("USER",
[conf.get('user', 'cloudbot'), "3", "*", conf.get('realname',
'CloudBot - http://git.io/cloudbot')])
[conf.get('user', 'cloudbot'), "3", "*", conf.get('realname',
'CloudBot - http://git.io/cloudbot')])
def parse_loop(self):
while True:
@@ -200,7 +165,7 @@ class IRC(object):
else:
prefix, command, params = irc_noprefix_rem(msg).groups()
nick, user, host = irc_netmask_rem(prefix).groups()
mask = nick + "!" + user + "@" + host
mask = user + "@" + host
paramlist = irc_param_ref(params)
lastparam = ""
if paramlist:
@@ -209,7 +174,7 @@ class IRC(object):
lastparam = paramlist[-1]
# put the parsed message in the response queue
self.out.put([msg, prefix, command, params, nick, user, host,
mask, paramlist, lastparam])
mask, paramlist, lastparam])
# if the server pings us, pong them back
if command == "PING":
self.cmd("PONG", paramlist)
@@ -223,7 +188,7 @@ class IRC(object):
def join(self, channel):
""" makes the bot join a channel """
self.send("JOIN {}".format(channel))
self.send("JOIN %s" % channel)
if channel not in self.channels:
self.channels.append(channel)
@@ -234,18 +199,13 @@ class IRC(object):
self.channels.remove(channel)
def msg(self, target, text):
""" makes the bot send a PRIVMSG to a target """
""" makes the bot send a message to a user """
self.cmd("PRIVMSG", [target, text])
def ctcp(self, target, ctcp_type, text):
""" makes the bot send a PRIVMSG CTCP to a target """
out = u"\x01{} {}\x01".format(ctcp_type, text)
self.cmd("PRIVMSG", [target, out])
def cmd(self, command, params=None):
if params:
params[-1] = u':' + params[-1]
self.send(u"{} {}".format(command, ' '.join(params)))
params[-1] = ':' + params[-1]
self.send(command + ' ' + ' '.join(map(censor, params)))
else:
self.send(command)

58
core/main.py Normal file → Executable file

@@ -7,40 +7,35 @@ thread.stack_size(1024 * 512)  # reduce vm size
class Input(dict):
def __init__(self, conn, raw, prefix, command, params,
nick, user, host, mask, paraml, msg):
nick, user, host, mask, paraml, msg):
chan = paraml[0].lower()
if chan == conn.nick.lower(): # is a PM
chan = nick
def message(message, target=chan):
"""sends a message to a specific or current channel/user"""
conn.msg(target, message)
def say(msg):
conn.msg(chan, msg)
def reply(message, target=chan):
"""sends a message to the current channel/user with a prefix"""
if target == nick:
conn.msg(target, message)
def pm(msg):
conn.msg(nick, msg)
def reply(msg):
if chan == nick: # PMs don't need prefixes
conn.msg(chan, msg)
else:
conn.msg(target, u"({}) {}".format(nick, message))
conn.msg(chan, '(' + nick + ') ' + msg)
def action(message, target=chan):
"""sends an action to the current channel/user or a specific channel/user"""
conn.ctcp(target, "ACTION", message)
def me(msg):
conn.msg(chan, "\x01%s %s\x01" % ("ACTION", msg))
def ctcp(message, ctcp_type, target=chan):
"""sends an ctcp to the current channel/user or a specific channel/user"""
conn.ctcp(target, ctcp_type, message)
def notice(message, target=nick):
"""sends a notice to the current channel/user or a specific channel/user"""
conn.cmd('NOTICE', [target, message])
def notice(msg):
conn.cmd('NOTICE', [nick, msg])
dict.__init__(self, conn=conn, raw=raw, prefix=prefix, command=command,
params=params, nick=nick, user=user, host=host, mask=mask,
paraml=paraml, msg=msg, server=conn.server, chan=chan,
notice=notice, message=message, reply=reply, bot=bot,
action=action, ctcp=ctcp, lastparam=paraml[-1])
params=params, nick=nick, user=user, host=host, mask=mask,
paraml=paraml, msg=msg, server=conn.server, chan=chan,
notice=notice, say=say, reply=reply, pm=pm, bot=bot,
me=me, lastparam=paraml[-1])
# make dict keys accessible as attributes
def __getattr__(self, key):
@@ -82,8 +77,7 @@ def do_sieve(sieve, bot, input, func, type, args):
class Handler(object):
"""Runs plugins in their own threads (ensures order)"""
'''Runs plugins in their own threads (ensures order)'''
def __init__(self, func):
self.func = func
self.input_queue = Queue.Queue()
@@ -109,7 +103,6 @@ class Handler(object):
run(self.func, input)
except:
import traceback
traceback.print_exc()
def stop(self):
@@ -122,10 +115,11 @@ class Handler(object):
def dispatch(input, kind, func, args, autohelp=False):
for sieve, in bot.plugs['sieve']:
input = do_sieve(sieve, bot, input, func, kind, args)
if input is None:
if input == None:
return
if not (not autohelp or not args.get('autohelp', True) or input.inp or not (func.__doc__ is not None)):
if autohelp and args.get('autohelp', True) and not input.inp \
and func.__doc__ is not None:
input.notice(input.conn.conf["command_prefix"] + func.__doc__)
return
@@ -159,9 +153,9 @@ def main(conn, out):
if inp.command == 'PRIVMSG':
# COMMANDS
if inp.chan == inp.nick: # private message, no command prefix
prefix = '^(?:[{}]?|'.format(command_prefix)
prefix = '^(?:[%s]?|' % command_prefix
else:
prefix = '^(?:[{}]|'.format(command_prefix)
prefix = '^(?:[%s]|' % command_prefix
command_re = prefix + inp.conn.nick
command_re += r'[,;:]+\s+)(\w+)(?:$|\s+)(.*)'
@@ -174,8 +168,8 @@ def main(conn, out):
if isinstance(command, list): # multiple potential matches
input = Input(conn, *out)
input.notice("Did you mean {} or {}?".format
(', '.join(command[:-1]), command[-1]))
input.notice("Did you mean %s or %s?" %
(', '.join(command[:-1]), command[-1]))
elif command in bot.commands:
input = Input(conn, *out)
input.trigger = trigger
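The trigger-matching hunks above assemble a regex from the command prefix and the bot's nick: in a private message the prefix character is optional, while in a channel the line must start with the prefix or with "botnick: ". A standalone sketch of that construction (the function name is illustrative; the real code builds the pattern inline in `main()`):

```python
import re


def build_command_re(command_prefix, bot_nick, is_pm):
    # '?' makes the prefix character optional in PMs, mirroring the
    # '^(?:[%s]?|' vs '^(?:[%s]|' branches in core/main.py.
    opt = '?' if is_pm else ''
    return re.compile(r'^(?:[%s]%s|%s[,;:]+\s+)(\w+)(?:$|\s+)(.*)'
                      % (re.escape(command_prefix), opt, re.escape(bot_nick)))
```

Group 1 captures the command word and group 2 the remaining arguments, so `.weather London` yields `('weather', 'London')`.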

19
core/reload.py Normal file → Executable file

@@ -17,8 +17,8 @@ def make_signature(f):
return f.func_code.co_filename, f.func_name, f.func_code.co_firstlineno
def format_plug(plug, kind='', lpad=0):
out = ' ' * lpad + '{}:{}:{}'.format(*make_signature(plug[0]))
def format_plug(plug, kind='', lpad=0, width=40):
out = ' ' * lpad + '%s:%s:%s' % make_signature(plug[0])
if kind == 'command':
out += ' ' * (50 - len(out)) + plug[1]['name']
@@ -49,7 +49,7 @@ def reload(init=False):
try:
eval(compile(open(filename, 'U').read(), filename, 'exec'),
globals())
globals())
except Exception:
traceback.print_exc()
if init: # stop if there's an error (syntax?) in a core
@@ -111,19 +111,20 @@ def reload(init=False):
if not init:
print '### new plugin (type: %s) loaded:' % \
type, format_plug(data)
type, format_plug(data)
if changed:
bot.commands = {}
for plug in bot.plugs['command']:
name = plug[1]['name'].lower()
if not re.match(r'^\w+$', name):
print '### ERROR: invalid command name "{}" ({})'.format(name, format_plug(plug))
print '### ERROR: invalid command name "%s" (%s)' % (name,
format_plug(plug))
continue
if name in bot.commands:
print "### ERROR: command '{}' already registered ({}, {})".format(name,
format_plug(bot.commands[name]),
format_plug(plug))
print "### ERROR: command '%s' already registered (%s, %s)" % \
(name, format_plug(bot.commands[name]),
format_plug(plug))
continue
bot.commands[name] = plug
@@ -154,7 +155,7 @@ def reload(init=False):
for kind, plugs in sorted(bot.plugs.iteritems()):
if kind == 'command':
continue
print ' {}:'.format(kind)
print ' %s:' % kind
for plug in plugs:
print format_plug(plug, kind=kind, lpad=6)
print
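The registration loop in this file (the diff only switches between `%`-formatting and `.format()` for the error messages) validates each command name and rejects duplicates. A standalone sketch of that logic; the `(func, metadata)` pair shape is an assumption based on the surrounding `bot.plugs['command']` code:

```python
import re


def register_commands(plugs):
    # Build a name -> plugin map, mirroring the loop in core/reload.py:
    # names must be pure word characters, and the first registration wins.
    commands = {}
    for plug in plugs:
        name = plug[1]['name'].lower()
        if not re.match(r'^\w+$', name):
            continue  # reject names with spaces or punctuation
        if name in commands:
            continue  # already registered; keep the first plugin
        commands[name] = plug
    return commands
```

Rebuilding the whole map on every reload keeps the bookkeeping simple: there is no need to track which plugins were removed since the last pass.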

33
disabled_plugins/antiflood.py Executable file

@@ -0,0 +1,33 @@
def yaml_load(filename):
import yaml
fileHandle = open(filename, 'r')
stuff = yaml.load(fileHandle.read())
fileHandle.close()
return stuff
def yaml_save(stuff, filename):
import yaml
fileHandle = open (filename, 'w' )
fileHandle.write (yaml.dump(stuff))
fileHandle.close()
from util import hook
@hook.event('*')
def tellinput(paraml, input=None, say=None):
# import time
# now = time.time()
# spam = yaml_load('spam')
# if spam[input.nick]:
# spam[input.nick].append(time.time())
# else:
# spam[input.nick] = [time.time()]
# for x in spam[input.nick]:
# if now - x > 5:
# spam[input.nick].pop(x)
# if len(spam[input.nick]) > 8:
# say(":O")
# say("HOW COULD YOU "+input.nick)
# say("lol!")
# yaml_save(spam,'spam')
return

0
disabled_stuff/mtg.py → disabled_plugins/mtg.py Normal file → Executable file


@@ -1,7 +1,7 @@
# BING translation plugin by Lukeroge and neersighted
from util import hook
from util import http
import re
import re
import htmlentitydefs
import mygengo


33
disabled_plugins/suggest.py Executable file

@@ -0,0 +1,33 @@
import json
import random
import re
from util import hook, http
@hook.command
def suggest(inp, inp_unstripped=''):
".suggest [#n] <phrase> -- gets a random/the nth suggested google search"
inp = inp_unstripped
m = re.match('^#(\d+) (.+)$', inp)
if m:
num, inp = m.groups()
num = int(num)
if num > 10:
return 'I can only get the first ten suggestions.'
else:
num = 0
page = http.get('http://google.com/complete/search', output='json', client='hp', q=inp)
page_json = page.split('(', 1)[1][:-1]
suggestions = json.loads(page_json)[1]
if not suggestions:
return 'No suggestions found.'
if num:
if len(suggestions) + 1 <= num:
return 'I only got %d suggestions.' % len(suggestions)
out = suggestions[num - 1]
else:
out = random.choice(suggestions)
return '#%d: %s' % (int(out[2][0]) + 1, out[0].replace('<b>', '').replace('</b>', ''))


@@ -1,72 +0,0 @@
import random
from util import hook
with open("plugins/data/larts.txt") as f:
larts = [line.strip() for line in f.readlines()
if not line.startswith("//")]
with open("plugins/data/insults.txt") as f:
insults = [line.strip() for line in f.readlines()
if not line.startswith("//")]
with open("plugins/data/flirts.txt") as f:
flirts = [line.strip() for line in f.readlines()
if not line.startswith("//")]
@hook.command
def lart(inp, action=None, nick=None, conn=None, notice=None):
"""lart <user> -- LARTs <user>."""
target = inp.strip()
if " " in target:
notice("Invalid username!")
return
# if the user is trying to make the bot slap itself, slap them
if target.lower() == conn.nick.lower() or target.lower() == "itself":
target = nick
values = {"user": target}
phrase = random.choice(larts)
# act out the message
action(phrase.format(**values))
@hook.command
def insult(inp, nick=None, action=None, conn=None, notice=None):
"""insult <user> -- Makes the bot insult <user>."""
target = inp.strip()
if " " in target:
notice("Invalid username!")
return
    if target.lower() == conn.nick.lower() or target.lower() == "itself":
target = nick
else:
target = inp
out = 'insults {}... "{}"'.format(target, random.choice(insults))
action(out)
@hook.command
def flirt(inp, action=None, conn=None, notice=None):
"""flirt <user> -- Make the bot flirt with <user>."""
target = inp.strip()
if " " in target:
notice("Invalid username!")
return
    if target.lower() == conn.nick.lower() or target.lower() == "itself":
target = 'itself'
else:
target = inp
out = 'flirts with {}... "{}"'.format(target, random.choice(flirts))
action(out)

View File

@ -1,121 +0,0 @@
# from jessi bot
import urllib2
import hashlib
import re
import unicodedata
from util import hook
# these are just parts required
# TODO: Merge them.
arglist = ['', 'y', '', '', '', '', '', '', '', '', 'wsf', '',
'', '', '', '', '', '', '', '0', 'Say', '1', 'false']
always_safe = ('ABCDEFGHIJKLMNOPQRSTUVWXYZ'
'abcdefghijklmnopqrstuvwxyz'
'0123456789' '_.-')
headers = {'X-Moz': 'prefetch', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:7.0.1)Gecko/20100101 Firefox/7.0',
'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7', 'Referer': 'http://www.cleverbot.com',
'Pragma': 'no-cache', 'Cache-Control': 'no-cache, no-cache', 'Accept-Language': 'en-us;q=0.8,en;q=0.5'}
keylist = ['stimulus', 'start', 'sessionid', 'vText8', 'vText7', 'vText6',
'vText5', 'vText4', 'vText3', 'vText2', 'icognoid',
'icognocheck', 'prevref', 'emotionaloutput', 'emotionalhistory',
'asbotname', 'ttsvoice', 'typing', 'lineref', 'fno', 'sub',
'islearning', 'cleanslate']
MsgList = list()
def quote(s, safe='/'): # quote('abc def') -> 'abc%20def'
s = s.encode('utf-8')
s = s.decode('utf-8')
print "s= " + s
print "safe= " + safe
safe += always_safe
safe_map = dict()
for i in range(256):
c = chr(i)
safe_map[c] = (c in safe) and c or ('%%%02X' % i)
    try:
        res = map(safe_map.__getitem__, s)
    except KeyError:
        print "blank"
        return ''
print "res= " + ''.join(res)
return ''.join(res)
def encode(keylist, arglist):
text = str()
for i in range(len(keylist)):
k = keylist[i]
v = quote(arglist[i])
text += '&' + k + '=' + v
text = text[1:]
return text
def Send():
data = encode(keylist, arglist)
digest_txt = data[9:29]
new_hash = hashlib.md5(digest_txt).hexdigest()
arglist[keylist.index('icognocheck')] = new_hash
data = encode(keylist, arglist)
req = urllib2.Request('http://www.cleverbot.com/webservicemin',
data, headers)
f = urllib2.urlopen(req)
reply = f.read()
return reply
def parseAnswers(text):
d = dict()
keys = ['text', 'sessionid', 'logurl', 'vText8', 'vText7', 'vText6',
'vText5', 'vText4', 'vText3', 'vText2', 'prevref', 'foo',
'emotionalhistory', 'ttsLocMP3', 'ttsLocTXT', 'ttsLocTXT3',
'ttsText', 'lineRef', 'lineURL', 'linePOST', 'lineChoices',
'lineChoicesAbbrev', 'typingData', 'divert']
values = text.split('\r')
i = 0
for key in keys:
d[key] = values[i]
i += 1
return d
def ask(inp):
arglist[keylist.index('stimulus')] = inp
if MsgList:
arglist[keylist.index('lineref')] = '!0' + str(len(
MsgList) / 2)
asw = Send()
MsgList.append(inp)
answer = parseAnswers(asw)
for k, v in answer.iteritems():
try:
arglist[keylist.index(k)] = v
except ValueError:
pass
arglist[keylist.index('emotionaloutput')] = str()
text = answer['ttsText']
MsgList.append(text)
return text
@hook.command("cb")
def cleverbot(inp, reply=None):
reply(ask(inp))
''' # TODO: add in command to control extra verbose per channel
@hook.event('PRIVMSG')
def cbevent(inp, reply=None):
reply(ask(inp))
@hook.command("cbver", permissions=['cleverbot'])
def cleverbotverbose(inp, notice=None):
if on in input
'''

View File

@ -1,37 +0,0 @@
from util import hook
import re
CORRECTION_RE = r'^(s|S)/.*/.*/?\S*$'
@hook.regex(CORRECTION_RE)
def correction(match, input=None, conn=None, message=None):
split = input.msg.split("/")
if len(split) == 4:
nick = split[3].lower()
else:
nick = None
find = split[1]
replace = split[2]
    for item in reversed(conn.history[input.chan]):
name, timestamp, msg = item
if msg.startswith("s/"):
# don't correct corrections, it gets really confusing
continue
if nick:
if nick != name.lower():
continue
if find in msg:
if "\x01ACTION" in msg:
msg = msg.replace("\x01ACTION ", "/me ").replace("\x01", "")
message(u"Correction, <{}> {}".format(name, msg.replace(find, "\x02" + replace + "\x02")))
return
else:
continue
return u"Did not find {} in any recent messages.".format(find)

View File

@ -1,60 +0,0 @@
from util import http, hook
## CONSTANTS
exchanges = {
"blockchain": {
"api_url": "https://blockchain.info/ticker",
"func": lambda data: u"Blockchain // Buy: \x0307${:,.2f}\x0f -"
u" Sell: \x0307${:,.2f}\x0f".format(data["USD"]["buy"], data["USD"]["sell"])
},
"coinbase": {
"api_url": "https://coinbase.com/api/v1/prices/spot_rate",
"func": lambda data: u"Coinbase // Current: \x0307${:,.2f}\x0f".format(float(data['amount']))
},
"bitpay": {
"api_url": "https://bitpay.com/api/rates",
"func": lambda data: u"Bitpay // Current: \x0307${:,.2f}\x0f".format(data[0]['rate'])
},
"bitstamp": {
"api_url": "https://www.bitstamp.net/api/ticker/",
"func": lambda data: u"BitStamp // Current: \x0307${:,.2f}\x0f - High: \x0307${:,.2f}\x0f -"
u" Low: \x0307${:,.2f}\x0f - Volume: {:,.2f} BTC".format(float(data['last']),
float(data['high']),
float(data['low']),
float(data['volume']))
}
}
## HOOK FUNCTIONS
@hook.command("btc", autohelp=False)
@hook.command(autohelp=False)
def bitcoin(inp):
    """bitcoin <exchange> -- Gets current exchange rate for bitcoins from several exchanges, default is Blockchain.
    Supports Blockchain, Bitpay, Coinbase and BitStamp."""
inp = inp.lower()
if inp:
if inp in exchanges:
exchange = exchanges[inp]
else:
return "Invalid Exchange"
else:
exchange = exchanges["blockchain"]
data = http.get_json(exchange["api_url"])
func = exchange["func"]
return func(data)
@hook.command("ltc", autohelp=False)
@hook.command(autohelp=False)
def litecoin(inp, message=None):
"""litecoin -- gets current exchange rate for litecoins from BTC-E"""
data = http.get_json("https://btc-e.com/api/2/ltc_usd/ticker")
ticker = data['ticker']
message("Current: \x0307${:,.2f}\x0f - High: \x0307${:,.2f}\x0f"
" - Low: \x0307${:,.2f}\x0f - Volume: {:,.2f} LTC".format(ticker['buy'], ticker['high'], ticker['low'],
ticker['vol_cur']))

View File

@ -1,39 +0,0 @@
import base64
from util import hook
def encode(key, clear):
enc = []
for i in range(len(clear)):
key_c = key[i % len(key)]
enc_c = chr((ord(clear[i]) + ord(key_c)) % 256)
enc.append(enc_c)
return base64.urlsafe_b64encode("".join(enc))
def decode(key, enc):
dec = []
enc = base64.urlsafe_b64decode(enc.encode('ascii', 'ignore'))
for i in range(len(enc)):
key_c = key[i % len(key)]
dec_c = chr((256 + ord(enc[i]) - ord(key_c)) % 256)
dec.append(dec_c)
return "".join(dec)
@hook.command
def cypher(inp):
    """cypher <pass> <string> -- Cyphers <string> with <pass>."""
passwd = inp.split(" ")[0]
inp = " ".join(inp.split(" ")[1:])
return encode(passwd, inp)
@hook.command
def decypher(inp):
    """decypher <pass> <string> -- Decyphers <string> with <pass>."""
passwd = inp.split(" ")[0]
inp = " ".join(inp.split(" ")[1:])
return decode(passwd, inp)

View File

@ -1,620 +0,0 @@
1 Stone
1:1 Granite
1:2 Polished Granite
1:3 Diorite
1:4 Polished Diorite
1:5 Andesite
1:6 Polished Andesite
2 Grass
3 Dirt
3:1 Dirt (No Grass)
3:2 Podzol
4 Cobblestone
5 Wooden Plank (Oak)
5:1 Wooden Plank (Spruce)
5:2 Wooden Plank (Birch)
5:3 Wooden Plank (Jungle)
5:4 Wooden Plank (Acacia)
5:5 Wooden Plank (Dark Oak)
6 Sapling (Oak)
6:1 Sapling (Spruce)
6:2 Sapling (Birch)
6:3 Sapling (Jungle)
6:4 Sapling (Acacia)
6:5 Sapling (Dark Oak)
7 Bedrock
8 Water
9 Water (No Spread)
10 Lava
11 Lava (No Spread)
12 Sand
12:1 Red Sand
13 Gravel
14 Gold Ore
15 Iron Ore
16 Coal Ore
17 Wood (Oak)
17:1 Wood (Spruce)
17:2 Wood (Birch)
17:3 Wood (Jungle)
17:4 Wood (Oak 4)
17:5 Wood (Oak 5)
18 Leaves (Oak)
18:1 Leaves (Spruce)
18:2 Leaves (Birch)
18:3 Leaves (Jungle)
19 Sponge
20 Glass
21 Lapis Lazuli Ore
22 Lapis Lazuli Block
23 Dispenser
24 Sandstone
24:1 Sandstone (Chiseled)
24:2 Sandstone (Smooth)
25 Note Block
26 Bed (Block)
27 Rail (Powered)
28 Rail (Detector)
29 Sticky Piston
30 Cobweb
31 Tall Grass (Dead Shrub)
31:1 Tall Grass
31:2 Tall Grass (Fern)
32 Dead Shrub
33 Piston
34 Piston (Head)
35 Wool
35:1 Orange Wool
35:2 Magenta Wool
35:3 Light Blue Wool
35:4 Yellow Wool
35:5 Lime Wool
35:6 Pink Wool
35:7 Gray Wool
35:8 Light Gray Wool
35:9 Cyan Wool
35:10 Purple Wool
35:11 Blue Wool
35:12 Brown Wool
35:13 Green Wool
35:14 Red Wool
35:15 Black Wool
36 Piston (Moving)
37 Dandelion
38 Poppy
38:1 Blue Orchid
38:2 Allium
38:4 Red Tulip
38:5 Orange Tulip
38:6 White Tulip
38:7 Pink Tulip
38:8 Oxeye Daisy
39 Brown Mushroom
40 Red Mushroom
41 Block of Gold
42 Block of Iron
43 Stone Slab (Double)
43:1 Sandstone Slab (Double)
43:2 Wooden Slab (Double)
43:3 Cobblestone Slab (Double)
43:4 Brick Slab (Double)
43:5 Stone Brick Slab (Double)
43:6 Nether Brick Slab (Double)
43:7 Quartz Slab (Double)
43:8 Smooth Stone Slab (Double)
43:9 Smooth Sandstone Slab (Double)
44 Stone Slab
44:1 Sandstone Slab
44:2 Wooden Slab
44:3 Cobblestone Slab
44:4 Brick Slab
44:5 Stone Brick Slab
44:6 Nether Brick Slab
44:7 Quartz Slab
45 Brick
46 TNT
47 Bookshelf
48 Moss Stone
49 Obsidian
50 Torch
51 Fire
52 Mob Spawner
53 Wooden Stairs (Oak)
54 Chest
55 Redstone Wire
56 Diamond Ore
57 Block of Diamond
58 Workbench
59 Wheat (Crop)
60 Farmland
61 Furnace
62 Furnace (Smelting)
63 Sign (Block)
64 Wood Door (Block)
65 Ladder
66 Rail
67 Cobblestone Stairs
68 Sign (Wall Block)
69 Lever
70 Stone Pressure Plate
71 Iron Door (Block)
72 Wooden Pressure Plate
73 Redstone Ore
74 Redstone Ore (Glowing)
75 Redstone Torch (Off)
76 Redstone Torch
77 Button (Stone)
78 Snow
79 Ice
80 Snow Block
81 Cactus
82 Clay Block
83 Sugar Cane (Block)
84 Jukebox
85 Fence
86 Pumpkin
87 Netherrack
88 Soul Sand
89 Glowstone
90 Portal
91 Jack-O-Lantern
92 Cake (Block)
93 Redstone Repeater (Block Off)
94 Redstone Repeater (Block On)
95 Stained Glass (White)
95:1 Stained Glass (Orange)
95:2 Stained Glass (Magenta)
95:3 Stained Glass (Light Blue)
95:4 Stained Glass (Yellow)
95:5 Stained Glass (Lime)
95:6 Stained Glass (Pink)
95:7 Stained Glass (Gray)
95:8 Stained Glass (Light Grey)
95:9 Stained Glass (Cyan)
95:10 Stained Glass (Purple)
95:11 Stained Glass (Blue)
95:12 Stained Glass (Brown)
95:13 Stained Glass (Green)
95:14 Stained Glass (Red)
95:15 Stained Glass (Black)
96 Trapdoor
97 Monster Egg (Stone)
97:1 Monster Egg (Cobblestone)
97:2 Monster Egg (Stone Brick)
97:3 Monster Egg (Mossy Stone Brick)
97:4 Monster Egg (Cracked Stone)
97:5 Monster Egg (Chiseled Stone)
98 Stone Bricks
98:1 Mossy Stone Bricks
98:2 Cracked Stone Bricks
98:3 Chiseled Stone Brick
99 Brown Mushroom (Block)
100 Red Mushroom (Block)
101 Iron Bars
102 Glass Pane
103 Melon (Block)
104 Pumpkin Vine
105 Melon Vine
106 Vines
107 Fence Gate
108 Brick Stairs
109 Stone Brick Stairs
110 Mycelium
111 Lily Pad
112 Nether Brick
113 Nether Brick Fence
114 Nether Brick Stairs
115 Nether Wart
116 Enchantment Table
117 Brewing Stand (Block)
118 Cauldron (Block)
119 End Portal
120 End Portal Frame
121 End Stone
122 Dragon Egg
123 Redstone Lamp
124 Redstone Lamp (On)
125 Oak-Wood Slab (Double)
125:1 Spruce-Wood Slab (Double)
125:2 Birch-Wood Slab (Double)
125:3 Jungle-Wood Slab (Double)
125:4 Acacia Wood Slab (Double)
125:5 Dark Oak Wood Slab (Double)
126 Oak-Wood Slab
126:1 Spruce-Wood Slab
126:2 Birch-Wood Slab
126:3 Jungle-Wood Slab
126:4 Acacia Wood Slab
126:5 Dark Oak Wood Slab
127 Cocoa Plant
128 Sandstone Stairs
129 Emerald Ore
130 Ender Chest
131 Tripwire Hook
132 Tripwire
133 Block of Emerald
134 Wooden Stairs (Spruce)
135 Wooden Stairs (Birch)
136 Wooden Stairs (Jungle)
137 Command Block
138 Beacon
139 Cobblestone Wall
139:1 Mossy Cobblestone Wall
140 Flower Pot (Block)
141 Carrot (Crop)
142 Potatoes (Crop)
143 Button (Wood)
144 Head Block (Skeleton)
144:1 Head Block (Wither)
144:2 Head Block (Zombie)
144:3 Head Block (Steve)
144:4 Head Block (Creeper)
145 Anvil
145:1 Anvil (Slightly Damaged)
145:2 Anvil (Very Damaged)
146 Trapped Chest
147 Weighted Pressure Plate (Light)
148 Weighted Pressure Plate (Heavy)
149 Redstone Comparator (Off)
150 Redstone Comparator (On)
151 Daylight Sensor
152 Block of Redstone
153 Nether Quartz Ore
154 Hopper
155 Quartz Block
155:1 Chiseled Quartz Block
155:2 Pillar Quartz Block
156 Quartz Stairs
157 Rail (Activator)
158 Dropper
159 Stained Clay (White)
159:1 Stained Clay (Orange)
159:2 Stained Clay (Magenta)
159:3 Stained Clay (Light Blue)
159:4 Stained Clay (Yellow)
159:5 Stained Clay (Lime)
159:6 Stained Clay (Pink)
159:7 Stained Clay (Gray)
159:8 Stained Clay (Light Gray)
159:9 Stained Clay (Cyan)
159:10 Stained Clay (Purple)
159:11 Stained Clay (Blue)
159:12 Stained Clay (Brown)
159:13 Stained Clay (Green)
159:14 Stained Clay (Red)
159:15 Stained Clay (Black)
160 Stained Glass Pane (White)
160:1 Stained Glass Pane (Orange)
160:2 Stained Glass Pane (Magenta)
160:3 Stained Glass Pane (Light Blue)
160:4 Stained Glass Pane (Yellow)
160:5 Stained Glass Pane (Lime)
160:6 Stained Glass Pane (Pink)
160:7 Stained Glass Pane (Gray)
160:8 Stained Glass Pane (Light Gray)
160:9 Stained Glass Pane (Cyan)
160:10 Stained Glass Pane (Purple)
160:11 Stained Glass Pane (Blue)
160:12 Stained Glass Pane (Brown)
160:13 Stained Glass Pane (Green)
160:14 Stained Glass Pane (Red)
160:15 Stained Glass Pane (Black)
162 Wood (Acacia Oak)
162:1 Wood (Dark Oak)
163 Wooden Stairs (Acacia)
164 Wooden Stairs (Dark Oak)
165 Slime Block
170 Hay Bale
171 Carpet (White)
171:1 Carpet (Orange)
171:2 Carpet (Magenta)
171:3 Carpet (Light Blue)
171:4 Carpet (Yellow)
171:5 Carpet (Lime)
171:6 Carpet (Pink)
171:7 Carpet (Grey)
171:8 Carpet (Light Gray)
171:9 Carpet (Cyan)
171:10 Carpet (Purple)
171:11 Carpet (Blue)
171:12 Carpet (Brown)
171:13 Carpet (Green)
171:14 Carpet (Red)
171:15 Carpet (Black)
172 Hardened Clay
173 Block of Coal
174 Packed Ice
175 Sunflower
175:1 Lilac
175:2 Double Tallgrass
175:3 Large Fern
175:4 Rose Bush
175:5 Peony
256 Iron Shovel
257 Iron Pickaxe
258 Iron Axe
259 Flint and Steel
260 Apple
261 Bow
262 Arrow
263 Coal
263:1 Charcoal
264 Diamond Gem
265 Iron Ingot
266 Gold Ingot
267 Iron Sword
268 Wooden Sword
269 Wooden Shovel
270 Wooden Pickaxe
271 Wooden Axe
272 Stone Sword
273 Stone Shovel
274 Stone Pickaxe
275 Stone Axe
276 Diamond Sword
277 Diamond Shovel
278 Diamond Pickaxe
279 Diamond Axe
280 Stick
281 Bowl
282 Mushroom Stew
283 Gold Sword
284 Gold Shovel
285 Gold Pickaxe
286 Gold Axe
287 String
288 Feather
289 Gunpowder
290 Wooden Hoe
291 Stone Hoe
292 Iron Hoe
293 Diamond Hoe
294 Gold Hoe
295 Wheat Seeds
296 Wheat
297 Bread
298 Leather Helmet
299 Leather Chestplate
300 Leather Leggings
301 Leather Boots
302 Chainmail Helmet
303 Chainmail Chestplate
304 Chainmail Leggings
305 Chainmail Boots
306 Iron Helmet
307 Iron Chestplate
308 Iron Leggings
309 Iron Boots
310 Diamond Helmet
311 Diamond Chestplate
312 Diamond Leggings
313 Diamond Boots
314 Gold Helmet
315 Gold Chestplate
316 Gold Leggings
317 Gold Boots
318 Flint
319 Raw Porkchop
320 Cooked Porkchop
321 Painting
322 Golden Apple
322:1 Enchanted Golden Apple
323 Sign
324 Wooden Door
325 Bucket
326 Bucket (Water)
327 Bucket (Lava)
328 Minecart
329 Saddle
330 Iron Door
331 Redstone Dust
332 Snowball
333 Boat
334 Leather
335 Bucket (Milk)
336 Clay Brick
337 Clay
338 Sugar Cane
339 Paper
340 Book
341 Slime Ball
342 Minecart (Storage)
343 Minecart (Powered)
344 Egg
345 Compass
346 Fishing Rod
347 Watch
348 Glowstone Dust
349 Raw Fish
349:1 Raw Salmon
349:2 Clownfish
349:3 Pufferfish
350 Cooked Fish
350:1 Cooked Salmon
350:2 Clownfish
350:3 Pufferfish
351 Ink Sack
351:1 Rose Red Dye
351:2 Cactus Green Dye
351:3 Cocoa Bean
351:4 Lapis Lazuli
351:5 Purple Dye
351:6 Cyan Dye
351:7 Light Gray Dye
351:8 Gray Dye
351:9 Pink Dye
351:10 Lime Dye
351:11 Dandelion Yellow Dye
351:12 Light Blue Dye
351:13 Magenta Dye
351:14 Orange Dye
351:15 Bone Meal
352 Bone
353 Sugar
354 Cake
355 Bed
356 Redstone Repeater
357 Cookie
358 Map
359 Shears
360 Melon (Slice)
361 Pumpkin Seeds
362 Melon Seeds
363 Raw Beef
364 Steak
365 Raw Chicken
366 Cooked Chicken
367 Rotten Flesh
368 Ender Pearl
369 Blaze Rod
370 Ghast Tear
371 Gold Nugget
372 Nether Wart Seeds
373 Water Bottle
373:16 Awkward Potion
373:32 Thick Potion
373:64 Mundane Potion
373:8193 Regeneration Potion (0:45)
373:8194 Swiftness Potion (3:00)
373:8195 Fire Resistance Potion (3:00)
373:8196 Poison Potion (0:45)
373:8197 Healing Potion
373:8198 Night Vision Potion (3:00)
373:8200 Weakness Potion (1:30)
373:8201 Strength Potion (3:00)
373:8202 Slowness Potion (1:30)
373:8204 Harming Potion
373:8205 Water Breathing Potion (3:00)
373:8206 Invisibility Potion (3:00)
373:8225 Regeneration Potion II (0:22)
373:8226 Swiftness Potion II (1:30)
373:8228 Poison Potion II (0:22)
373:8229 Healing Potion II
373:8233 Strength Potion II (1:30)
373:8236 Harming Potion II
373:8257 Regeneration Potion (2:00)
373:8258 Swiftness Potion (8:00)
373:8259 Fire Resistance Potion (8:00)
373:8260 Poison Potion (2:00)
373:8262 Night Vision Potion (8:00)
373:8264 Weakness Potion (4:00)
373:8265 Strength Potion (8:00)
373:8266 Slowness Potion (4:00)
373:8269 Water Breathing Potion (8:00)
373:8270 Invisibility Potion (8:00)
373:8289 Regeneration Potion II (1:00)
373:8290 Swiftness Potion II (4:00)
373:8292 Poison Potion II (1:00)
373:8297 Strength Potion II (4:00)
373:16385 Regeneration Splash (0:33)
373:16386 Swiftness Splash (2:15)
373:16387 Fire Resistance Splash (2:15)
373:16388 Poison Splash (0:33)
373:16389 Healing Splash
373:16390 Night Vision Splash (2:15)
373:16392 Weakness Splash (1:07)
373:16393 Strength Splash (2:15)
373:16394 Slowness Splash (1:07)
373:16396 Harming Splash
373:16397 Breathing Splash (2:15)
373:16398 Invisibility Splash (2:15)
373:16417 Regeneration Splash II (0:16)
373:16418 Swiftness Splash II (1:07)
373:16420 Poison Splash II (0:16)
373:16421 Healing Splash II
373:16425 Strength Splash II (1:07)
373:16428 Harming Splash II
373:16449 Regeneration Splash (1:30)
373:16450 Swiftness Splash (6:00)
373:16451 Fire Resistance Splash (6:00)
373:16452 Poison Splash (1:30)
373:16454 Night Vision Splash (6:00)
373:16456 Weakness Splash (3:00)
373:16457 Strength Splash (6:00)
373:16458 Slowness Splash (3:00)
373:16461 Breathing Splash (6:00)
373:16462 Invisibility Splash (6:00)
373:16481 Regeneration Splash II (0:45)
373:16482 Swiftness Splash II (3:00)
373:16484 Poison Splash II (0:45)
373:16489 Strength Splash II (3:00)
374 Glass Bottle
375 Spider Eye
376 Fermented Spider Eye
377 Blaze Powder
378 Magma Cream
379 Brewing Stand
380 Cauldron
381 Eye of Ender
382 Glistering Melon (Slice)
383:50 Spawn Egg (Creeper)
383:51 Spawn Egg (Skeleton)
383:52 Spawn Egg (Spider)
383:54 Spawn Egg (Zombie)
383:55 Spawn Egg (Slime)
383:56 Spawn Egg (Ghast)
383:57 Spawn Egg (Zombie Pigmen)
383:58 Spawn Egg (Endermen)
383:59 Spawn Egg (Cave Spider)
383:60 Spawn Egg (Silverfish)
383:61 Spawn Egg (Blaze)
383:62 Spawn Egg (Magma Cube)
383:65 Spawn Egg (Bat)
383:66 Spawn Egg (Witch)
383:90 Spawn Egg (Pig)
383:91 Spawn Egg (Sheep)
383:92 Spawn Egg (Cow)
383:93 Spawn Egg (Chicken)
383:94 Spawn Egg (Squid)
383:95 Spawn Egg (Wolf)
383:96 Spawn Egg (Mooshroom)
383:98 Spawn Egg (Ocelot)
383:100 Spawn Egg (Horse)
383:120 Spawn Egg (Villager)
384 Bottle of Enchanting
385 Fire Charge
386 Book and Quill
387 Written Book
388 Emerald
389 Item Frame
390 Flower Pot
391 Carrot
392 Potato
393 Baked Potato
394 Poisonous Potato
395 Empty Map
396 Golden Carrot
397 Head (Skeleton)
397:1 Head (Wither)
397:2 Head (Zombie)
397:3 Head (Steve)
397:4 Head (Creeper)
398 Carrot on a Stick
399 Nether Star
400 Pumpkin Pie
401 Firework Rocket
402 Firework Star
403 Enchanted Book
404 Redstone Comparator
405 Nether Brick (Item)
406 Nether Quartz
407 Minecart (TNT)
408 Minecart (Hopper)
417 Iron Horse Armor
418 Gold Horse Armor
419 Diamond Horse Armor
420 Lead
421 Name Tag
422 Minecart (Command Block)
2256 Music Disk (13)
2257 Music Disk (Cat)
2258 Music Disk (Blocks)
2259 Music Disk (Chirp)
2260 Music Disk (Far)
2261 Music Disk (Mall)
2262 Music Disk (Mellohi)
2263 Music Disk (Stal)
2264 Music Disk (Strad)
2265 Music Disk (Ward)
2266 Music Disk (11)
2267 Music Disk (Wait)

View File

@ -1,79 +0,0 @@
{
"templates": [
"rips off {user}'s {limbs} and leaves them to die.",
"grabs {user}'s head and rips it clean off their body.",
"grabs a {gun} and riddles {user}'s body with bullets.",
"gags and ties {user} then throws them off a {tall_thing}.",
"crushes {user} with a huge spiked {spiked_thing}.",
"glares at {user} until they die of boredom.",
"stabs {user} in the heart a few times with a {weapon_stab}.",
"rams a {weapon_explosive} up {user}'s ass and lets off a few rounds.",
"crushes {user}'s skull in with a {weapon_crush}.",
"unleashes the armies of Isengard on {user}.",
"gags and ties {user} then throws them off a {tall_thing} to their death.",
"reaches out and punches right through {user}'s chest.",
"slices {user}'s limbs off with a {weapon_slice}.",
"throws {user} to Cthulu and watches them get ripped to shreds.",
"feeds {user} to an owlbear who then proceeds to maul them violently.",
"turns {user} into a snail and covers then in salt.",
"snacks on {user}'s dismembered body.",
"stuffs {bomb} up {user}'s ass and waits for it to go off.",
"puts {user} into a sack, throws the sack in the river, and hurls the river into space.",
"goes bowling with {user}'s bloody disembodied head.",
"sends {user} to /dev/null!",
"feeds {user} coke and mentos till they violently explode."
],
"parts": {
"gun": [
"AK47",
"machine gun",
"automatic pistol",
"Uzi"
],
"limbs": [
"legs",
"arms",
"limbs"
],
"weapon_stab": [
"knife",
"shard of glass",
"sword blade",
"butchers knife",
"corkscrew"
],
"weapon_slice": [
"sharpened katana",
"chainsaw",
"polished axe"
],
"weapon_crush": [
"spiked mace",
"baseball bat",
"wooden club",
"massive steel ball",
"heavy iron rod"
],
"weapon_explosive": [
"rocket launcher",
"grenade launcher",
"napalm launcher"
],
"tall_thing": [
"bridge",
"tall building",
"cliff",
"mountain"
],
"spiked_thing": [
"boulder",
"rock",
"barrel of rocks"
],
"bomb": [
"a bomb",
"some TNT",
"a bunch of C4"
]
}
}

View File

@ -1,69 +0,0 @@
{
"templates":[
"{hits} {user} with a {item}.",
"{hits} {user} around a bit with a {item}.",
"{throws} a {item} at {user}.",
"{throws} a few {item}s at {user}.",
"grabs a {item} and {throws} it in {user}'s face.",
"launches a {item} in {user}'s general direction.",
"sits on {user}'s face while slamming a {item} into their crotch.",
"starts slapping {user} silly with a {item}.",
"holds {user} down and repeatedly {hits} them with a {item}.",
"prods {user} with a {item}.",
"picks up a {item} and {hits} {user} with it.",
"ties {user} to a chair and {throws} a {item} at them.",
"{hits} {user} {where} with a {item}.",
"ties {user} to a pole and whips them with a {item}."
],
"parts": {
"item":[
"cast iron skillet",
"large trout",
"baseball bat",
"wooden cane",
"nail",
"printer",
"shovel",
"pair of trousers",
"CRT monitor",
"diamond sword",
"baguette",
"physics textbook",
"toaster",
"portrait of Richard Stallman",
"television",
"mau5head",
"five ton truck",
"roll of duct tape",
"book",
"laptop",
"old television",
"sack of rocks",
"rainbow trout",
"cobblestone block",
"lava bucket",
"rubber chicken",
"spiked bat",
"gold block",
"fire extinguisher",
"heavy rock",
"chunk of dirt"
],
"throws": [
"throws",
"flings",
"chucks"
],
"hits": [
"hits",
"whacks",
"slaps",
"smacks"
],
"where": [
"in the chest",
"on the head",
"on the bum"
]
}
}

View File

@ -1,18 +0,0 @@
from util import hook, http
@hook.command
def domainr(inp):
"""domainr <domain> - Use domain.nr's API to search for a domain, and similar domains."""
try:
data = http.get_json('http://domai.nr/api/json/search?q=' + inp)
    except (http.URLError, http.HTTPError):
        return "Unable to get data for some reason. Try again later."
if data['query'] == "":
return "An error occurred: {status} - {message}".format(**data['error'])
domains = ""
for domain in data['results']:
domains += ("\x034" if domain['availability'] == "taken" else (
"\x033" if domain['availability'] == "available" else "\x031")) + domain['domain'] + "\x0f" + domain[
'path'] + ", "
return "Domains: " + domains

View File

@ -1,23 +0,0 @@
import random
from util import hook, text
color_codes = {
"<r>": "\x02\x0305",
"<g>": "\x02\x0303",
"<y>": "\x02"
}
with open("plugins/data/8ball_responses.txt") as f:
responses = [line.strip() for line in
f.readlines() if not line.startswith("//")]
@hook.command('8ball')
def eightball(inp, action=None):
"""8ball <question> -- The all knowing magic eight ball,
in electronic form. Ask and it shall be answered!"""
magic = text.multiword_replace(random.choice(responses), color_codes)
action("shakes the magic 8 ball... {}".format(magic))

View File

@ -1,105 +0,0 @@
import os
import base64
import json
import hashlib
from Crypto import Random
from Crypto.Cipher import AES
from Crypto.Protocol.KDF import PBKDF2
from util import hook
# helper functions to pad and unpad a string to a specified block size
# <http://stackoverflow.com/questions/12524994/encrypt-decrypt-using-pycrypto-aes-256>
BS = AES.block_size
pad = lambda s: s + (BS - len(s) % BS) * chr(BS - len(s) % BS)
unpad = lambda s: s[0:-ord(s[-1])]
# helper functions to encrypt and encode a string with AES and base64
encode_aes = lambda c, s: base64.b64encode(c.encrypt(pad(s)))
decode_aes = lambda c, s: unpad(c.decrypt(base64.b64decode(s)))
db_ready = False
def db_init(db):
    """check to see that our db has the encryption table."""
global db_ready
if not db_ready:
db.execute("create table if not exists encryption(encrypted, iv, "
"primary key(encrypted))")
db.commit()
db_ready = True
def get_salt(bot):
    """generates an encryption salt if none exists, then returns the salt"""
if not bot.config.get("random_salt", False):
bot.config["random_salt"] = hashlib.md5(os.urandom(16)).hexdigest()
json.dump(bot.config, open('config', 'w'), sort_keys=True, indent=2)
return bot.config["random_salt"]
@hook.command
def encrypt(inp, bot=None, db=None, notice=None):
"""encrypt <pass> <string> -- Encrypts <string> with <pass>. (<string> can only be decrypted using this bot)"""
db_init(db)
split = inp.split(" ")
# if there is only one argument, return the help message
if len(split) == 1:
notice(encrypt.__doc__)
return
# generate the key from the password and salt
password = split[0]
salt = get_salt(bot)
key = PBKDF2(password, salt)
# generate the IV and encode it to store in the database
iv = Random.new().read(AES.block_size)
iv_encoded = base64.b64encode(iv)
# create the AES cipher and encrypt/encode the text with it
text = " ".join(split[1:])
cipher = AES.new(key, AES.MODE_CBC, iv)
encoded = encode_aes(cipher, text)
# store the encoded text and IV in the DB for decoding later
db.execute("insert or replace into encryption(encrypted, iv)"
"values(?,?)", (encoded, iv_encoded))
db.commit()
return encoded
@hook.command
def decrypt(inp, bot=None, db=None, notice=None):
"""decrypt <pass> <string> -- Decrypts <string> with <pass>. (can only decrypt strings encrypted on this bot)"""
if not db_ready:
db_init(db)
split = inp.split(" ")
# if there is only one argument, return the help message
if len(split) == 1:
notice(decrypt.__doc__)
return
# generate the key from the password and salt
password = split[0]
salt = get_salt(bot)
key = PBKDF2(password, salt)
text = " ".join(split[1:])
# get the encoded IV from the database and decode it
iv_encoded = db.execute("select iv from encryption where"
" encrypted=?", (text,)).fetchone()[0]
iv = base64.b64decode(iv_encoded)
# create AES cipher, decode text, decrypt text, and unpad it
cipher = AES.new(key, AES.MODE_CBC, iv)
return decode_aes(cipher, text)

View File

@ -1,57 +0,0 @@
from urllib import quote_plus
from util import hook, http
api_url = "http://api.fishbans.com/stats/{}/"
@hook.command("bans")
@hook.command
def fishbans(inp):
"""fishbans <user> -- Gets information on <user>s minecraft bans from fishbans"""
user = inp.strip()
try:
request = http.get_json(api_url.format(quote_plus(user)))
except (http.HTTPError, http.URLError) as e:
return "Could not fetch ban data from the Fishbans API: {}".format(e)
if not request["success"]:
return "Could not fetch ban data for {}.".format(user)
user_url = "http://fishbans.com/u/{}/".format(user)
ban_count = request["stats"]["totalbans"]
return "The user \x02{}\x02 has \x02{}\x02 ban(s). See detailed info " \
"at {}".format(user, ban_count, user_url)
@hook.command
def bancount(inp):
"""bancount <user> -- Gets a count of <user>s minecraft bans from fishbans"""
user = inp.strip()
try:
request = http.get_json(api_url.format(quote_plus(user)))
except (http.HTTPError, http.URLError) as e:
return "Could not fetch ban data from the Fishbans API: {}".format(e)
if not request["success"]:
return "Could not fetch ban data for {}.".format(user)
user_url = "http://fishbans.com/u/{}/".format(user)
services = request["stats"]["service"]
out = []
for service, ban_count in services.items():
if ban_count != 0:
out.append("{}: \x02{}\x02".format(service, ban_count))
else:
pass
if not out:
return "The user \x02{}\x02 has no bans.".format(user)
else:
return "Bans for \x02{}\x02: ".format(user) + ", ".join(out) + ". More info " \
"at {}".format(user_url)

View File

@ -1,13 +0,0 @@
from subprocess import check_output, CalledProcessError
from util import hook
@hook.command
def freddycode(inp):
"""freddycode <code> - Check if the Freddy Fresh code is correct."""
try:
        return "Freddy: '%s' ist %s" % (inp,
                                        check_output(["/bin/freddycheck", inp]))
except CalledProcessError as err:
return "Freddy: Skript returned %s" % (str(err))

View File

@ -1,120 +0,0 @@
import json
import urllib2
from util import hook, http
shortcuts = {"cloudbot": "ClouDev/CloudBot"}
def truncate(msg):
    """Return the first eight words of msg, appending "..." if it was truncated."""
    words = msg.split()
    out = " ".join(words[:8])
    if len(words) > 8:
        out += "..."
    return out
@hook.command
def ghissues(inp):
"""ghissues username/repo [number] - Get specified issue summary, or open issue count """
args = inp.split(" ")
try:
if args[0] in shortcuts:
repo = shortcuts[args[0]]
else:
repo = args[0]
url = "https://api.github.com/repos/{}/issues".format(repo)
except IndexError:
return "Invalid syntax. .github issues username/repo [number]"
try:
url += "/%s" % args[1]
number = True
except IndexError:
number = False
try:
data = json.loads(http.open(url).read())
print url
if not number:
try:
data = data[0]
except IndexError:
print data
return "Repo has no open issues"
except ValueError:
        return "Invalid data returned. Check arguments (.github issues username/repo [number])"
fmt = "Issue: #%s (%s) by %s: %s | %s %s" # (number, state, user.login, title, truncate(body), gitio.gitio(data.url))
fmt1 = "Issue: #%s (%s) by %s: %s %s" # (number, state, user.login, title, gitio.gitio(data.url))
number = data["number"]
if data["state"] == "open":
state = u"\x033\x02OPEN\x02\x0f"
else:
state = u"\x034\x02CLOSED\x02\x0f by {}".format(data["closed_by"]["login"])
user = data["user"]["login"]
title = data["title"]
summary = truncate(data["body"])
gitiourl = gitio(data["html_url"])
if "Failed to get URL" in gitiourl:
        gitiourl = gitio(data["html_url"] + " " + repo.split("/")[1] + str(number))
if summary == "":
return fmt1 % (number, state, user, title, gitiourl)
else:
return fmt % (number, state, user, title, summary, gitiourl)
@hook.command
def gitio(inp):
"""gitio <url> [code] -- Shorten Github URLs with git.io. [code] is
a optional custom short code."""
split = inp.split(" ")
url = split[0]
    try:
        code = split[1]
    except IndexError:
        code = None
# if the first 8 chars of "url" are not "https://" then append
# "https://" to the url, also convert "http://" to "https://"
if url[:8] != "https://":
if url[:7] != "http://":
url = "https://" + url
else:
url = "https://" + url[7:]
url = 'url=' + str(url)
if code:
url = url + '&code=' + str(code)
req = urllib2.Request(url='http://git.io', data=url)
# try getting url, catch http error
try:
f = urllib2.urlopen(req)
except urllib2.HTTPError:
return "Failed to get URL!"
urlinfo = str(f.info())
# loop over the rows in urlinfo and pick out location and
# status (this is pretty odd code, but urllib2.Request is weird)
for row in urlinfo.split("\n"):
if row.find("Status") != -1:
status = row
if row.find("Location") != -1:
location = row
print status
if not "201" in status:
return "Failed to get URL!"
    # this won't work for some reason, so let's ignore it ^
# return location, minus the first 10 chars
return location[10:]
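The scheme-normalization step above can be isolated into a small helper for clarity; a minimal sketch of the same logic (the name `force_https` is ours):

```python
def force_https(url):
    # mirror the gitio() logic: convert http:// to https://,
    # and prefix bare URLs with https://
    if url[:8] != "https://":
        if url[:7] == "http://":
            url = "https://" + url[7:]
        else:
            url = "https://" + url
    return url
```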


@ -1,168 +0,0 @@
"""
A Google API key is required and retrieved from the bot config file.
Since December 1, 2011, the Google Translate API is a paid service only.
"""
import htmlentitydefs
import re
from util import hook, http
max_length = 100
########### from http://effbot.org/zone/re-sub.htm#unescape-html #############
def unescape(text):
def fixup(m):
text = m.group(0)
if text[:2] == "&#":
# character reference
try:
if text[:3] == "&#x":
return unichr(int(text[3:-1], 16))
else:
return unichr(int(text[2:-1]))
except ValueError:
pass
else:
# named entity
try:
text = unichr(htmlentitydefs.name2codepoint[text[1:-1]])
except KeyError:
pass
return text # leave as is
return re.sub("&#?\w+;", fixup, text)
##############################################################################
def goog_trans(api_key, text, slang, tlang):
url = 'https://www.googleapis.com/language/translate/v2'
if len(text) > max_length:
return "This command only supports input of less then 100 characters."
if slang:
parsed = http.get_json(url, key=api_key, q=text, source=slang, target=tlang, format="text")
else:
parsed = http.get_json(url, key=api_key, q=text, target=tlang, format="text")
#if not 200 <= parsed['responseStatus'] < 300:
# raise IOError('error with the translation server: %d: %s' % (
# parsed['responseStatus'], parsed['responseDetails']))
if not slang:
return unescape('(%(detectedSourceLanguage)s) %(translatedText)s' %
(parsed['data']['translations'][0]))
return unescape('%(translatedText)s' % parsed['data']['translations'][0])
def match_language(fragment):
fragment = fragment.lower()
for short, _ in lang_pairs:
if fragment in short.lower().split():
return short.split()[0]
for short, full in lang_pairs:
if fragment in full.lower():
return short.split()[0]
return None
@hook.command
def translate(inp, bot=None):
"""translate [source language [target language]] <sentence> -- translates
<sentence> from source language (default autodetect) to target
language (default English) using Google Translate"""
api_key = bot.config.get("api_keys", {}).get("googletranslate", None)
if not api_key:
return "This command requires a paid API key."
args = inp.split(u' ', 2)
try:
if len(args) >= 2:
sl = match_language(args[0])
if not sl:
return goog_trans(api_key, inp, '', 'en')
if len(args) == 2:
return goog_trans(api_key, args[1], sl, 'en')
if len(args) >= 3:
tl = match_language(args[1])
if not tl:
if sl == 'en':
return 'unable to determine desired target language'
return goog_trans(api_key, args[1] + ' ' + args[2], sl, 'en')
return goog_trans(api_key, args[2], sl, tl)
return goog_trans(api_key, inp, '', 'en')
except IOError, e:
return e
lang_pairs = [
("no", "Norwegian"),
("it", "Italian"),
("ht", "Haitian Creole"),
("af", "Afrikaans"),
("sq", "Albanian"),
("ar", "Arabic"),
("hy", "Armenian"),
("az", "Azerbaijani"),
("eu", "Basque"),
("be", "Belarusian"),
("bg", "Bulgarian"),
("ca", "Catalan"),
("zh-CN zh", "Chinese"),
("hr", "Croatian"),
("cs", "Czech"),
("da", "Danish"),
("nl", "Dutch"),
("en", "English"),
("et", "Estonian"),
("tl", "Filipino"),
("fi", "Finnish"),
("fr", "French"),
("gl", "Galician"),
("ka", "Georgian"),
("de", "German"),
("el", "Greek"),
("ht", "Haitian Creole"),
("iw", "Hebrew"),
("hi", "Hindi"),
("hu", "Hungarian"),
("is", "Icelandic"),
("id", "Indonesian"),
("ga", "Irish"),
("it", "Italian"),
("ja jp jpn", "Japanese"),
("ko", "Korean"),
("lv", "Latvian"),
("lt", "Lithuanian"),
("mk", "Macedonian"),
("ms", "Malay"),
("mt", "Maltese"),
("no", "Norwegian"),
("fa", "Persian"),
("pl", "Polish"),
("pt", "Portuguese"),
("ro", "Romanian"),
("ru", "Russian"),
("sr", "Serbian"),
("sk", "Slovak"),
("sl", "Slovenian"),
("es", "Spanish"),
("sw", "Swahili"),
("sv", "Swedish"),
("th", "Thai"),
("tr", "Turkish"),
("uk", "Ukrainian"),
("ur", "Urdu"),
("vi", "Vietnamese"),
("cy", "Welsh"),
("yi", "Yiddish")
]
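The two-pass alias lookup in `match_language` can be exercised standalone; below is the same logic over a trimmed subset of `lang_pairs` (the subset is chosen for illustration):

```python
lang_pairs = [
    ("en", "English"),
    ("ja jp jpn", "Japanese"),
    ("zh-CN zh", "Chinese"),
]

def match_language(fragment):
    fragment = fragment.lower()
    # first pass: match against the space-separated short-code aliases
    for short, _ in lang_pairs:
        if fragment in short.lower().split():
            return short.split()[0]
    # second pass: substring match against the full language name
    for short, full in lang_pairs:
        if fragment in full.lower():
            return short.split()[0]
    return None
```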


@ -1,22 +0,0 @@
from util import hook
from urllib import unquote
@hook.command(autohelp=False)
def googleurl(inp, db=None, nick=None, input=None):
"""googleurl [nickname] - Converts Google urls (google.com/url) to normal urls
where possible, in the specified nickname's last message. If nickname isn't provided,
action will be performed on user's last message"""
if not inp:
inp = nick
last_message = db.execute("select name, quote from seen_user where name"
" like ? and chan = ?", (inp.lower(), input.chan.lower())).fetchone()
if last_message:
msg = last_message[1]
out = ", ".join([(unquote(a[4:]) if a[:4] == "url=" else "") for a in msg.split("&")])\
.replace(", ,", "").strip()
return out if out else "No matches in your last message."
else:
if inp == nick:
return "You haven't said anything in this channel yet!"
else:
return "That user hasn't said anything in this channel yet!"


@ -1,89 +0,0 @@
from collections import deque
from util import hook, timesince
import time
import re
db_ready = []
def db_init(db, conn_name):
"""check to see that our db has the the seen table (connection name is for caching the result per connection)"""
global db_ready
if db_ready.count(conn_name) < 1:
db.execute("create table if not exists seen_user(name, time, quote, chan, host, "
"primary key(name, chan))")
db.commit()
db_ready.append(conn_name)
def track_seen(input, message_time, db, conn):
""" Tracks messages for the .seen command """
db_init(db, conn)
# keep private messages private
if input.chan[:1] == "#" and not re.findall('^s/.*/.*/$', input.msg.lower()):
db.execute("insert or replace into seen_user(name, time, quote, chan, host)"
"values(?,?,?,?,?)", (input.nick.lower(), message_time, input.msg,
input.chan, input.mask))
db.commit()
def track_history(input, message_time, conn):
try:
history = conn.history[input.chan]
except KeyError:
conn.history[input.chan] = deque(maxlen=100)
history = conn.history[input.chan]
data = (input.nick, message_time, input.msg)
history.append(data)
@hook.singlethread
@hook.event('PRIVMSG', ignorebots=False)
def chat_tracker(paraml, input=None, db=None, conn=None):
message_time = time.time()
track_seen(input, message_time, db, conn)
track_history(input, message_time, conn)
@hook.command(autohelp=False)
def resethistory(inp, input=None, conn=None):
"""resethistory - Resets chat history for the current channel"""
try:
conn.history[input.chan].clear()
return "Reset chat history for current channel."
except KeyError:
# wat
return "There is no history for this channel."
"""seen.py: written by sklnd in about two beers July 2009"""
@hook.command
def seen(inp, nick='', chan='', db=None, input=None, conn=None):
"""seen <nick> <channel> -- Tell when a nickname was last in active in one of this bot's channels."""
if input.conn.nick.lower() == inp.lower():
return "You need to get your eyes checked."
if inp.lower() == nick.lower():
return "Have you looked in a mirror lately?"
if not re.match("^[A-Za-z0-9_|.\-\]\[]*$", inp.lower()):
return "I can't look up that name, its impossible to use!"
db_init(db, conn.name)
last_seen = db.execute("select name, time, quote from seen_user where name"
" like ? and chan = ?", (inp, chan)).fetchone()
if last_seen:
reltime = timesince.timesince(last_seen[1])
if last_seen[0] != inp.lower(): # for glob matching
inp = last_seen[0]
if last_seen[2][0:1] == "\x01":
return '{} was last seen {} ago: * {} {}'.format(inp, reltime, inp,
last_seen[2][8:-1])
else:
return '{} was last seen {} ago saying: {}'.format(inp, reltime, last_seen[2])
else:
return "I've never seen {} talking in this channel.".format(inp)


@ -1,56 +0,0 @@
# Plugin by Infinity - <https://github.com/infinitylabs/UguuBot>
from util import hook, http, text
db_ready = False
def db_init(db):
"""check to see that our db has the horoscope table and return a connection."""
global db_ready
if not db_ready:
db.execute("create table if not exists horoscope(nick primary key, sign)")
db.commit()
db_ready = True
@hook.command(autohelp=False)
def horoscope(inp, db=None, notice=None, nick=None):
"""horoscope <sign> -- Get your horoscope."""
db_init(db)
# check if the user asked us not to save his details
dontsave = inp.endswith(" dontsave")
if dontsave:
sign = inp[:-9].strip().lower()
else:
sign = inp
db.execute("create table if not exists horoscope(nick primary key, sign)")
if not sign:
sign = db.execute("select sign from horoscope where nick=lower(?)",
(nick,)).fetchone()
if not sign:
notice("horoscope <sign> -- Get your horoscope")
return
sign = sign[0]
url = "http://my.horoscope.com/astrology/free-daily-horoscope-{}.html".format(sign)
soup = http.get_soup(url)
    title = soup.find_all('h1', {'class': 'h1b'})[1]
    horoscope_text = soup.find('div', {'class': 'fontdef1'})
    if not title or not horoscope_text:
        return "Could not get the horoscope for {}.".format(inp)
    result = u"\x02%s\x02 %s" % (title, horoscope_text)
    result = text.strip_html(result)
if inp and not dontsave:
db.execute("insert or replace into horoscope(nick, sign) values (?,?)",
(nick.lower(), sign))
db.commit()
return result


@ -1,30 +0,0 @@
from urllib import urlencode
import re
from util import hook, http, timeformat
hulu_re = (r'(.*://)(www\.hulu\.com|hulu\.com)(.*)', re.I)
@hook.regex(*hulu_re)
def hulu_url(match):
data = http.get_json("http://www.hulu.com/api/oembed.json?url=http://www.hulu.com" + match.group(3))
showname = data['title'].split("(")[-1].split(")")[0]
title = data['title'].split(" (")[0]
return "{}: {} - {}".format(showname, title, timeformat.format_time(int(data['duration'])))
@hook.command('hulu')
def hulu_search(inp):
"""hulu <search> - Search Hulu"""
result = http.get_soup(
"http://m.hulu.com/search?dp_identifier=hulu&{}&items_per_page=1&page=1".format(urlencode({'query': inp})))
data = result.find('results').find('videos').find('video')
showname = data.find('show').find('name').text
title = data.find('title').text
duration = timeformat.format_time(int(float(data.find('duration').text)))
description = data.find('description').text
rating = data.find('content-rating').text
return "{}: {} - {} - {} ({}) {}".format(showname, title, description, duration, rating,
"http://www.hulu.com/watch/" + str(data.find('id').text))


@ -1,59 +0,0 @@
# IMDb lookup plugin by Ghetto Wizard (2011) and blha303 (2013)
import re
from util import hook, http, text
id_re = re.compile(r"tt\d+")
imdb_re = (r'(.*:)//(imdb\.com|www\.imdb\.com)(:[0-9]+)?(.*)', re.I)
@hook.command
def imdb(inp):
"""imdb <movie> -- Gets information about <movie> from IMDb."""
strip = inp.strip()
if id_re.match(strip):
content = http.get_json("http://www.omdbapi.com/", i=strip)
else:
content = http.get_json("http://www.omdbapi.com/", t=strip)
if content.get('Error', None) == 'Movie not found!':
return 'Movie not found!'
elif content['Response'] == 'True':
content['URL'] = 'http://www.imdb.com/title/{}'.format(content['imdbID'])
out = '\x02%(Title)s\x02 (%(Year)s) (%(Genre)s): %(Plot)s'
if content['Runtime'] != 'N/A':
out += ' \x02%(Runtime)s\x02.'
if content['imdbRating'] != 'N/A' and content['imdbVotes'] != 'N/A':
out += ' \x02%(imdbRating)s/10\x02 with \x02%(imdbVotes)s\x02' \
' votes.'
out += ' %(URL)s'
return out % content
else:
return 'Unknown error.'
@hook.regex(*imdb_re)
def imdb_url(match):
imdb_id = match.group(4).split('/')[-1]
if imdb_id == "":
imdb_id = match.group(4).split('/')[-2]
content = http.get_json("http://www.omdbapi.com/", i=imdb_id)
if content.get('Error', None) == 'Movie not found!':
return 'Movie not found!'
elif content['Response'] == 'True':
content['URL'] = 'http://www.imdb.com/title/%(imdbID)s' % content
content['Plot'] = text.truncate_str(content['Plot'], 50)
out = '\x02%(Title)s\x02 (%(Year)s) (%(Genre)s): %(Plot)s'
if content['Runtime'] != 'N/A':
out += ' \x02%(Runtime)s\x02.'
if content['imdbRating'] != 'N/A' and content['imdbVotes'] != 'N/A':
out += ' \x02%(imdbRating)s/10\x02 with \x02%(imdbVotes)s\x02' \
' votes.'
return out % content
else:
return 'Unknown error.'


@ -1,82 +0,0 @@
import re
import random
from util import hook, http, web
base_url = "http://reddit.com/r/{}/.json"
imgur_re = re.compile(r'http://(?:i\.)?imgur\.com/(a/)?(\w+\b(?!/))\.?\w?')
album_api = "https://api.imgur.com/3/album/{}/images.json"
def is_valid(data):
if data["domain"] in ["i.imgur.com", "imgur.com"]:
return True
else:
return False
@hook.command(autohelp=False)
def imgur(inp):
"""imgur [subreddit] -- Gets the first page of imgur images from [subreddit] and returns a link to them.
If [subreddit] is undefined, return any imgur images"""
if inp:
# see if the input ends with "nsfw"
show_nsfw = inp.endswith(" nsfw")
# remove "nsfw" from the input string after checking for it
if show_nsfw:
inp = inp[:-5].strip().lower()
url = base_url.format(inp.strip())
else:
url = "http://www.reddit.com/domain/imgur.com/.json"
show_nsfw = False
try:
data = http.get_json(url, user_agent=http.ua_chrome)
except Exception as e:
return "Error: " + str(e)
data = data["data"]["children"]
random.shuffle(data)
# filter list to only have imgur links
filtered_posts = [i["data"] for i in data if is_valid(i["data"])]
if not filtered_posts:
return "No images found."
items = []
headers = {
"Authorization": "Client-ID b5d127e6941b07a"
}
# loop over the list of posts
for post in filtered_posts:
if post["over_18"] and not show_nsfw:
continue
match = imgur_re.search(post["url"])
if match.group(1) == 'a/':
# post is an album
url = album_api.format(match.group(2))
images = http.get_json(url, headers=headers)["data"]
# loop over the images in the album and add to the list
for image in images:
items.append(image["id"])
elif match.group(2) is not None:
# post is an image
items.append(match.group(2))
if not items:
return "No images found (use .imgur <subreddit> nsfw to show explicit content)"
if show_nsfw:
return "{} \x02NSFW\x02".format(web.isgd("http://imgur.com/" + ','.join(items)))
else:
return web.isgd("http://imgur.com/" + ','.join(items))
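The `imgur_re` pattern distinguishes albums from direct images through its first capture group; a quick illustration with made-up URLs:

```python
import re

imgur_re = re.compile(r'http://(?:i\.)?imgur\.com/(a/)?(\w+\b(?!/))\.?\w?')

# album links carry 'a/' in group 1; direct image links do not
album = imgur_re.search("http://imgur.com/a/abc123")
image = imgur_re.search("http://i.imgur.com/xyz987.jpg")
```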


@ -1,28 +0,0 @@
import urlparse
from util import hook, http, urlnorm
@hook.command
def isup(inp):
"""isup -- uses isup.me to see if a site is up or not"""
# slightly overcomplicated, esoteric URL parsing
scheme, auth, path, query, fragment = urlparse.urlsplit(inp.strip())
domain = auth.encode('utf-8') or path.encode('utf-8')
url = urlnorm.normalize(domain, assume_scheme="http")
try:
soup = http.get_soup('http://isup.me/' + domain)
    except (http.HTTPError, http.URLError):
return "Could not get status."
content = soup.find('div').text.strip()
if "not just you" in content:
return "It's not just you. {} looks \x02\x034down\x02\x0f from here!".format(url)
elif "is up" in content:
return "It's just you. {} is \x02\x033up\x02\x0f.".format(url)
else:
return "Huh? That doesn't look like a site on the interweb."


@ -1,15 +0,0 @@
import re
from util import hook, http
@hook.command(autohelp=False)
def kernel(inp, reply=None):
contents = http.get("https://www.kernel.org/finger_banner")
contents = re.sub(r'The latest(\s*)', '', contents)
contents = re.sub(r'version of the Linux kernel is:(\s*)', '- ', contents)
lines = contents.split("\n")
message = "Linux kernel versions: "
message += ", ".join(line for line in lines[:-1])
reply(message)


@ -1,33 +0,0 @@
import json
from util import hook, textgen
def get_generator(_json, variables):
data = json.loads(_json)
return textgen.TextGenerator(data["templates"],
data["parts"], variables=variables)
@hook.command
def kill(inp, action=None, nick=None, conn=None, notice=None):
"""kill <user> -- Makes the bot kill <user>."""
target = inp.strip()
if " " in target:
notice("Invalid username!")
return
# if the user is trying to make the bot kill itself, kill them
if target.lower() == conn.nick.lower() or target.lower() == "itself":
target = nick
variables = {
"user": target
}
with open("plugins/data/kills.json") as f:
generator = get_generator(f.read(), variables)
# act out the message
action(generator.generate_string())


@ -1,43 +0,0 @@
from util import hook, http, web
url = "http://search.azlyrics.com/search.php?q="
@hook.command
def lyrics(inp):
"""lyrics <search> - Search AZLyrics.com for song lyrics"""
if "pastelyrics" in inp:
dopaste = True
inp = inp.replace("pastelyrics", "").strip()
else:
dopaste = False
soup = http.get_soup(url + inp.replace(" ", "+"))
if "Try to compose less restrictive search query" in soup.find('div', {'id': 'inn'}).text:
return "No results. Check spelling."
div = None
for i in soup.findAll('div', {'class': 'sen'}):
if "/lyrics/" in i.find('a')['href']:
div = i
break
if div:
title = div.find('a').text
link = div.find('a')['href']
if dopaste:
newsoup = http.get_soup(link)
try:
lyrics = newsoup.find('div', {'style': 'margin-left:10px;margin-right:10px;'}).text.strip()
pasteurl = " " + web.haste(lyrics)
except Exception as e:
pasteurl = " (\x02Unable to paste lyrics\x02 [{}])".format(str(e))
else:
pasteurl = ""
artist = div.find('b').text.title()
lyricsum = div.find('div').text
if "\r\n" in lyricsum.strip():
lyricsum = " / ".join(lyricsum.strip().split("\r\n")[0:4]) # truncate, format
else:
lyricsum = " / ".join(lyricsum.strip().split("\n")[0:4]) # truncate, format
return "\x02{}\x02 by \x02{}\x02 {}{} - {}".format(title, artist, web.try_isgd(link), pasteurl,
lyricsum[:-3])
else:
return "No song results. " + url + inp.replace(" ", "+")


@ -1,154 +0,0 @@
import time
import random
from util import hook, http, web, text
## CONSTANTS
base_url = "http://api.bukget.org/3/"
search_url = base_url + "search/plugin_name/like/{}"
random_url = base_url + "plugins/bukkit/?start={}&size=1"
details_url = base_url + "plugins/bukkit/{}"
categories = http.get_json("http://api.bukget.org/3/categories")
count_total = sum([cat["count"] for cat in categories])
count_categories = {cat["name"].lower(): int(cat["count"]) for cat in categories} # dict comps!
class BukgetError(Exception):
def __init__(self, code, text):
self.code = code
self.text = text
def __str__(self):
return self.text
## DATA FUNCTIONS
def plugin_search(term):
""" searches for a plugin with the bukget API and returns the slug """
term = term.lower().strip()
search_term = http.quote_plus(term)
try:
results = http.get_json(search_url.format(search_term))
except (http.HTTPError, http.URLError) as e:
raise BukgetError(500, "Error Fetching Search Page: {}".format(e))
if not results:
raise BukgetError(404, "No Results Found")
for result in results:
if result["slug"] == term:
return result["slug"]
return results[0]["slug"]
def plugin_random():
""" gets a random plugin from the bukget API and returns the slug """
results = None
while not results:
plugin_number = random.randint(1, count_total)
print "trying {}".format(plugin_number)
try:
results = http.get_json(random_url.format(plugin_number))
except (http.HTTPError, http.URLError) as e:
raise BukgetError(500, "Error Fetching Search Page: {}".format(e))
return results[0]["slug"]
def plugin_details(slug):
""" takes a plugin slug and returns details from the bukget API """
slug = slug.lower().strip()
try:
details = http.get_json(details_url.format(slug))
except (http.HTTPError, http.URLError) as e:
raise BukgetError(500, "Error Fetching Details: {}".format(e))
return details
## OTHER FUNCTIONS
def format_output(data):
""" takes plugin data and returns two strings representing information about that plugin """
name = data["plugin_name"]
description = text.truncate_str(data['description'], 30)
url = data['website']
authors = data['authors'][0]
authors = authors[0] + u"\u200b" + authors[1:]
stage = data['stage']
current_version = data['versions'][0]
last_update = time.strftime('%d %B %Y %H:%M',
time.gmtime(current_version['date']))
version_number = data['versions'][0]['version']
bukkit_versions = ", ".join(current_version['game_versions'])
link = web.try_isgd(current_version['link'])
if description:
line_a = u"\x02{}\x02, by \x02{}\x02 - {} - ({}) \x02{}".format(name, authors, description, stage, url)
else:
line_a = u"\x02{}\x02, by \x02{}\x02 ({}) \x02{}".format(name, authors, stage, url)
line_b = u"Last release: \x02v{}\x02 for \x02{}\x02 at {} \x02{}\x02".format(version_number, bukkit_versions,
last_update, link)
return line_a, line_b
## HOOK FUNCTIONS
@hook.command('plugin')
@hook.command
def bukget(inp, reply=None, message=None):
"""bukget <slug/name> - Look up a plugin on dev.bukkit.org"""
# get the plugin slug using search
try:
slug = plugin_search(inp)
except BukgetError as e:
return e
# get the plugin info using the slug
try:
data = plugin_details(slug)
except BukgetError as e:
return e
# format the final message and send it to IRC
line_a, line_b = format_output(data)
reply(line_a)
message(line_b)
@hook.command(autohelp=False)
def randomplugin(inp, reply=None, message=None):
"""randomplugin - Gets a random plugin from dev.bukkit.org"""
# get a random plugin slug
try:
slug = plugin_random()
except BukgetError as e:
return e
# get the plugin info using the slug
try:
data = plugin_details(slug)
except BukgetError as e:
return e
# format the final message and send it to IRC
line_a, line_b = format_output(data)
reply(line_a)
message(line_b)


@ -1,232 +0,0 @@
import socket
import struct
import json
import traceback
from util import hook
try:
import DNS
has_dns = True
except ImportError:
has_dns = False
mc_colors = [(u'\xa7f', u'\x0300'), (u'\xa70', u'\x0301'), (u'\xa71', u'\x0302'), (u'\xa72', u'\x0303'),
(u'\xa7c', u'\x0304'), (u'\xa74', u'\x0305'), (u'\xa75', u'\x0306'), (u'\xa76', u'\x0307'),
(u'\xa7e', u'\x0308'), (u'\xa7a', u'\x0309'), (u'\xa73', u'\x0310'), (u'\xa7b', u'\x0311'),
(u'\xa71', u'\x0312'), (u'\xa7d', u'\x0313'), (u'\xa78', u'\x0314'), (u'\xa77', u'\x0315'),
(u'\xa7l', u'\x02'), (u'\xa79', u'\x0310'), (u'\xa7o', u'\t'), (u'\xa7m', u'\x13'),
(u'\xa7r', u'\x0f'), (u'\xa7n', u'\x15')]
## EXCEPTIONS
class PingError(Exception):
def __init__(self, text):
self.text = text
def __str__(self):
return self.text
class ParseError(Exception):
def __init__(self, text):
self.text = text
def __str__(self):
return self.text
## MISC
def unpack_varint(s):
d = 0
i = 0
while True:
b = ord(s.recv(1))
d |= (b & 0x7F) << 7 * i
i += 1
if not b & 0x80:
return d
pack_data = lambda d: struct.pack('>b', len(d)) + d
pack_port = lambda i: struct.pack('>H', i)
## DATA FUNCTIONS
def mcping_modern(host, port):
""" pings a server using the modern (1.7+) protocol and returns data """
try:
# connect to the server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.connect((host, port))
except socket.gaierror:
raise PingError("Invalid hostname")
except socket.timeout:
raise PingError("Request timed out")
# send handshake + status request
s.send(pack_data("\x00\x00" + pack_data(host.encode('utf8')) + pack_port(port) + "\x01"))
s.send(pack_data("\x00"))
# read response
unpack_varint(s) # Packet length
unpack_varint(s) # Packet ID
l = unpack_varint(s) # String length
if not l > 1:
raise PingError("Invalid response")
d = ""
while len(d) < l:
d += s.recv(1024)
# Close our socket
s.close()
except socket.error:
raise PingError("Socket Error")
# Load json and return
data = json.loads(d.decode('utf8'))
try:
version = data["version"]["name"]
try:
desc = u" ".join(data["description"]["text"].split())
except TypeError:
desc = u" ".join(data["description"].split())
max_players = data["players"]["max"]
online = data["players"]["online"]
except Exception as e:
# TODO: except Exception is bad
        traceback.print_exc()
raise PingError("Unknown Error: {}".format(e))
output = {
"motd": format_colors(desc),
"motd_raw": desc,
"version": version,
"players": online,
"players_max": max_players
}
return output
def mcping_legacy(host, port):
""" pings a server using the legacy (1.6 and older) protocol and returns data """
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
sock.connect((host, port))
sock.send('\xfe\x01')
response = sock.recv(1)
except socket.gaierror:
raise PingError("Invalid hostname")
except socket.timeout:
raise PingError("Request timed out")
if response[0] != '\xff':
raise PingError("Invalid response")
length = struct.unpack('!h', sock.recv(2))[0]
values = sock.recv(length * 2).decode('utf-16be')
data = values.split(u'\x00') # try to decode data using new format
if len(data) == 1:
# failed to decode data, server is using old format
data = values.split(u'\xa7')
output = {
"motd": format_colors(" ".join(data[0].split())),
"motd_raw": data[0],
"version": None,
"players": data[1],
"players_max": data[2]
}
else:
# decoded data, server is using new format
output = {
"motd": format_colors(" ".join(data[3].split())),
"motd_raw": data[3],
"version": data[2],
"players": data[4],
"players_max": data[5]
}
sock.close()
return output
## FORMATTING/PARSING FUNCTIONS
def check_srv(domain):
""" takes a domain and finds minecraft SRV records """
DNS.DiscoverNameServers()
srv_req = DNS.Request(qtype='srv')
srv_result = srv_req.req('_minecraft._tcp.{}'.format(domain))
for getsrv in srv_result.answers:
if getsrv['typename'] == 'SRV':
data = [getsrv['data'][2], getsrv['data'][3]]
return data
def parse_input(inp):
""" takes the input from the mcping command and returns the host and port """
inp = inp.strip().split(" ")[0]
if ":" in inp:
# the port is defined in the input string
host, port = inp.split(":", 1)
try:
port = int(port)
if port > 65535 or port < 0:
raise ParseError("The port '{}' is invalid.".format(port))
except ValueError:
raise ParseError("The port '{}' is invalid.".format(port))
return host, port
if has_dns:
# the port is not in the input string, but we have PyDNS so look for a SRV record
srv_data = check_srv(inp)
if srv_data:
return str(srv_data[1]), int(srv_data[0])
# return default port
return inp, 25565
def format_colors(motd):
for original, replacement in mc_colors:
motd = motd.replace(original, replacement)
motd = motd.replace(u"\xa7k", "")
return motd
def format_output(data):
if data["version"]:
return u"{motd}\x0f - {version}\x0f - {players}/{players_max}" \
u" players.".format(**data).replace("\n", u"\x0f - ")
else:
return u"{motd}\x0f - {players}/{players_max}" \
u" players.".format(**data).replace("\n", u"\x0f - ")
@hook.command
@hook.command("mcp")
def mcping(inp):
"""mcping <server>[:port] - Ping a Minecraft server to check status."""
try:
host, port = parse_input(inp)
except ParseError as e:
return "Could not parse input ({})".format(e)
try:
data = mcping_modern(host, port)
except PingError:
try:
data = mcping_legacy(host, port)
except PingError as e:
return "Could not ping server, is it offline? ({})".format(e)
return format_output(data)
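The `unpack_varint` helper above reads Minecraft's VarInt encoding (7 data bits per byte, continuation flag in the high bit) straight off the socket; here is a sketch of both directions over an in-memory stream (the `pack_varint` encoder is ours, written to round-trip with the decoder):

```python
import io

def pack_varint(value):
    # encode a non-negative int as a VarInt: 7 bits per byte,
    # high bit set on every byte except the last
    out = b""
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out += bytes(bytearray([byte | 0x80]))
        else:
            out += bytes(bytearray([byte]))
            return out

def unpack_varint(stream):
    # same loop as the plugin's unpack_varint, but reading
    # from any file-like object instead of a socket
    d = 0
    i = 0
    while True:
        b = ord(stream.read(1))
        d |= (b & 0x7F) << (7 * i)
        i += 1
        if not b & 0x80:
            return d

# round-trip the default Minecraft port
decoded = unpack_varint(io.BytesIO(pack_varint(25565)))
```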


@ -1,44 +0,0 @@
import json
from util import hook, http
@hook.command(autohelp=False)
def mcstatus(inp):
"""mcstatus -- Checks the status of various Mojang (the creators of Minecraft) servers."""
try:
request = http.get("http://status.mojang.com/check")
except (http.URLError, http.HTTPError) as e:
return "Unable to get Minecraft server status: {}".format(e)
# lets just reformat this data to get in a nice format
data = json.loads(request.replace("}", "").replace("{", "").replace("]", "}").replace("[", "{"))
out = []
# use a loop so we don't have to update it if they add more servers
green = []
yellow = []
red = []
for server, status in data.items():
if status == "green":
green.append(server)
elif status == "yellow":
yellow.append(server)
else:
red.append(server)
if green:
out = "\x033\x02Online\x02\x0f: " + ", ".join(green)
if yellow:
out += " "
if yellow:
out += "\x02Issues\x02: " + ", ".join(yellow)
if red:
out += " "
if red:
out += "\x034\x02Offline\x02\x0f: " + ", ".join(red)
return "\x0f" + out.replace(".mojang.com", ".mj") \
.replace(".minecraft.net", ".mc")


@ -1,101 +0,0 @@
import json
from util import hook, http
NAME_URL = "https://account.minecraft.net/buy/frame/checkName/{}"
PAID_URL = "http://www.minecraft.net/haspaid.jsp"
class McuError(Exception):
pass
def get_status(name):
""" takes a name and returns status """
try:
name_encoded = http.quote_plus(name)
response = http.get(NAME_URL.format(name_encoded))
except (http.URLError, http.HTTPError) as e:
raise McuError("Could not get name status: {}".format(e))
if "OK" in response:
return "free"
elif "TAKEN" in response:
return "taken"
elif "invalid characters" in response:
return "invalid"
def get_profile(name):
profile = {}
# form the profile request
request = {
"name": name,
"agent": "minecraft"
}
# submit the profile request
try:
headers = {"Content-Type": "application/json"}
r = http.get_json(
'https://api.mojang.com/profiles/page/1',
post_data=json.dumps(request),
headers=headers
)
except (http.URLError, http.HTTPError) as e:
raise McuError("Could not get profile status: {}".format(e))
user = r["profiles"][0]
profile["name"] = user["name"]
profile["id"] = user["id"]
profile["legacy"] = user.get("legacy", False)
try:
response = http.get(PAID_URL, user=name)
except (http.URLError, http.HTTPError) as e:
raise McuError("Could not get payment status: {}".format(e))
if "true" in response:
profile["paid"] = True
else:
profile["paid"] = False
return profile
@hook.command("haspaid")
@hook.command("mcpaid")
@hook.command
def mcuser(inp):
"""mcpaid <username> -- Gets information about the Minecraft user <account>."""
user = inp.strip()
try:
# get status of name (does it exist?)
name_status = get_status(user)
except McuError as e:
return e
if name_status == "taken":
try:
# get information about user
profile = get_profile(user)
except McuError as e:
return "Error: {}".format(e)
profile["lt"] = ", legacy" if profile["legacy"] else ""
if profile["paid"]:
return u"The account \x02{name}\x02 ({id}{lt}) exists. It is a \x02paid\x02" \
u" account.".format(**profile)
else:
return u"The account \x02{name}\x02 ({id}{lt}) exists. It \x034\x02is NOT\x02\x0f a paid" \
u" account.".format(**profile)
elif name_status == "free":
return u"The account \x02{}\x02 does not exist.".format(user)
elif name_status == "invalid":
return u"The name \x02{}\x02 contains invalid characters.".format(user)
else:
# if you see this, panic
return "Unknown Error."


@ -1,51 +0,0 @@
import re
from util import hook, http, text
api_url = "http://minecraft.gamepedia.com/api.php?action=opensearch"
mc_url = "http://minecraft.gamepedia.com/"
@hook.command
def mcwiki(inp):
"""mcwiki <phrase> -- Gets the first paragraph of
the Minecraft Wiki article on <phrase>."""
try:
j = http.get_json(api_url, search=inp)
except (http.HTTPError, http.URLError) as e:
return "Error fetching search results: {}".format(e)
except ValueError as e:
return "Error reading search results: {}".format(e)
if not j[1]:
return "No results found."
# we remove items with a '/' in the name, because
# gamepedia uses sub-pages for different languages
# for some stupid reason
    items = [item for item in j[1] if "/" not in item]
if items:
article_name = items[0].replace(' ', '_').encode('utf8')
else:
# there are no items without /, just return a / one
article_name = j[1][0].replace(' ', '_').encode('utf8')
url = mc_url + http.quote(article_name, '')
try:
page = http.get_html(url)
except (http.HTTPError, http.URLError) as e:
return "Error fetching wiki page: {}".format(e)
for p in page.xpath('//div[@class="mw-content-ltr"]/p'):
if p.text_content():
summary = " ".join(p.text_content().splitlines())
summary = re.sub("\[\d+\]", "", summary)
summary = text.truncate_str(summary, 200)
return u"{} :: {}".format(summary, url)
# this shouldn't happen
return "Unknown Error."


@ -1,34 +0,0 @@
# Plugin by Infinity - <https://github.com/infinitylabs/UguuBot>
import random
from util import hook, http
mlia_cache = []
def refresh_cache():
"""gets a page of random MLIAs and puts them into a dictionary """
url = 'http://mylifeisaverage.com/{}'.format(random.randint(1, 11000))
soup = http.get_soup(url)
for story in soup.find_all('div', {'class': 'story '}):
mlia_id = story.find('span', {'class': 'left'}).a.text
mlia_text = story.find('div', {'class': 'sc'}).text.strip()
mlia_cache.append((mlia_id, mlia_text))
# do an initial refresh of the cache
refresh_cache()
@hook.command(autohelp=False)
def mlia(inp, reply=None):
"""mlia -- Gets a random quote from MyLifeIsAverage.com."""
# grab the last item in the mlia cache and remove it
mlia_id, text = mlia_cache.pop()
# reply with the mlia we grabbed
reply('({}) {}'.format(mlia_id, text))
# refresh mlia cache if its getting empty
if len(mlia_cache) < 3:
refresh_cache()
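The pop-and-refill cache pattern in the removed mlia plugin above (serve from a pre-fetched list, top it up before it runs dry) is generic; a minimal sketch with a fake fetcher standing in for the HTTP scrape:

```python
cache = []

def refresh_cache(fetch):
    # fetch() stands in for the HTTP scrape done by refresh_cache() above
    cache.extend(fetch())

def serve(fetch):
    if not cache:
        refresh_cache(fetch)
    item = cache.pop()      # grab the last cached item and remove it
    if len(cache) < 3:      # refill before the cache runs dry
        refresh_cache(fetch)
    return item

def fake_fetch():
    return [("id%d" % i, "story %d" % i) for i in range(5)]

print(serve(fake_fetch))
# -> ('id4', 'story 4')
```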


@ -1,60 +0,0 @@
import json
import os
from util import hook, text, textgen
GEN_DIR = "./plugins/data/name_files/"
def get_generator(_json):
data = json.loads(_json)
return textgen.TextGenerator(data["templates"],
data["parts"], default_templates=data["default_templates"])
@hook.command(autohelp=False)
def namegen(inp, notice=None):
"""namegen [generator] -- Generates some names using the chosen generator.
'namegen list' will display a list of all generators."""
# clean up the input
inp = inp.strip().lower()
# get a list of available name generators
files = os.listdir(GEN_DIR)
all_modules = []
for i in files:
if os.path.splitext(i)[1] == ".json":
all_modules.append(os.path.splitext(i)[0])
all_modules.sort()
# command to return a list of all available generators
if inp == "list":
message = "Available generators: "
message += text.get_text_list(all_modules, 'and')
notice(message)
return
if inp:
selected_module = inp.split()[0]
else:
# make some generic fantasy names
selected_module = "fantasy"
# check if the selected module is valid
if selected_module not in all_modules:
return "Invalid name generator :("
# load the name generator
with open(os.path.join(GEN_DIR, "{}.json".format(selected_module))) as f:
try:
generator = get_generator(f.read())
except ValueError as error:
return "Unable to read name file: {}".format(error)
# time to generate some names
name_list = generator.generate_strings(10)
# and finally return the final message :D
return "Some names to ponder: {}.".format(text.get_text_list(name_list, 'and'))
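`util.textgen.TextGenerator` is not shown in this diff; a minimal stand-in that picks a template and fills each `{slot}` from the parts table illustrates the JSON shape the namegen plugin above consumes (the class internals here are assumed, not CloudBot's actual implementation):

```python
import random

class TextGenerator(object):
    """Minimal stand-in for util.textgen.TextGenerator (internals assumed)."""
    def __init__(self, templates, parts):
        self.templates = templates
        self.parts = parts

    def generate_string(self):
        # choose a random template, then a random value for each part slot
        template = random.choice(self.templates)
        fills = {k: random.choice(v) for k, v in self.parts.items()}
        return template.format(**fills)

    def generate_strings(self, amount):
        return [self.generate_string() for _ in range(amount)]

gen = TextGenerator(["{first} {last}"],
                    {"first": ["Alda", "Brin"], "last": ["Oakhart", "Stormwind"]})
print(gen.generate_strings(3))
```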


@ -1,95 +0,0 @@
import json
import re
from util import hook, http, text, web
## CONSTANTS
ITEM_URL = "http://www.newegg.com/Product/Product.aspx?Item={}"
API_PRODUCT = "http://www.ows.newegg.com/Products.egg/{}/ProductDetails"
API_SEARCH = "http://www.ows.newegg.com/Search.egg/Advanced"
NEWEGG_RE = (r"(?:(?:www\.newegg\.com|newegg\.com)/Product/Product\.aspx\?Item=)([-_a-zA-Z0-9]+)", re.I)
## OTHER FUNCTIONS
def format_item(item, show_url=True):
""" takes a newegg API item object and returns a description """
title = text.truncate_str(item["Title"], 50)
# format the rating nicely if it exists
if item["ReviewSummary"]["TotalReviews"] != "[]":
rating = "Rated {}/5 ({} ratings)".format(item["ReviewSummary"]["Rating"],
item["ReviewSummary"]["TotalReviews"][1:-1])
else:
rating = "No Ratings"
if item["FinalPrice"] != item["OriginalPrice"]:
price = "{FinalPrice}, was {OriginalPrice}".format(**item)
else:
price = item["FinalPrice"]
tags = []
if item["Instock"]:
tags.append("\x02Stock Available\x02")
else:
tags.append("\x02Out Of Stock\x02")
if item["FreeShippingFlag"]:
tags.append("\x02Free Shipping\x02")
if item["IsFeaturedItem"]:
tags.append("\x02Featured\x02")
if item["IsShellShockerItem"]:
tags.append(u"\x02SHELL SHOCKER\u00AE\x02")
# join all the tags together in a comma separated string ("tag1, tag2, tag3")
tag_text = u", ".join(tags)
if show_url:
# create the item URL and shorten it
url = web.try_isgd(ITEM_URL.format(item["NeweggItemNumber"]))
return u"\x02{}\x02 ({}) - {} - {} - {}".format(title, price, rating,
tag_text, url)
else:
return u"\x02{}\x02 ({}) - {} - {}".format(title, price, rating,
tag_text)
## HOOK FUNCTIONS
@hook.regex(*NEWEGG_RE)
def newegg_url(match):
item_id = match.group(1)
item = http.get_json(API_PRODUCT.format(item_id))
return format_item(item, show_url=False)
@hook.command
def newegg(inp):
"""newegg <item name> -- Searches newegg.com for <item name>"""
# form the search request
request = {
"Keyword": inp,
"Sort": "FEATURED"
}
# submit the search request
r = http.get_json(
API_SEARCH,
post_data=json.dumps(request)
)
# get the first result
if r["ProductListItems"]:
return format_item(r["ProductListItems"][0])
else:
return "No results found."


@ -1,59 +0,0 @@
import re
from util import hook, http
newgrounds_re = (r'(.*:)//(www\.newgrounds\.com|newgrounds\.com)(:[0-9]+)?(.*)', re.I)
valid = set('0123456789')
def test(s):
return set(s) <= valid
@hook.regex(*newgrounds_re)
def newgrounds_url(match):
location = match.group(4).split("/")[-1]
if not test(location):
print "Not a valid Newgrounds portal ID. Example: http://www.newgrounds.com/portal/view/593993"
return None
soup = http.get_soup("http://www.newgrounds.com/portal/view/" + location)
title = "\x02{}\x02".format(soup.find('title').text)
# get author
try:
author_info = soup.find('ul', {'class': 'authorlinks'}).find('img')['alt']
author = " - \x02{}\x02".format(author_info)
except Exception:
author = ""
# get rating
try:
rating_info = soup.find('dd', {'class': 'star-variable'})['title'].split("Stars &ndash;")[0].strip()
rating = u" - rated \x02{}\x02/\x025.0\x02".format(rating_info)
except Exception:
rating = ""
# get amount of ratings
try:
ratings_info = soup.find('dd', {'class': 'star-variable'})['title'].split("Stars &ndash;")[1].replace("Votes",
"").strip()
numofratings = " ({})".format(ratings_info)
except Exception:
numofratings = ""
# get amount of views
try:
views_info = soup.find('dl', {'class': 'contentdata'}).findAll('dd')[1].find('strong').text
views = " - \x02{}\x02 views".format(views_info)
except Exception:
views = ""
# get upload data
try:
date = "on \x02{}\x02".format(soup.find('dl', {'class': 'sidestats'}).find('dd').text)
except Exception:
date = ""
return title + rating + numofratings + views + author + date


@ -1,29 +0,0 @@
from bs4 import BeautifulSoup
from util import hook, http, web
user_url = "http://osrc.dfm.io/{}"
@hook.command
def osrc(inp):
"""osrc <github user> -- Gets an Open Source Report Card for <github user>"""
user_nick = inp.strip()
url = user_url.format(user_nick)
try:
soup = http.get_soup(url)
except (http.HTTPError, http.URLError):
return "Couldn't find any stats for this user."
report = soup.find("div", {"id": "description"}).find("p").get_text()
# Split and join to remove all the excess whitespace, slice the
# string to remove the trailing full stop.
report = " ".join(report.split())[:-1]
short_url = web.try_isgd(url)
return "{} - {}".format(report, short_url)


@ -1,12 +0,0 @@
from util import hook, web
@hook.command(adminonly=True)
def plpaste(inp):
if "/" in inp and inp.split("/")[0] != "util":
return "Invalid input"
try:
with open("plugins/%s.py" % inp) as f:
return web.haste(f.read(), ext='py')
except IOError:
return "Plugin not found (must be in plugins folder)"


@ -1,56 +0,0 @@
# coding=utf-8
import re
import random
from util import hook
potatoes = ['AC Belmont', 'AC Blue Pride', 'AC Brador', 'AC Chaleur', 'AC Domino', 'AC Dubuc', 'AC Glacier Chip',
'AC Maple Gold', 'AC Novachip', 'AC Peregrine Red', 'AC Ptarmigan', 'AC Red Island', 'AC Saguenor',
'AC Stampede Russet', 'AC Sunbury', 'Abeille', 'Abnaki', 'Acadia', 'Acadia Russet', 'Accent',
'Adirondack Blue', 'Adirondack Red', 'Adora', 'Agria', 'All Blue', 'All Red', 'Alpha', 'Alta Russet',
'Alturas Russet', 'Amandine', 'Amisk', 'Andover', 'Anoka', 'Anson', 'Aquilon', 'Arran Consul', 'Asterix',
'Atlantic', 'Austrian Crescent', 'Avalanche', 'Banana', 'Bannock Russet', 'Batoche', 'BeRus',
'Belle De Fonteney', 'Belleisle', 'Bintje', 'Blossom', 'Blue Christie', 'Blue Mac', 'Brigus',
'Brise du Nord', 'Butte', 'Butterfinger', 'Caesar', 'CalWhite', 'CalRed', 'Caribe', 'Carlingford',
'Carlton', 'Carola', 'Cascade', 'Castile', 'Centennial Russet', 'Century Russet', 'Charlotte', 'Cherie',
'Cherokee', 'Cherry Red', 'Chieftain', 'Chipeta', 'Coastal Russet', 'Colorado Rose', 'Concurrent',
'Conestoga', 'Cowhorn', 'Crestone Russet', 'Crispin', 'Cupids', 'Daisy Gold', 'Dakota Pearl', 'Defender',
'Delikat', 'Denali', 'Desiree', 'Divina', 'Dundrod', 'Durango Red', 'Early Rose', 'Elba', 'Envol',
'Epicure', 'Eramosa', 'Estima', 'Eva', 'Fabula', 'Fambo', 'Fremont Russet', 'French Fingerling',
'Frontier Russet', 'Fundy', 'Garnet Chile', 'Gem Russet', 'GemStar Russet', 'Gemchip', 'German Butterball',
'Gigant', 'Goldrush', 'Granola', 'Green Mountain', 'Haida', 'Hertha', 'Hilite Russet', 'Huckleberry',
'Hunter', 'Huron', 'IdaRose', 'Innovator', 'Irish Cobbler', 'Island Sunshine', 'Ivory Crisp',
'Jacqueline Lee', 'Jemseg', 'Kanona', 'Katahdin', 'Kennebec', "Kerr's Pink", 'Keswick', 'Keuka Gold',
'Keystone Russet', 'King Edward VII', 'Kipfel', 'Klamath Russet', 'Krantz', 'LaRatte', 'Lady Rosetta',
'Latona', 'Lemhi Russet', 'Liberator', 'Lili', 'MaineChip', 'Marfona', 'Maris Bard', 'Maris Piper',
'Matilda', 'Mazama', 'McIntyre', 'Michigan Purple', 'Millenium Russet', 'Mirton Pearl', 'Modoc', 'Mondial',
'Monona', 'Morene', 'Morning Gold', 'Mouraska', 'Navan', 'Nicola', 'Nipigon', 'Niska', 'Nooksack',
'NorValley', 'Norchip', 'Nordonna', 'Norgold Russet', 'Norking Russet', 'Norland', 'Norwis', 'Obelix',
'Ozette', 'Peanut', 'Penta', 'Peribonka', 'Peruvian Purple', 'Pike', 'Pink Pearl', 'Prospect', 'Pungo',
'Purple Majesty', 'Purple Viking', 'Ranger Russet', 'Reba', 'Red Cloud', 'Red Gold', 'Red La Soda',
'Red Pontiac', 'Red Ruby', 'Red Thumb', 'Redsen', 'Rocket', 'Rose Finn Apple', 'Rose Gold', 'Roselys',
'Rote Erstling', 'Ruby Crescent', 'Russet Burbank', 'Russet Legend', 'Russet Norkotah', 'Russet Nugget',
'Russian Banana', 'Saginaw Gold', 'Sangre', 'Santé', 'Satina', 'Saxon', 'Sebago', 'Shepody', 'Sierra',
'Silverton Russet', 'Simcoe', 'Snowden', 'Spunta', "St. John's", 'Summit Russet', 'Sunrise', 'Superior',
'Symfonia', 'Tolaas', 'Trent', 'True Blue', 'Ulla', 'Umatilla Russet', 'Valisa', 'Van Gogh', 'Viking',
'Wallowa Russet', 'Warba', 'Western Russet', 'White Rose', 'Willamette', 'Winema', 'Yellow Finn',
'Yukon Gold']
@hook.command
def potato(inp, action=None):
"""potato <user> - Makes <user> a tasty little potato."""
inp = inp.strip()
if not re.match(r"^[A-Za-z0-9_|.\-\[\]]*$", inp.lower()):
return "I can't make a tasty potato for that user!"
potato_type = random.choice(potatoes)
size = random.choice(['small', 'little', 'mid-sized', 'medium-sized', 'large', 'gigantic'])
flavor = random.choice(['tasty', 'delectable', 'delicious', 'yummy', 'toothsome', 'scrumptious', 'luscious'])
method = random.choice(['bakes', 'fries', 'boils', 'roasts'])
side_dish = random.choice(['side salad', 'dollop of sour cream', 'piece of chicken', 'bowl of shredded bacon'])
action("{} a {} {} {} potato for {} and serves it with a small {}!".format(method, flavor, size, potato_type, inp,
side_dish))


@ -1,38 +0,0 @@
import datetime
from util import hook, http, timesince
@hook.command("scene")
@hook.command
def pre(inp):
"""pre <query> -- searches scene releases using orlydb.com"""
try:
h = http.get_html("http://orlydb.com/", q=inp)
except http.HTTPError as e:
return 'Unable to fetch results: {}'.format(e)
results = h.xpath("//div[@id='releases']/div/span[@class='release']/..")
if not results:
return "No results found."
result = results[0]
date = result.xpath("span[@class='timestamp']/text()")[0]
section = result.xpath("span[@class='section']//text()")[0]
name = result.xpath("span[@class='release']/text()")[0]
# parse date/time
date = datetime.datetime.strptime(date, "%Y-%m-%d %H:%M:%S")
date_string = date.strftime("%d %b %Y")
since = timesince.timesince(date)
size = result.xpath("span[@class='inforight']//text()")
if size:
size = ' - ' + size[0].split()[0]
else:
size = ''
return '{} - {}{} - {} ({} ago)'.format(section, name, size, date_string, since)


@ -1,9 +0,0 @@
from util import hook
from util.pyexec import eval_py
@hook.command
def python(inp):
"""python <prog> -- Executes <prog> as Python code."""
return eval_py(inp)


@ -1,18 +0,0 @@
# Plugin by https://github.com/Mu5tank05
from util import hook, web, http
@hook.command('qr')
@hook.command
def qrcode(inp):
"""qrcode [link] returns a link for a QR code."""
args = {
"cht": "qr", # chart type (QR)
"chs": "200x200", # dimensions
"chl": inp # data
}
link = http.prepare_url("http://chart.googleapis.com/chart", args)
return web.try_isgd(link)


@ -1,131 +0,0 @@
import urllib
import json
import re
import oauth2 as oauth
from util import hook
def getdata(inp, types, api_key, api_secret):
consumer = oauth.Consumer(api_key, api_secret)
client = oauth.Client(consumer)
response = client.request('http://api.rdio.com/1/', 'POST',
urllib.urlencode({'method': 'search', 'query': inp, 'types': types, 'count': '1'}))
data = json.loads(response[1])
return data
@hook.command
def rdio(inp, bot=None):
""" rdio <search term> - alternatives: .rdiot (track), .rdioar (artist), .rdioal (album) """
api_key = bot.config.get("api_keys", {}).get("rdio_key")
api_secret = bot.config.get("api_keys", {}).get("rdio_secret")
if not api_key:
return "error: no api key set"
data = getdata(inp, "Track,Album,Artist", api_key, api_secret)
try:
info = data['result']['results'][0]
except IndexError:
return "No results."
if 'name' in info:
if 'artist' in info and 'album' in info: # Track
name = info['name']
artist = info['artist']
album = info['album']
url = info['shortUrl']
return u"\x02{}\x02 by \x02{}\x02 - {} {}".format(name, artist, album, url)
elif 'artist' in info and 'album' not in info:  # Album
name = info['name']
artist = info['artist']
url = info['shortUrl']
return u"\x02{}\x02 by \x02{}\x02 - {}".format(name, artist, url)
else: # Artist
name = info['name']
url = info['shortUrl']
return u"\x02{}\x02 - {}".format(name, url)
@hook.command
def rdiot(inp, bot=None):
""" rdiot <search term> - Search for tracks on rdio """
api_key = bot.config.get("api_keys", {}).get("rdio_key")
api_secret = bot.config.get("api_keys", {}).get("rdio_secret")
if not api_key:
return "error: no api key set"
data = getdata(inp, "Track", api_key, api_secret)
try:
info = data['result']['results'][0]
except IndexError:
return "No results."
name = info['name']
artist = info['artist']
album = info['album']
url = info['shortUrl']
return u"\x02{}\x02 by \x02{}\x02 - {} - {}".format(name, artist, album, url)
@hook.command
def rdioar(inp, bot=None):
""" rdioar <search term> - Search for artists on rdio """
api_key = bot.config.get("api_keys", {}).get("rdio_key")
api_secret = bot.config.get("api_keys", {}).get("rdio_secret")
if not api_key:
return "error: no api key set"
data = getdata(inp, "Artist", api_key, api_secret)
try:
info = data['result']['results'][0]
except IndexError:
return "No results."
name = info['name']
url = info['shortUrl']
return u"\x02{}\x02 - {}".format(name, url)
@hook.command
def rdioal(inp, bot=None):
""" rdioal <search term> - Search for albums on rdio """
api_key = bot.config.get("api_keys", {}).get("rdio_key")
api_secret = bot.config.get("api_keys", {}).get("rdio_secret")
if not api_key:
return "error: no api key set"
data = getdata(inp, "Album", api_key, api_secret)
try:
info = data['result']['results'][0]
except IndexError:
return "No results."
name = info['name']
artist = info['artist']
url = info['shortUrl']
return u"\x02{}\x02 by \x02{}\x02 - {}".format(name, artist, url)
rdio_re = (r'(.*:)//(rd\.io|www\.rdio\.com|rdio\.com)(:[0-9]+)?(.*)', re.I)
@hook.regex(*rdio_re)
def rdio_url(match, bot=None):
api_key = bot.config.get("api_keys", {}).get("rdio_key")
api_secret = bot.config.get("api_keys", {}).get("rdio_secret")
if not api_key:
return None
url = match.group(1) + "//" + match.group(2) + match.group(4)
consumer = oauth.Consumer(api_key, api_secret)
client = oauth.Client(consumer)
response = client.request('http://api.rdio.com/1/', 'POST',
urllib.urlencode({'method': 'getObjectFromUrl', 'url': url}))
data = json.loads(response[1])
info = data['result']
if 'name' in info:
if 'artist' in info and 'album' in info: # Track
name = info['name']
artist = info['artist']
album = info['album']
return u"Rdio track: \x02{}\x02 by \x02{}\x02 - {}".format(name, artist, album)
elif 'artist' in info and 'album' not in info:  # Album
name = info['name']
artist = info['artist']
return u"Rdio album: \x02{}\x02 by \x02{}\x02".format(name, artist)
else: # Artist
name = info['name']
return u"Rdio artist: \x02{}\x02".format(name)


@ -1,106 +0,0 @@
import random
from util import hook, http, web
metadata_url = "http://omnidator.appspot.com/microdata/json/?url={}"
base_url = "http://www.cookstr.com"
search_url = base_url + "/searches"
random_url = search_url + "/surprise"
# set this to true to censor this plugin!
censor = True
phrases = [
u"EAT SOME FUCKING \x02{}\x02",
u"YOU WON'T NOT MAKE SOME FUCKING \x02{}\x02",
u"HOW ABOUT SOME FUCKING \x02{}?\x02",
u"WHY DON'T YOU EAT SOME FUCKING \x02{}?\x02",
u"MAKE SOME FUCKING \x02{}\x02",
u"INDUCE FOOD COMA WITH SOME FUCKING \x02{}\x02"
]
clean_key = lambda i: i.split("#")[1]
class ParseError(Exception):
pass
def get_data(url):
""" Uses the omnidator API to parse the metadata from the provided URL """
try:
omni = http.get_json(metadata_url.format(url))
except (http.HTTPError, http.URLError) as e:
raise ParseError(e)
schemas = omni["@"]
for d in schemas:
if d["a"] == "<http://schema.org/Recipe>":
data = {clean_key(key): value for (key, value) in d.iteritems()
if key.startswith("http://schema.org/Recipe")}
return data
raise ParseError("No recipe data found")
@hook.command(autohelp=False)
def recipe(inp):
"""recipe [term] - Gets a recipe for [term], or gets a random recipe if [term] is not provided"""
if inp:
# get the recipe URL by searching
try:
search = http.get_soup(search_url, query=inp.strip())
except (http.HTTPError, http.URLError) as e:
return "Could not get recipe: {}".format(e)
# find the list of results
result_list = search.find('div', {'class': 'found_results'})
if result_list:
results = result_list.find_all('div', {'class': 'recipe_result'})
else:
return "No results"
# pick a random front page result
result = random.choice(results)
# extract the URL from the result
url = base_url + result.find('div', {'class': 'image-wrapper'}).find('a')['href']
else:
# get a random recipe URL
try:
page = http.open(random_url)
except (http.HTTPError, http.URLError) as e:
return "Could not get recipe: {}".format(e)
url = page.geturl()
# use get_data() to get the recipe info from the URL
try:
data = get_data(url)
except ParseError as e:
return "Could not parse recipe: {}".format(e)
name = data["name"].strip()
return u"Try eating \x02{}!\x02 - {}".format(name, web.try_isgd(url))
@hook.command(autohelp=False)
def dinner(inp):
"""dinner - WTF IS FOR DINNER"""
try:
page = http.open(random_url)
except (http.HTTPError, http.URLError) as e:
return "Could not get recipe: {}".format(e)
url = page.geturl()
try:
data = get_data(url)
except ParseError as e:
return "Could not parse recipe: {}".format(e)
name = data["name"].strip().upper()
text = random.choice(phrases).format(name)
if censor:
text = text.replace("FUCK", "F**K")
return u"{} - {}".format(text, web.try_isgd(url))


@ -1,79 +0,0 @@
from datetime import datetime
import re
import random
from util import hook, http, text, timesince
reddit_re = (r'.*(((www\.)?reddit\.com/r|redd\.it)[^ ]+)', re.I)
base_url = "http://reddit.com/r/{}/.json"
short_url = "http://redd.it/{}"
@hook.regex(*reddit_re)
def reddit_url(match):
thread = http.get_html(match.group(0))
title = thread.xpath('//title/text()')[0]
upvotes = thread.xpath("//span[@class='upvotes']/span[@class='number']/text()")[0]
downvotes = thread.xpath("//span[@class='downvotes']/span[@class='number']/text()")[0]
author = thread.xpath("//div[@id='siteTable']//a[contains(@class,'author')]/text()")[0]
timeago = thread.xpath("//div[@id='siteTable']//p[@class='tagline']/time/text()")[0]
comments = thread.xpath("//div[@id='siteTable']//a[@class='comments']/text()")[0]
return u'\x02{}\x02 - posted by \x02{}\x02 {} ago - {} upvotes, {} downvotes - {}'.format(
title, author, timeago, upvotes, downvotes, comments)
@hook.command(autohelp=False)
def reddit(inp):
"""reddit <subreddit> [n] -- Gets a random post from <subreddit>, or gets the [n]th post in the subreddit."""
id_num = None
if inp:
# clean and split the input
parts = inp.lower().strip().split()
# find the requested post number (if any)
if len(parts) > 1:
url = base_url.format(parts[0].strip())
try:
id_num = int(parts[1]) - 1
except ValueError:
return "Invalid post number."
else:
url = base_url.format(parts[0].strip())
else:
url = "http://reddit.com/.json"
try:
data = http.get_json(url, user_agent=http.ua_chrome)
except Exception as e:
return "Error: " + str(e)
data = data["data"]["children"]
# get the requested/random post
if id_num is not None:
try:
item = data[id_num]["data"]
except IndexError:
length = len(data)
return "Invalid post number. Number must be between 1 and {}.".format(length)
else:
item = random.choice(data)["data"]
item["title"] = text.truncate_str(item["title"], 50)
item["link"] = short_url.format(item["id"])
raw_time = datetime.fromtimestamp(int(item["created_utc"]))
item["timesince"] = timesince.timesince(raw_time)
if item["over_18"]:
item["warning"] = " \x02NSFW\x02"
else:
item["warning"] = ""
return u"\x02{title} : {subreddit}\x02 - posted by \x02{author}\x02" \
" {timesince} ago - {ups} upvotes, {downs} downvotes -" \
" {link}{warning}".format(**item)


@ -1,128 +0,0 @@
from util import hook
# Default value.
# If True, all channels without a setting will have regex enabled
# If False, all channels without a setting will have regex disabled
default_enabled = True
db_ready = False
def db_init(db):
global db_ready
if not db_ready:
db.execute("CREATE TABLE IF NOT EXISTS regexchans(channel PRIMARY KEY, status)")
db.commit()
db_ready = True
def get_status(db, channel):
row = db.execute("SELECT status FROM regexchans WHERE channel = ?", [channel]).fetchone()
if row:
return row[0]
else:
return None
def set_status(db, channel, status):
db.execute("REPLACE INTO regexchans (channel, status) VALUES(?, ?)", [channel, status])
db.commit()
def delete_status(db, channel):
db.execute("DELETE FROM regexchans WHERE channel = ?", [channel])
db.commit()
def list_status(db):
rows = db.execute("SELECT * FROM regexchans").fetchall()
result = None
for values in rows:
if result:
result += u", {}: {}".format(values[0], values[1])
else:
result = u"{}: {}".format(values[0], values[1])
return result
@hook.sieve
def sieve_regex(bot, inp, func, kind, args):
db = bot.get_db_connection(inp.conn)
db_init(db)
if kind == 'regex' and inp.chan.startswith("#") and func.__name__ != 'factoid':
chanstatus = get_status(db, inp.chan)
if chanstatus != "ENABLED" and (chanstatus == "DISABLED" or not default_enabled):
print u"Denying input.raw={}, kind={}, args={} from {}".format(inp.raw, kind, args, inp.chan)
return None
print u"Allowing input.raw={}, kind={}, args={} from {}".format(inp.raw, kind, args, inp.chan)
return inp
@hook.command(permissions=["botcontrol"])
def enableregex(inp, db=None, message=None, notice=None, chan=None, nick=None):
db_init(db)
inp = inp.strip().lower()
if not inp:
channel = chan
elif inp.startswith("#"):
channel = inp
else:
channel = u"#{}".format(inp)
message(u"Enabling regex matching (youtube, etc) (issued by {})".format(nick), target=channel)
notice(u"Enabling regex matching (youtube, etc) in channel {}".format(channel))
set_status(db, channel, "ENABLED")
@hook.command(permissions=["botcontrol"])
def disableregex(inp, db=None, message=None, notice=None, chan=None, nick=None):
db_init(db)
inp = inp.strip().lower()
if not inp:
channel = chan
elif inp.startswith("#"):
channel = inp
else:
channel = u"#{}".format(inp)
message(u"Disabling regex matching (youtube, etc) (issued by {})".format(nick), target=channel)
notice(u"Disabling regex matching (youtube, etc) in channel {}".format(channel))
set_status(db, channel, "DISABLED")
@hook.command(permissions=["botcontrol"])
def resetregex(inp, db=None, message=None, notice=None, chan=None, nick=None):
db_init(db)
inp = inp.strip().lower()
if not inp:
channel = chan
elif inp.startswith("#"):
channel = inp
else:
channel = u"#{}".format(inp)
message(u"Resetting regex matching setting (youtube, etc) (issued by {})".format(nick), target=channel)
notice(u"Resetting regex matching setting (youtube, etc) in channel {}".format(channel))
delete_status(db, channel)
@hook.command(permissions=["botcontrol"])
def regexstatus(inp, db=None, chan=None):
db_init(db)
inp = inp.strip().lower()
if not inp:
channel = chan
elif inp.startswith("#"):
channel = inp
else:
channel = u"#{}".format(inp)
return u"Regex status for {}: {}".format(channel, get_status(db, channel))
@hook.command(permissions=["botcontrol"])
def listregex(inp, db=None):
db_init(db)
return list_status(db)
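The per-channel flag storage above is a small upsert pattern over SQLite: `REPLACE INTO` acts as insert-or-update because `channel` is the primary key. It can be exercised in isolation with an in-memory database:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE IF NOT EXISTS regexchans(channel PRIMARY KEY, status)")

def set_status(db, channel, status):
    # REPLACE INTO acts as an upsert because channel is the primary key
    db.execute("REPLACE INTO regexchans (channel, status) VALUES(?, ?)", [channel, status])
    db.commit()

def get_status(db, channel):
    row = db.execute("SELECT status FROM regexchans WHERE channel = ?", [channel]).fetchone()
    return row[0] if row else None

set_status(db, "#bots", "ENABLED")
set_status(db, "#bots", "DISABLED")   # second write replaces the first row
print(get_status(db, "#bots"))
# -> DISABLED
```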


@ -1,38 +0,0 @@
from util import hook, http
@hook.command('god')
@hook.command
def bible(inp):
""".bible <passage> -- gets <passage> from the Bible (ESV)"""
base_url = ('http://www.esvapi.org/v2/rest/passageQuery?key=IP&'
'output-format=plain-text&'
'include-headings=false&include-passage-horizontal-lines=false&'
'include-passage-references=false&include-short-copyright=false&'
'include-footnotes=false&line-length=0&'
'include-heading-horizontal-lines=false')
text = http.get(base_url, passage=inp)
text = ' '.join(text.split())
if len(text) > 400:
text = text[:text.rfind(' ', 0, 400)] + '...'
return text
@hook.command('allah')
@hook.command
def koran(inp): # Koran look-up plugin by Ghetto Wizard
""".koran <chapter.verse> -- gets <chapter.verse> from the Koran"""
url = 'http://quod.lib.umich.edu/cgi/k/koran/koran-idx?type=simple'
results = http.get_html(url, q1=inp).xpath('//li')
if not results:
return 'No results for ' + inp
return results[0].text_content()


@ -1,33 +0,0 @@
import json
from util import hook, textgen
def get_generator(_json, variables):
data = json.loads(_json)
return textgen.TextGenerator(data["templates"],
data["parts"], variables=variables)
@hook.command
def slap(inp, action=None, nick=None, conn=None, notice=None):
"""slap <user> -- Makes the bot slap <user>."""
target = inp.strip()
if " " in target:
notice("Invalid username!")
return
# if the user is trying to make the bot slap itself, slap them
if target.lower() == conn.nick.lower() or target.lower() == "itself":
target = nick
variables = {
"user": target
}
with open("plugins/data/slaps.json") as f:
generator = get_generator(f.read(), variables)
# act out the message
action(generator.generate_string())


@ -1,50 +0,0 @@
from urllib import urlencode
import re
from util import hook, http, web, text
sc_re = (r'(.*:)//(www\.)?(soundcloud\.com)(.*)', re.I)
api_url = "http://api.soundcloud.com"
sndsc_re = (r'(.*:)//(www\.)?(snd\.sc)(.*)', re.I)
def soundcloud(url, api_key):
data = http.get_json(api_url + '/resolve.json?' + urlencode({'url': url, 'client_id': api_key}))
if data['description']:
desc = u": {} ".format(text.truncate_str(data['description'], 50))
else:
desc = ""
if data['genre']:
genre = u"- Genre: \x02{}\x02 ".format(data['genre'])
else:
genre = ""
url = web.try_isgd(data['permalink_url'])
return u"SoundCloud track: \x02{}\x02 by \x02{}\x02 {}{}- {} plays, {} downloads, {} comments - {}".format(
data['title'], data['user']['username'], desc, genre, data['playback_count'], data['download_count'],
data['comment_count'], url)
@hook.regex(*sc_re)
def soundcloud_url(match, bot=None):
api_key = bot.config.get("api_keys", {}).get("soundcloud")
if not api_key:
print "Error: no api key set"
return None
url = match.group(1).split(' ')[-1] + "//" + (match.group(2) if match.group(2) else "") + match.group(3) + \
match.group(4).split(' ')[0]
return soundcloud(url, api_key)
@hook.regex(*sndsc_re)
def sndsc_url(match, bot=None):
api_key = bot.config.get("api_keys", {}).get("soundcloud")
if not api_key:
print "Error: no api key set"
return None
url = match.group(1).split(' ')[-1] + "//" + (match.group(2) if match.group(2) else "") + match.group(3) + \
match.group(4).split(' ')[0]
return soundcloud(http.open(url).url, api_key)


@ -1,47 +0,0 @@
from enchant.checker import SpellChecker
import enchant
from util import hook
locale = "en_US"
@hook.command
def spell(inp):
"""spell <word/sentence> -- Check spelling of a word or sentence."""
if not enchant.dict_exists(locale):
return "Could not find dictionary: {}".format(locale)
if len(inp.split(" ")) > 1:
# input is a sentence
checker = SpellChecker(locale)
checker.set_text(inp)
offset = 0
for err in checker:
# find the location of the incorrect word
start = err.wordpos + offset
finish = start + len(err.word)
# get some suggestions for it
suggestions = err.suggest()
s_string = '/'.join(suggestions[:3])
s_string = "\x02{}\x02".format(s_string)
# calculate the offset for the next word
offset = (offset + len(s_string)) - len(err.word)
# replace the word with the suggestions
inp = inp[:start] + s_string + inp[finish:]
return inp
else:
# input is a word
dictionary = enchant.Dict(locale)
is_correct = dictionary.check(inp)
suggestions = dictionary.suggest(inp)
s_string = ', '.join(suggestions[:10])
if is_correct:
return '"{}" appears to be \x02valid\x02! ' \
'(suggestions: {})'.format(inp, s_string)
else:
return '"{}" appears to be \x02invalid\x02! ' \
'(suggestions: {})'.format(inp, s_string)
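The offset bookkeeping in the sentence branch above is the subtle part: each in-place replacement changes the string length, so positions reported for later words must be shifted. A standalone sketch with a fixed corrections table standing in for enchant's checker:

```python
def apply_corrections(sentence, corrections):
    # corrections: (position, wrong_word, suggestion) tuples, left-to-right
    offset = 0
    for pos, word, suggestion in corrections:
        start = pos + offset
        finish = start + len(word)
        # shift later positions by the net length change of this replacement
        offset += len(suggestion) - len(word)
        sentence = sentence[:start] + suggestion + sentence[finish:]
    return sentence

fixes = [(0, "teh", "the"), (4, "qick", "quick")]
print(apply_corrections("teh qick fox", fixes))
# -> the quick fox
```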


@ -1,106 +0,0 @@
import re
from urllib import urlencode
from util import hook, http, web
gateway = 'http://open.spotify.com/{}/{}' # http spotify gw address
spuri = 'spotify:{}:{}'
spotify_re = (r'(spotify:(track|album|artist|user):([a-zA-Z0-9]+))', re.I)
http_re = (r'(open\.spotify\.com\/(track|album|artist|user)\/'
'([a-zA-Z0-9]+))', re.I)
def sptfy(inp, sptfy=False):
if sptfy:
shortenurl = "http://sptfy.com/index.php"
data = urlencode({'longUrl': inp, 'shortUrlDomain': 1, 'submitted': 1, "shortUrlFolder": 6, "customUrl": "",
"shortUrlPassword": "", "shortUrlExpiryDate": "", "shortUrlUses": 0, "shortUrlType": 0})
try:
soup = http.get_soup(shortenurl, post_data=data, cookies=True)
except Exception:
return inp
try:
link = soup.find('div', {'class': 'resultLink'}).text.strip()
return link
except Exception:
message = "Unable to shorten URL: %s" % \
soup.find('div', {'class': 'messagebox_text'}).find('p').text.split("<br/>")[0]
return message
else:
return web.try_isgd(inp)
@hook.command('sptrack')
@hook.command
def spotify(inp):
"""spotify <song> -- Search Spotify for <song>"""
try:
data = http.get_json("http://ws.spotify.com/search/1/track.json", q=inp.strip())
except Exception as e:
return "Could not get track information: {}".format(e)
try:
type, id = data["tracks"][0]["href"].split(":")[1:]
except IndexError:
return "Could not find track."
url = sptfy(gateway.format(type, id))
return u"\x02{}\x02 by \x02{}\x02 - {}".format(data["tracks"][0]["name"],
data["tracks"][0]["artists"][0]["name"], url)
@hook.command
def spalbum(inp):
"""spalbum <album> -- Search Spotify for <album>"""
try:
data = http.get_json("http://ws.spotify.com/search/1/album.json", q=inp.strip())
except Exception as e:
return "Could not get album information: {}".format(e)
try:
type, id = data["albums"][0]["href"].split(":")[1:]
except IndexError:
return "Could not find album."
url = sptfy(gateway.format(type, id))
return u"\x02{}\x02 by \x02{}\x02 - {}".format(data["albums"][0]["name"],
data["albums"][0]["artists"][0]["name"], url)
@hook.command
def spartist(inp):
"""spartist <artist> -- Search Spotify for <artist>"""
try:
data = http.get_json("http://ws.spotify.com/search/1/artist.json", q=inp.strip())
except Exception as e:
return "Could not get artist information: {}".format(e)
try:
type, id = data["artists"][0]["href"].split(":")[1:]
except IndexError:
return "Could not find artist."
url = sptfy(gateway.format(type, id))
return u"\x02{}\x02 - {}".format(data["artists"][0]["name"], url)
@hook.regex(*http_re)
@hook.regex(*spotify_re)
def spotify_url(match):
type = match.group(2)
spotify_id = match.group(3)
url = spuri.format(type, spotify_id)
# no error catching here, if the API is down fail silently
data = http.get_json("http://ws.spotify.com/lookup/1/.json", uri=url)
if type == "track":
name = data["track"]["name"]
artist = data["track"]["artists"][0]["name"]
album = data["track"]["album"]["name"]
return u"Spotify Track: \x02{}\x02 by \x02{}\x02 from the album \x02{}\x02 - {}".format(name, artist,
album, sptfy(
gateway.format(type, spotify_id)))
elif type == "artist":
return u"Spotify Artist: \x02{}\x02 - {}".format(data["artist"]["name"],
sptfy(gateway.format(type, spotify_id)))
elif type == "album":
return u"Spotify Album: \x02{}\x02 - \x02{}\x02 - {}".format(data["album"]["artist"],
data["album"]["name"],
sptfy(gateway.format(type, spotify_id)))


@ -1,53 +0,0 @@
from util import hook
import re
import time
from subprocess import check_output
def getstatus():
try:
return check_output("sudo /bin/chch-status", shell=True).strip("\n").decode("utf-8")
except Exception:
return "unbekannt"  # German for "unknown"
@hook.command("status", autohelp=False)
def cmd_status(inp, reply=None):
"""status - Return the door status"""
reply("Chaostreff Status: %s" % (getstatus()))
@hook.event("TOPIC")
def topic_update(info, conn=None, chan=None):
"""topic_update -- Update the topic on TOPIC command"""
status = getstatus()
topic = info[-1]
sstr = "Status: %s" % (status)
if sstr in topic:
return
if 'Status: ' in topic:
new_topic = re.sub("Status: [^ ]*", sstr, topic)
else:
new_topic = "%s | %s" % (topic.rstrip(' |'), sstr)
if new_topic != topic:
conn.send("TOPIC %s :%s" % (chan, new_topic))
@hook.event("332")
def e332_update(info, conn=None, chan=None):
"""e332_update -- run after current topic was requested"""
chan = info[1]
topic_update(info, conn=conn, chan=chan)
@hook.singlethread
@hook.event("353")
def e353_update(info, conn=None, chan=None):
"""e353_update -- runs after a channel was joined"""
chan = info[2]
if chan.lower() == "#chaoschemnitz":
conn.send("PRIVMSG Chanserv :op #chaoschemnitz")
while True:
conn.send("TOPIC %s" % (chan))
time.sleep(60)


@ -1,75 +0,0 @@
import re
from bs4 import BeautifulSoup, NavigableString, Tag
from util import hook, http, web
from util.text import truncate_str
steam_re = (r'(.*:)//(store.steampowered.com)(:[0-9]+)?(.*)', re.I)
def get_steam_info(url):
page = http.get(url)
soup = BeautifulSoup(page, 'lxml', from_encoding="utf-8")
data = {}
data["name"] = soup.find('div', {'class': 'apphub_AppName'}).text
data["desc"] = truncate_str(soup.find('meta', {'name': 'description'})['content'].strip(), 80)
# get the element details_block
details = soup.find('div', {'class': 'details_block'})
# loop over every <b></b> tag in details_block
for b in details.findAll('b'):
# get the contents of the <b></b> tag, which is our title
title = b.text.lower().replace(":", "")
if title == "languages":
# we have all we need!
break
# find the next element directly after the <b></b> tag
next_element = b.nextSibling
if next_element:
# if the element is some text
if isinstance(next_element, NavigableString):
text = next_element.string.strip()
if text:
# we found valid text, save it and continue the loop
data[title] = text
continue
else:
# the text is blank - sometimes this means there are
# useless spaces or tabs between the <b> and <a> tags.
# so we find the next <a> tag and carry on to the next
# bit of code below
next_element = next_element.find_next('a', href=True)
# if the element is an <a></a> tag
if isinstance(next_element, Tag) and next_element.name == 'a':
text = next_element.string.strip()
if text:
# we found valid text (in the <a></a> tag),
# save it and continue the loop
data[title] = text
continue
data["price"] = soup.find('div', {'class': 'game_purchase_price price'}).text.strip()
return u"\x02{name}\x02: {desc}, \x02Genre\x02: {genre}, \x02Release Date\x02: {release date}," \
u" \x02Price\x02: {price}".format(**data)
@hook.regex(*steam_re)
def steam_url(match):
return get_steam_info("http://store.steampowered.com" + match.group(4))
@hook.command
def steam(inp):
"""steam [search] - Search for specified game/trailer/DLC"""
page = http.get("http://store.steampowered.com/search/?term=" + inp)
soup = BeautifulSoup(page, 'lxml', from_encoding="utf-8")
result = soup.find('a', {'class': 'search_result_row'})
return get_steam_info(result['href']) + " - " + web.isgd(result['href'])


@ -1,120 +0,0 @@
import csv
import StringIO
from util import hook, http, text
gauge_url = "http://www.mysteamgauge.com/search?username={}"
api_url = "http://mysteamgauge.com/user/{}.csv"
steam_api_url = "http://steamcommunity.com/id/{}/?xml=1"
def refresh_data(name):
http.get(gauge_url.format(name), timeout=25, get_method='HEAD')
def get_data(name):
return http.get(api_url.format(name))
def is_number(s):
try:
float(s)
return True
except ValueError:
return False
def unicode_dictreader(utf8_data, **kwargs):
csv_reader = csv.DictReader(utf8_data, **kwargs)
for row in csv_reader:
yield dict([(key.lower(), unicode(value, 'utf-8')) for key, value in row.iteritems()])
@hook.command('sc')
@hook.command
def steamcalc(inp, reply=None):
"""steamcalc <username> [currency] - Gets value of steam account and
total hours played. Uses steamcommunity.com/id/<nickname>. """
# check if the user asked us to force reload
force_reload = inp.endswith(" forcereload")
if force_reload:
name = inp[:-12].strip().lower()
else:
name = inp.strip()
if force_reload:
try:
reply("Collecting data, this may take a while.")
refresh_data(name)
request = get_data(name)
do_refresh = False
except (http.HTTPError, http.URLError):
return "Could not get data for this user."
else:
try:
request = get_data(name)
do_refresh = True
except (http.HTTPError, http.URLError):
try:
reply("Collecting data, this may take a while.")
refresh_data(name)
request = get_data(name)
do_refresh = False
except (http.HTTPError, http.URLError):
return "Could not get data for this user."
csv_data = StringIO.StringIO(request) # we use StringIO because CSV can't read a string
reader = unicode_dictreader(csv_data)
# put the games in a list
games = []
for row in reader:
games.append(row)
data = {}
# basic information
steam_profile = http.get_xml(steam_api_url.format(name))
try:
data["name"] = steam_profile.find('steamID').text
online_state = steam_profile.find('stateMessage').text
except AttributeError:
return "Could not get data for this user."
online_state = online_state.replace("<br/>", ": ") # will make this pretty later
data["state"] = text.strip_html(online_state)
# work out the average metascore for all games
ms = [float(game["metascore"]) for game in games if is_number(game["metascore"])]
metascore = float(sum(ms)) / len(ms) if len(ms) > 0 else float('nan')
data["average_metascore"] = "{0:.1f}".format(metascore)
# work out the totals
data["games"] = len(games)
total_value = sum([float(game["value"]) for game in games if is_number(game["value"])])
data["value"] = str(int(round(total_value)))
# work out the total size
total_size = 0.0
for game in games:
if not is_number(game["size"]):
continue
if game["unit"] == "GB":
total_size += float(game["size"])
else:
total_size += float(game["size"]) / 1024
data["size"] = "{0:.1f}".format(total_size)
reply("{name} ({state}) has {games} games with a total value of ${value}"
" and a total size of {size}GB! The average metascore for these"
" games is {average_metascore}.".format(**data))
if do_refresh:
refresh_data(name)
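The size-totalling loop above converts MB entries to GB before summing. A minimal standalone sketch of the same normalization (the helper name is ours; the `size`/`unit` keys come from the CSV rows used above):

```python
def total_size_gb(games):
    """Sum game sizes in GB, treating non-GB units as MB (as in the loop above)."""
    total = 0.0
    for game in games:
        try:
            size = float(game["size"])
        except (KeyError, ValueError):
            continue  # skip games with no numeric size
        if game.get("unit") == "GB":
            total += size
        else:
            total += size / 1024  # MB -> GB
    return total
```

For example, one 1.5 GB game plus one 512 MB game totals 2.0 GB, and non-numeric sizes are skipped just as `is_number` skips them above.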


@ -1,30 +0,0 @@
from util import hook, web
@hook.command
def stock(inp):
"""stock <symbol> -- gets stock information"""
sym = inp.strip().lower()
query = "SELECT * FROM yahoo.finance.quote WHERE symbol=@symbol LIMIT 1"
quote = web.query(query, {"symbol": sym}).one()
# if the quote has no 'Change' value, the symbol doesn't match a company
if quote['Change'] is None:
return "Unknown ticker symbol: {}".format(sym)
change = float(quote['Change'])
price = float(quote['LastTradePriceOnly'])
if change < 0:
quote['color'] = "5"
else:
quote['color'] = "3"
quote['PercentChange'] = 100 * change / (price - change)
return u"\x02{Name}\x02 (\x02{symbol}\x02) - {LastTradePriceOnly} " \
"\x03{color}{Change} ({PercentChange:.2f}%)\x03 " \
"Day Range: {DaysRange} " \
"MCAP: {MarketCapitalization}".format(**quote)


@ -1,19 +0,0 @@
from util import hook, http, text
from bs4 import BeautifulSoup
@hook.command
def suggest(inp):
"""suggest <phrase> -- Gets suggested phrases for a google search"""
suggestions = http.get_json('http://suggestqueries.google.com/complete/search', client='firefox', q=inp)[1]
if not suggestions:
return 'no suggestions found'
out = u", ".join(suggestions)
# defuckify text (might not be needed now, but I'll keep it)
soup = BeautifulSoup(out)
out = soup.get_text()
return text.truncate_str(out, 200)


@ -1,121 +0,0 @@
""" tell.py: written by sklnd in July 2009
2010.01.25 - modified by Scaevolus"""
import time
import re
from util import hook, timesince
db_ready = []
def db_init(db, conn):
"""Check that our db has the tell table, create it if not."""
global db_ready
if not conn.name in db_ready:
db.execute("create table if not exists tell"
"(user_to, user_from, message, chan, time,"
"primary key(user_to, message))")
db.commit()
db_ready.append(conn.name)
def get_tells(db, user_to):
return db.execute("select user_from, message, time, chan from tell where"
" user_to=lower(?) order by time",
(user_to.lower(),)).fetchall()
@hook.singlethread
@hook.event('PRIVMSG')
def tellinput(inp, input=None, notice=None, db=None, nick=None, conn=None):
if 'showtells' in input.msg.lower():
return
db_init(db, conn)
tells = get_tells(db, nick)
if tells:
user_from, message, time, chan = tells[0]
reltime = timesince.timesince(time)
reply = "{} sent you a message {} ago from {}: {}".format(user_from, reltime, chan,
message)
if len(tells) > 1:
reply += " (+{} more, {}showtells to view)".format(len(tells) - 1, conn.conf["command_prefix"])
db.execute("delete from tell where user_to=lower(?) and message=?",
(nick, message))
db.commit()
notice(reply)
@hook.command(autohelp=False)
def showtells(inp, nick='', chan='', notice=None, db=None, conn=None):
"""showtells -- View all pending tell messages (sent in a notice)."""
db_init(db, conn)
tells = get_tells(db, nick)
if not tells:
notice("You have no pending tells.")
return
for tell in tells:
user_from, message, time, chan = tell
past = timesince.timesince(time)
notice("{} sent you a message {} ago from {}: {}".format(user_from, past, chan, message))
db.execute("delete from tell where user_to=lower(?)",
(nick,))
db.commit()
@hook.command
def tell(inp, nick='', chan='', db=None, input=None, notice=None, conn=None):
"""tell <nick> <message> -- Relay <message> to <nick> when <nick> is around."""
query = inp.split(' ', 1)
if len(query) != 2:
notice(tell.__doc__)
return
user_to = query[0].lower()
message = query[1].strip()
user_from = nick
if chan.lower() == user_from.lower():
chan = 'a pm'
if user_to == user_from.lower():
notice("Have you looked in a mirror lately?")
return
if user_to.lower() == input.conn.nick.lower():
# user is looking for us, being a smart-ass
notice("Thanks for the message, {}!".format(user_from))
return
if not re.match("^[A-Za-z0-9_|.\-\]\[]*$", user_to.lower()):
notice("I can't send a message to that user!")
return
db_init(db, conn)
if db.execute("select count() from tell where user_to=?",
(user_to,)).fetchone()[0] >= 10:
notice("That person has too many messages queued.")
return
try:
db.execute("insert into tell(user_to, user_from, message, chan,"
"time) values(?,?,?,?,?)", (user_to, user_from, message,
chan, time.time()))
db.commit()
except db.IntegrityError:
notice("Message has already been queued.")
return
notice("Your message has been sent!")


@ -1,62 +0,0 @@
import time
from util import hook, http
from util.text import capitalize_first
api_url = 'http://api.wolframalpha.com/v2/query?format=plaintext'
@hook.command("time")
def time_command(inp, bot=None):
"""time <area> -- Gets the time in <area>"""
query = "current time in {}".format(inp)
api_key = bot.config.get("api_keys", {}).get("wolframalpha", None)
if not api_key:
return "error: no wolfram alpha api key set"
request = http.get_xml(api_url, input=query, appid=api_key)
current_time = " ".join(request.xpath("//pod[@title='Result']/subpod/plaintext/text()"))
current_time = current_time.replace(" | ", ", ")
if current_time:
# nice place name for UNIX time
if inp.lower() == "unix":
place = "Unix Epoch"
else:
place = capitalize_first(" ".join(request.xpath("//pod[@"
"title='Input interpretation']/subpod/plaintext/text()"))[
16:])
return "{} - \x02{}\x02".format(current_time, place)
else:
return "Could not get the time for '{}'.".format(inp)
@hook.command(autohelp=False)
def beats(inp):
"""beats -- Gets the current time in .beats (Swatch Internet Time). """
if inp.lower() == "wut":
return "Instead of hours and minutes, the mean solar day is divided " \
"up into 1000 parts called \".beats\". Each .beat lasts 1 minute and" \
" 26.4 seconds. Times are notated as a 3-digit number out of 1000 af" \
"ter midnight. So, @248 would indicate a time 248 .beats after midni" \
"ght representing 248/1000 of a day, just over 5 hours and 57 minute" \
"s. There are no timezones."
elif inp.lower() == "guide":
return "1 day = 1000 .beats, 1 hour = 41.666 .beats, 1 min = 0.6944 .beats, 1 second = 0.01157 .beats"
t = time.gmtime()
h, m, s = t.tm_hour, t.tm_min, t.tm_sec
utc = 3600 * h + 60 * m + s
bmt = utc + 3600 # Biel Mean Time (BMT)
beat = bmt / 86.4
if beat >= 1000:
beat -= 1000
return "Swatch Internet Time: @%06.2f" % beat


@ -1,115 +0,0 @@
import re
from HTMLParser import HTMLParser
from util import hook, http
twitch_re = (r'(.*:)//(twitch.tv|www.twitch.tv)(:[0-9]+)?(.*)', re.I)
multitwitch_re = (r'(.*:)//(www.multitwitch.tv|multitwitch.tv)/(.*)', re.I)
def test(s):
valid = set('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_/')
return set(s) <= valid
def truncate(msg):
"""Return the first 8 words of msg, appending "..." only if words were dropped."""
words = msg.split(" ")
if len(words) <= 8:
return " ".join(words)
return " ".join(words[:8]) + "..."
@hook.regex(*multitwitch_re)
def multitwitch_url(match):
usernames = match.group(3).split("/")
out = ""
for i in usernames:
if not test(i):
print "Not a valid username"
return None
if out == "":
out = twitch_lookup(i)
else:
out = out + " \x02|\x02 " + twitch_lookup(i)
return out
@hook.regex(*twitch_re)
def twitch_url(match):
bit = match.group(4).split("#")[0]
location = "/".join(bit.split("/")[1:])
if not test(location):
print "Not a valid username"
return None
return twitch_lookup(location)
@hook.command('twitchviewers')
@hook.command
def twviewers(inp):
"""twviewers <channel> -- Gets the viewer count for a Twitch channel."""
inp = inp.split("/")[-1]
if test(inp):
location = inp
else:
return "Not a valid channel name."
return twitch_lookup(location).split("(")[-1].split(")")[0].replace("Online now! ", "")
def twitch_lookup(location):
locsplit = location.split("/")
if len(locsplit) == 3:
channel = locsplit[0]
type = locsplit[1] # should be b or c
id = locsplit[2]
else:
channel = locsplit[0]
type = None
id = None
h = HTMLParser()
fmt = "{}: {} playing {} ({})" # Title: nickname playing Game (x views)
if type and id:
if type == "b": # I haven't found an API to retrieve broadcast info
soup = http.get_soup("http://twitch.tv/" + location)
title = soup.find('span', {'class': 'real_title js-title'}).text
playing = soup.find('a', {'class': 'game js-game'}).text
views = soup.find('span', {'id': 'views-count'}).text + " view"
views = views + "s" if not views[0:2] == "1 " else views
return h.unescape(fmt.format(title, channel, playing, views))
elif type == "c":
data = http.get_json("https://api.twitch.tv/kraken/videos/" + type + id)
title = data['title']
playing = data['game']
views = str(data['views']) + " view"
views = views + "s" if not views[0:2] == "1 " else views
return h.unescape(fmt.format(title, channel, playing, views))
else:
data = http.get_json("http://api.justin.tv/api/stream/list.json?channel=" + channel)
if data and len(data) >= 1:
data = data[0]
title = data['title']
playing = data['meta_game']
viewers = "\x033\x02Online now!\x02\x0f " + str(data["channel_count"]) + " viewer"
print viewers
viewers = viewers + "s" if not " 1 view" in viewers else viewers
print viewers
return h.unescape(fmt.format(title, channel, playing, viewers))
else:
try:
data = http.get_json("https://api.twitch.tv/kraken/channels/" + channel)
except Exception:
return
title = data['status']
playing = data['game']
viewers = "\x034\x02Offline\x02\x0f"
return h.unescape(fmt.format(title, channel, playing, viewers))


@ -1,178 +0,0 @@
import re
import random
from datetime import datetime
import tweepy
from util import hook, timesince
TWITTER_RE = (r"(?:(?:www.twitter.com|twitter.com)/(?:[-_a-zA-Z0-9]+)/status/)([0-9]+)", re.I)
def get_api(bot):
consumer_key = bot.config.get("api_keys", {}).get("twitter_consumer_key")
consumer_secret = bot.config.get("api_keys", {}).get("twitter_consumer_secret")
oauth_token = bot.config.get("api_keys", {}).get("twitter_access_token")
oauth_secret = bot.config.get("api_keys", {}).get("twitter_access_secret")
if not consumer_key:
return False
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(oauth_token, oauth_secret)
return tweepy.API(auth)
@hook.regex(*TWITTER_RE)
def twitter_url(match, bot=None):
# Find the tweet ID from the URL
tweet_id = match.group(1)
# Get the tweet using the tweepy API
api = get_api(bot)
if not api:
return
try:
tweet = api.get_status(tweet_id)
user = tweet.user
except tweepy.error.TweepError:
return
# Format and return the text of the tweet
text = " ".join(tweet.text.split())
if user.verified:
prefix = u"\u2713"
else:
prefix = ""
time = timesince.timesince(tweet.created_at, datetime.utcnow())
return u"{}@\x02{}\x02 ({}): {} ({} ago)".format(prefix, user.screen_name, user.name, text, time)
@hook.command("tw")
@hook.command("twatter")
@hook.command
def twitter(inp, bot=None):
"""twitter <user> [n] -- Gets last/[n]th tweet from <user>"""
api = get_api(bot)
if not api:
return "Error: No Twitter API details."
if re.match(r'^\d+$', inp):
# user is getting a tweet by id
try:
# get tweet by id
tweet = api.get_status(inp)
except tweepy.error.TweepError as e:
if e[0][0]['code'] == 34:
return "Could not find tweet."
else:
return u"Error {}: {}".format(e[0][0]['code'], e[0][0]['message'])
user = tweet.user
elif re.match(r'^\w{1,15}$', inp) or re.match(r'^\w{1,15}\s+\d+$', inp):
# user is getting a tweet by name
if inp.find(' ') == -1:
username = inp
tweet_number = 0
else:
username, tweet_number = inp.split()
tweet_number = int(tweet_number) - 1
if tweet_number > 300:
return "This command can only find the last \x02300\x02 tweets."
try:
# try to get user by username
user = api.get_user(username)
except tweepy.error.TweepError as e:
if e[0][0]['code'] == 34:
return "Could not find user."
else:
return u"Error {}: {}".format(e[0][0]['code'], e[0][0]['message'])
# get the users tweets
user_timeline = api.user_timeline(id=user.id, count=tweet_number + 1)
# if the timeline is empty, return an error
if not user_timeline:
return u"The user \x02{}\x02 has no tweets.".format(user.screen_name)
# grab the newest tweet from the users timeline
try:
tweet = user_timeline[tweet_number]
except IndexError:
tweet_count = len(user_timeline)
return u"The user \x02{}\x02 only has \x02{}\x02 tweets.".format(user.screen_name, tweet_count)
elif re.match(r'^#\w+$', inp):
# user is searching by hashtag
search = api.search(inp)
if not search:
return "No tweets found."
tweet = random.choice(search)
user = tweet.user
else:
# ???
return "Invalid Input"
# Format and return the text of the tweet
text = " ".join(tweet.text.split())
if user.verified:
prefix = u"\u2713"
else:
prefix = ""
time = timesince.timesince(tweet.created_at, datetime.utcnow())
return u"{}@\x02{}\x02 ({}): {} ({} ago)".format(prefix, user.screen_name, user.name, text, time)
@hook.command("twinfo")
@hook.command
def twuser(inp, bot=None):
"""twuser <user> -- Get info on the Twitter user <user>"""
api = get_api(bot)
if not api:
return "Error: No Twitter API details."
try:
# try to get user by username
user = api.get_user(inp)
except tweepy.error.TweepError as e:
if e[0][0]['code'] == 34:
return "Could not find user."
else:
return "Unknown error"
if user.verified:
prefix = u"\u2713"
else:
prefix = ""
if user.location:
loc_str = u" is located in \x02{}\x02 and".format(user.location)
else:
loc_str = ""
if user.description:
desc_str = u" The users description is \"{}\"".format(user.description)
else:
desc_str = ""
return u"{}@\x02{}\x02 ({}){} has \x02{:,}\x02 tweets and \x02{:,}\x02 followers.{}" \
"".format(prefix, user.screen_name, user.name, loc_str, user.statuses_count, user.followers_count,
desc_str)


@ -1,43 +0,0 @@
from git import Repo
from util import hook, web
@hook.command
def update(inp, bot=None):
"""update -- Pulls the latest changes to the bot from git."""
repo = Repo()
git = repo.git
try:
pull = git.pull()
except Exception as e:
return e
if "\n" in pull:
return web.haste(pull)
else:
return pull
@hook.command
def version(inp, bot=None):
"""version -- Reports the bot's current commit and whether it is up to date."""
repo = Repo()
# get origin and fetch it
origin = repo.remotes.origin
info = origin.fetch()
# get objects
head = repo.head
origin_head = info[0]
current_commit = head.commit
remote_commit = origin_head.commit
if current_commit == remote_commit:
in_sync = True
else:
in_sync = False
# output
return "Local \x02{}\x02 is at commit \x02{}\x02, remote \x02{}\x02 is at commit \x02{}\x02." \
" You {} running the latest version.".format(head, current_commit.name_rev[:7],
origin_head, remote_commit.name_rev[:7],
"are" if in_sync else "are not")


@ -1,66 +0,0 @@
import re
import random
from util import hook, http, text
base_url = 'http://api.urbandictionary.com/v0'
define_url = base_url + "/define"
random_url = base_url + "/random"
@hook.command('u', autohelp=False)
@hook.command(autohelp=False)
def urban(inp):
"""urban <phrase> [id] -- Looks up <phrase> on urbandictionary.com."""
if inp:
# clean and split the input
inp = inp.lower().strip()
parts = inp.split()
# if the last word is a number, set the ID to that number
if parts[-1].isdigit():
id_num = int(parts[-1])
# remove the ID from the input string
del parts[-1]
inp = " ".join(parts)
else:
id_num = 1
# fetch the definitions
page = http.get_json(define_url, term=inp, referer="http://m.urbandictionary.com")
if page['result_type'] == 'no_results':
return 'Not found.'
else:
# get a random definition!
page = http.get_json(random_url, referer="http://m.urbandictionary.com")
id_num = None
definitions = page['list']
if id_num:
# try getting the requested definition
try:
definition = definitions[id_num - 1]
def_text = " ".join(definition['definition'].split()) # remove excess spaces
def_text = text.truncate_str(def_text, 200)
except IndexError:
return 'Not found.'
url = definition['permalink']
output = u"[%i/%i] %s :: %s" % \
(id_num, len(definitions), def_text, url)
else:
definition = random.choice(definitions)
def_text = " ".join(definition['definition'].split()) # remove excess spaces
def_text = text.truncate_str(def_text, 200)
name = definition['word']
url = definition['permalink']
output = u"\x02{}\x02: {} :: {}".format(name, def_text, url)
return output


@ -1,197 +0,0 @@
import hashlib
import collections
import re
from util import hook, text
# variables
colors = collections.OrderedDict([
('red', '\x0304'),
('orange', '\x0307'),
('yellow', '\x0308'),
('green', '\x0309'),
('cyan', '\x0303'),
('ltblue', '\x0310'),
('rylblue', '\x0312'),
('blue', '\x0302'),
('magenta', '\x0306'),
('pink', '\x0313'),
('maroon', '\x0305')
])
# helper functions
strip_re = re.compile("(\x03|\x02|\x1f)(?:,?\d{1,2}(?:,\d{1,2})?)?", re.UNICODE)
def strip(string):
return strip_re.sub('', string)
# basic text tools
## TODO: make this capitalize sentences correctly
@hook.command("capitalise")
@hook.command
def capitalize(inp):
"""capitalize <string> -- Capitalizes <string>."""
return inp.capitalize()
@hook.command
def upper(inp):
"""upper <string> -- Convert string to uppercase."""
return inp.upper()
@hook.command
def lower(inp):
"""lower <string> -- Convert string to lowercase."""
return inp.lower()
@hook.command
def titlecase(inp):
"""title <string> -- Convert string to title case."""
return inp.title()
@hook.command
def swapcase(inp):
"""swapcase <string> -- Swaps the capitalization of <string>."""
return inp.swapcase()
# encoding
@hook.command
def rot13(inp):
"""rot13 <string> -- Encode <string> with rot13."""
return inp.encode('rot13')
@hook.command
def base64(inp):
"""base64 <string> -- Encode <string> with base64."""
return inp.encode('base64')
@hook.command
def unbase64(inp):
"""unbase64 <string> -- Decode <string> with base64."""
return inp.decode('base64')
@hook.command
def checkbase64(inp):
"""checkbase64 <string> -- Checks whether <string> is base64 encoded."""
try:
decoded = inp.decode('base64')
recoded = decoded.encode('base64').strip()
is_base64 = recoded == inp
except Exception:
return '"{}" is not base64 encoded'.format(inp)
if is_base64:
return '"{}" is base64 encoded'.format(recoded)
else:
return '"{}" is not base64 encoded'.format(inp)
@hook.command
def unescape(inp):
"""unescape <string> -- Unescapes <string>."""
try:
return inp.decode('unicode-escape')
except Exception as e:
return "Error: {}".format(e)
@hook.command
def escape(inp):
"""escape <string> -- Escapes <string>."""
try:
return inp.encode('unicode-escape')
except Exception as e:
return "Error: {}".format(e)
# length
@hook.command
def length(inp):
"""length <string> -- gets the length of <string>"""
return "The length of that string is {} characters.".format(len(inp))
# reverse
@hook.command
def reverse(inp):
"""reverse <string> -- reverses <string>."""
return inp[::-1]
# hashing
@hook.command("hash")
def hash_command(inp):
"""hash <string> -- Returns hashes of <string>."""
return ', '.join(x + ": " + getattr(hashlib, x)(inp).hexdigest()
for x in ['md5', 'sha1', 'sha256'])
# novelty
@hook.command
def munge(inp):
"""munge <text> -- Munges up <text>."""
return text.munge(inp)
# colors - based on code by Reece Selwood - <https://github.com/hitzler/homero>
@hook.command
def rainbow(inp):
inp = unicode(inp)
inp = strip(inp)
col = colors.items()
out = ""
l = len(colors)
for i, t in enumerate(inp):
if t == " ":
out += t
else:
out += col[i % l][1] + t
return out
@hook.command
def wrainbow(inp):
inp = unicode(inp)
col = colors.items()
inp = strip(inp).split(' ')
out = []
l = len(colors)
for i, t in enumerate(inp):
out.append(col[i % l][1] + t)
return ' '.join(out)
@hook.command
def usa(inp):
inp = strip(inp)
c = [colors['red'], '\x0300', colors['blue']]
l = len(c)
out = ''
for i, t in enumerate(inp):
out += c[i % l] + t
return out


@ -1,92 +0,0 @@
import json
import urllib2
from util import hook, http, web
def get_sound_info(game, search):
search = search.replace(" ", "+")
try:
data = http.get_json("http://p2sounds.blha303.com.au/search/%s/%s?format=json" % (game, search))
except urllib2.HTTPError as e:
return "Error: " + json.loads(e.read())["error"]
items = []
for item in data["items"]:
if "music" in game:
textsplit = item["text"].split('"')
text = ""
for i in xrange(len(textsplit)):
if i % 2 != 0 and i < 6:
if text:
text += " / " + textsplit[i]
else:
text = textsplit[i]
else:
text = item["text"]
items.append("{} - {} {}".format(item["who"],
text if len(text) < 325 else text[:325] + "...",
item["listen"]))
if len(items) == 1:
return items[0]
else:
return "{} (and {} others: {})".format(items[0], len(items) - 1, web.haste("\n".join(items)))
@hook.command
def portal2(inp):
"""portal2 <quote> - Look up Portal 2 quote.
Example: .portal2 demand to see life's manager"""
return get_sound_info("portal2", inp)
@hook.command
def portal2dlc(inp):
"""portal2dlc <quote> - Look up Portal 2 DLC quote.
Example: .portal2dlc these exhibits are interactive"""
return get_sound_info("portal2dlc1", inp)
@hook.command("portal2pti")
@hook.command
def portal2dlc2(inp):
"""portal2dlc2 <quote> - Look up Portal 2 Perpetual Testing Inititive quote.
Example: .portal2 Cave here."""
return get_sound_info("portal2dlc2", inp)
@hook.command
def portal2music(inp):
"""portal2music <title> - Look up Portal 2 music.
Example: .portal2music turret opera"""
return get_sound_info("portal2music", inp)
@hook.command('portal1')
@hook.command
def portal(inp):
"""portal <quote> - Look up Portal quote.
Example: .portal The last thing you want to do is hurt me"""
return get_sound_info("portal1", inp)
@hook.command('portal1music')
@hook.command
def portalmusic(inp):
"""portalmusic <title> - Look up Portal music.
Example: .portalmusic still alive"""
return get_sound_info("portal1music", inp)
@hook.command('tf2sound')
@hook.command
def tf2(inp):
"""tf2 [who - ]<quote> - Look up TF2 quote.
Example: .tf2 may i borrow your earpiece"""
return get_sound_info("tf2", inp)
@hook.command
def tf2music(inp):
"""tf2music title - Look up TF2 music lyrics.
Example: .tf2music rocket jump waltz"""
return get_sound_info("tf2music", inp)


@ -1,20 +0,0 @@
from util import hook, http, timeformat
@hook.regex(r'vimeo.com/([0-9]+)')
def vimeo_url(match):
"""vimeo <url> -- returns information on the Vimeo video at <url>"""
info = http.get_json('http://vimeo.com/api/v2/video/%s.json'
% match.group(1))
if info:
info[0]["duration"] = timeformat.format_time(info[0]["duration"])
info[0]["stats_number_of_likes"] = format(
info[0]["stats_number_of_likes"], ",d")
info[0]["stats_number_of_plays"] = format(
info[0]["stats_number_of_plays"], ",d")
return ("\x02%(title)s\x02 - length \x02%(duration)s\x02 - "
"\x02%(stats_number_of_likes)s\x02 likes - "
"\x02%(stats_number_of_plays)s\x02 plays - "
"\x02%(user_name)s\x02 on \x02%(upload_date)s\x02"
% info[0])


@ -1,99 +0,0 @@
from util import hook, http, web
base_url = "http://api.wunderground.com/api/{}/{}/q/{}.json"
@hook.command(autohelp=False)
def weather(inp, reply=None, db=None, nick=None, bot=None, notice=None):
"""weather <location> [dontsave] -- Gets weather data
for <location> from Wunderground."""
api_key = bot.config.get("api_keys", {}).get("wunderground")
if not api_key:
return "Error: No wunderground API details."
# initialise weather DB
db.execute("create table if not exists weather(nick primary key, loc)")
# if there is no input, try getting the users last location from the DB
if not inp:
location = db.execute("select loc from weather where nick=lower(?)",
[nick]).fetchone()
if not location:
# no location saved in the database, send the user help text
notice(weather.__doc__)
return
loc = location[0]
# no need to save a location, we already have it
dontsave = True
else:
# see if the input ends with "dontsave"
dontsave = inp.endswith(" dontsave")
# remove "dontsave" from the input string after checking for it
if dontsave:
loc = inp[:-9].strip().lower()
else:
loc = inp
location = http.quote_plus(loc)
request_url = base_url.format(api_key, "geolookup/forecast/conditions", location)
response = http.get_json(request_url)
if 'location' not in response:
try:
location_id = response['response']['results'][0]['zmw']
except KeyError:
return "Could not get weather for that location."
# get the weather again, using the closest match
request_url = base_url.format(api_key, "geolookup/forecast/conditions", "zmw:" + location_id)
response = http.get_json(request_url)
if response['location']['state']:
place_name = "\x02{}\x02, \x02{}\x02 (\x02{}\x02)".format(response['location']['city'],
response['location']['state'],
response['location']['country'])
else:
place_name = "\x02{}\x02 (\x02{}\x02)".format(response['location']['city'],
response['location']['country'])
forecast_today = response["forecast"]["simpleforecast"]["forecastday"][0]
forecast_tomorrow = response["forecast"]["simpleforecast"]["forecastday"][1]
# put all the stuff we want to use in a dictionary for easy formatting of the output
weather_data = {
"place": place_name,
"conditions": response['current_observation']['weather'],
"temp_f": response['current_observation']['temp_f'],
"temp_c": response['current_observation']['temp_c'],
"humidity": response['current_observation']['relative_humidity'],
"wind_kph": response['current_observation']['wind_kph'],
"wind_mph": response['current_observation']['wind_mph'],
"wind_direction": response['current_observation']['wind_dir'],
"today_conditions": forecast_today['conditions'],
"today_high_f": forecast_today['high']['fahrenheit'],
"today_high_c": forecast_today['high']['celsius'],
"today_low_f": forecast_today['low']['fahrenheit'],
"today_low_c": forecast_today['low']['celsius'],
"tomorrow_conditions": forecast_tomorrow['conditions'],
"tomorrow_high_f": forecast_tomorrow['high']['fahrenheit'],
"tomorrow_high_c": forecast_tomorrow['high']['celsius'],
"tomorrow_low_f": forecast_tomorrow['low']['fahrenheit'],
"tomorrow_low_c": forecast_tomorrow['low']['celsius'],
"url": web.isgd(response["current_observation"]['forecast_url'] + "?apiref=e535207ff4757b18")
}
reply("{place} - \x02Current:\x02 {conditions}, {temp_f}F/{temp_c}C, {humidity}, "
"Wind: {wind_kph}KPH/{wind_mph}MPH {wind_direction}, \x02Today:\x02 {today_conditions}, "
"High: {today_high_f}F/{today_high_c}C, Low: {today_low_f}F/{today_low_c}C. "
"\x02Tomorrow:\x02 {tomorrow_conditions}, High: {tomorrow_high_f}F/{tomorrow_high_c}C, "
"Low: {tomorrow_low_f}F/{tomorrow_low_c}C - {url}".format(**weather_data))
    if location and not dontsave:
        db.execute("insert or replace into weather(nick, loc) values (?,?)",
                   (nick.lower(), loc))
db.commit()
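A minimal Python 3 sketch of the " dontsave" suffix handling used by the weather command above (the plugin itself is Python 2, and the helper name `parse_location` is hypothetical, not part of the plugin):

```python
def parse_location(inp):
    """Return (location, dontsave) parsed from the raw command input."""
    # a trailing " dontsave" means: do not remember this location for the user
    dontsave = inp.endswith(" dontsave")
    if dontsave:
        # strip the " dontsave" suffix, then tidy the remainder
        loc = inp[:-len(" dontsave")].strip().lower()
    else:
        loc = inp
    return loc, dontsave
```

Using `len(" dontsave")` rather than the bare `-9` slice in the plugin keeps the slice length in sync with the literal if it ever changes.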

View File

@ -1,43 +0,0 @@
import re
from util import hook, http
xkcd_re = (r'(.*:)//(www.xkcd.com|xkcd.com)(.*)', re.I)
months = {1: 'January', 2: 'February', 3: 'March', 4: 'April', 5: 'May', 6: 'June', 7: 'July', 8: 'August',
9: 'September', 10: 'October', 11: 'November', 12: 'December'}
def xkcd_info(xkcd_id, url=False):
""" takes an XKCD entry ID and returns a formatted string """
data = http.get_json("http://www.xkcd.com/" + xkcd_id + "/info.0.json")
date = "%s %s %s" % (data['day'], months[int(data['month'])], data['year'])
if url:
url = " | http://xkcd.com/" + xkcd_id.replace("/", "")
return "xkcd: \x02%s\x02 (%s)%s" % (data['title'], date, url if url else "")
def xkcd_search(term):
search_term = http.quote_plus(term)
soup = http.get_soup("http://www.ohnorobot.com/index.pl?s={}&Search=Search&"
"comic=56&e=0&n=0&b=0&m=0&d=0&t=0".format(search_term))
result = soup.find('li')
if result:
url = result.find('div', {'class': 'tinylink'}).text
        xkcd_id = url[:-1].split("/")[-1]
        return xkcd_info(xkcd_id, url=True)
else:
return "No results found!"
@hook.regex(*xkcd_re)
def xkcd_url(match):
xkcd_id = match.group(3).split(" ")[0].split("/")[1]
return xkcd_info(xkcd_id)
@hook.command
def xkcd(inp):
"""xkcd <search term> - Search for xkcd comic matching <search term>"""
return xkcd_search(inp)
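The link-matching logic above can be exercised on its own; here is a small Python 3 sketch using the same regex the plugin registers with `@hook.regex` (the `extract_id` name is illustrative, not part of the plugin):

```python
import re

# same pattern the plugin registers for incoming links
xkcd_re = re.compile(r'(.*:)//(www.xkcd.com|xkcd.com)(.*)', re.I)

def extract_id(url):
    """Pull the comic ID out of an xkcd URL, or return None on no match."""
    match = xkcd_re.match(url)
    if not match:
        return None
    # group(3) is the path, e.g. "/1234/"; the ID is the first path segment
    return match.group(3).split(" ")[0].split("/")[1]
```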

View File

@ -1,136 +0,0 @@
import re
import time
from util import hook, http, timeformat
youtube_re = (r'(?:youtube.*?(?:v=|/v/)|youtu\.be/|yooouuutuuube.*?id=)'
'([-_a-zA-Z0-9]+)', re.I)
base_url = 'http://gdata.youtube.com/feeds/api/'
api_url = base_url + 'videos/{}?v=2&alt=jsonc'
search_api_url = base_url + 'videos?v=2&alt=jsonc&max-results=1'
video_url = "http://youtu.be/%s"
def plural(num=0, text=''):
return "{:,} {}{}".format(num, text, "s"[num == 1:])
def get_video_description(video_id):
request = http.get_json(api_url.format(video_id))
if request.get('error'):
return
data = request['data']
out = u'\x02{}\x02'.format(data['title'])
if not data.get('duration'):
return out
length = data['duration']
out += u' - length \x02{}\x02'.format(timeformat.format_time(length, simple=True))
if 'ratingCount' in data:
likes = plural(int(data['likeCount']), "like")
dislikes = plural(data['ratingCount'] - int(data['likeCount']), "dislike")
percent = 100 * float(data['likeCount']) / float(data['ratingCount'])
out += u' - {}, {} (\x02{:.1f}\x02%)'.format(likes,
dislikes, percent)
if 'viewCount' in data:
views = data['viewCount']
out += u' - \x02{:,}\x02 view{}'.format(views, "s"[views == 1:])
try:
uploader = http.get_json(base_url + "users/{}?alt=json".format(data["uploader"]))["entry"]["author"][0]["name"][
"$t"]
    except Exception:
uploader = data["uploader"]
upload_time = time.strptime(data['uploaded'], "%Y-%m-%dT%H:%M:%S.000Z")
out += u' - \x02{}\x02 on \x02{}\x02'.format(uploader,
time.strftime("%Y.%m.%d", upload_time))
if 'contentRating' in data:
out += u' - \x034NSFW\x02'
return out
@hook.regex(*youtube_re)
def youtube_url(match):
return get_video_description(match.group(1))
@hook.command('you')
@hook.command('yt')
@hook.command('y')
@hook.command
def youtube(inp):
"""youtube <query> -- Returns the first YouTube search result for <query>."""
request = http.get_json(search_api_url, q=inp)
if 'error' in request:
return 'error performing search'
if request['data']['totalItems'] == 0:
return 'no results found'
video_id = request['data']['items'][0]['id']
return get_video_description(video_id) + u" - " + video_url % video_id
@hook.command('ytime')
@hook.command
def youtime(inp):
"""youtime <query> -- Gets the total run time of the first YouTube search result for <query>."""
request = http.get_json(search_api_url, q=inp)
if 'error' in request:
return 'error performing search'
if request['data']['totalItems'] == 0:
return 'no results found'
video_id = request['data']['items'][0]['id']
request = http.get_json(api_url.format(video_id))
if request.get('error'):
return
data = request['data']
if not data.get('duration'):
return
length = data['duration']
views = data['viewCount']
total = int(length * views)
length_text = timeformat.format_time(length, simple=True)
total_text = timeformat.format_time(total, accuracy=8)
return u'The video \x02{}\x02 has a length of {} and has been viewed {:,} times for ' \
u'a total run time of {}!'.format(data['title'], length_text, views,
total_text)
ytpl_re = (r'(.*:)//(www.youtube.com/playlist|youtube.com/playlist)(:[0-9]+)?(.*)', re.I)
@hook.regex(*ytpl_re)
def ytplaylist_url(match):
location = match.group(4).split("=")[-1]
try:
soup = http.get_soup("https://www.youtube.com/playlist?list=" + location)
except Exception:
return "\x034\x02Invalid response."
title = soup.find('title').text.split('-')[0].strip()
author = soup.find('img', {'class': 'channel-header-profile-image'})['title']
num_videos = soup.find('ul', {'class': 'header-stats'}).findAll('li')[0].text.split(' ')[0]
views = soup.find('ul', {'class': 'header-stats'}).findAll('li')[1].text.split(' ')[0]
return u"\x02%s\x02 - \x02%s\x02 views - \x02%s\x02 videos - \x02%s\x02" % (title, views, num_videos, author)
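The `plural()` helper above relies on a slicing idiom that is easy to misread; a standalone Python 3 sketch of the same function:

```python
def plural(num=0, text=''):
    # "s"[num == 1:] uses the boolean as a slice start index, so the
    # trailing "s" disappears exactly when num == 1; {:,} adds thousands
    # separators to the count
    return "{:,} {}{}".format(num, text, "s"[num == 1:])
```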

View File

@ -1,43 +0,0 @@
Behold, mortal, the origins of Beautiful Soup...
================================================
Leonard Richardson is the primary programmer.
Aaron DeVore is awesome.
Mark Pilgrim provided the encoding detection code that forms the base
of UnicodeDammit.
Thomas Kluyver and Ezio Melotti finished the work of getting Beautiful
Soup 4 working under Python 3.
Simon Willison wrote soupselect, which was used to make Beautiful Soup
support CSS selectors.
Sam Ruby helped with a lot of edge cases.
Jonathan Ellis was awarded the prestigious Beau Potage D'Or for his
work in solving the nestable tags conundrum.
An incomplete list of people have contributed patches to Beautiful
Soup:
Istvan Albert, Andrew Lin, Anthony Baxter, Andrew Boyko, Tony Chang,
Zephyr Fang, Fuzzy, Roman Gaufman, Yoni Gilad, Richie Hindle, Peteris
Krumins, Kent Johnson, Ben Last, Robert Leftwich, Staffan Malmgren,
Ksenia Marasanova, JP Moins, Adam Monsen, John Nagle, "Jon", Ed
Oskiewicz, Greg Phillips, Giles Radford, Arthur Rudolph, Marko
Samastur, Jouni Seppänen, Alexander Schmolck, Andy Theyers, Glyn
Webster, Paul Wright, Danny Yoo
An incomplete list of people who made suggestions or found bugs or
found ways to break Beautiful Soup:
Hanno Böck, Matteo Bertini, Chris Curvey, Simon Cusack, Bruce Eckel,
Matt Ernst, Michael Foord, Tom Harris, Bill de hOra, Donald Howes,
Matt Patterson, Scott Roberts, Steve Strassmann, Mike Williams,
warchild at redho dot com, Sami Kuisma, Carlos Rocha, Bob Hutchison,
Joren Mc, Michal Migurski, John Kleven, Tim Heaney, Tripp Lilley, Ed
Summers, Dennis Sutch, Chris Smith, Aaron Sweep^W Swartz, Stuart
Turner, Greg Edwards, Kevin J Kalupson, Nikos Kouremenos, Artur de
Sousa Rocha, Yichun Wei, Per Vognsen

View File

@ -17,8 +17,8 @@ http://www.crummy.com/software/BeautifulSoup/bs4/doc/
"""
__author__ = "Leonard Richardson (leonardr@segfault.org)"
__version__ = "4.2.1"
__copyright__ = "Copyright (c) 2004-2013 Leonard Richardson"
__version__ = "4.1.3"
__copyright__ = "Copyright (c) 2004-2012 Leonard Richardson"
__license__ = "MIT"
__all__ = ['BeautifulSoup']
@ -201,9 +201,9 @@ class BeautifulSoup(Tag):
"""Create a new tag associated with this soup."""
return Tag(None, self.builder, name, namespace, nsprefix, attrs)
def new_string(self, s, subclass=NavigableString):
def new_string(self, s):
"""Create a new NavigableString associated with this soup."""
navigable = subclass(s)
navigable = NavigableString(s)
navigable.setup()
return navigable
@ -245,15 +245,13 @@ class BeautifulSoup(Tag):
o = containerClass(currentData)
self.object_was_parsed(o)
def object_was_parsed(self, o, parent=None, most_recent_element=None):
def object_was_parsed(self, o):
"""Add an object to the parse tree."""
parent = parent or self.currentTag
most_recent_element = most_recent_element or self._most_recent_element
o.setup(parent, most_recent_element)
if most_recent_element is not None:
most_recent_element.next_element = o
self._most_recent_element = o
parent.contents.append(o)
o.setup(self.currentTag, self.previous_element)
if self.previous_element:
self.previous_element.next_element = o
self.previous_element = o
self.currentTag.contents.append(o)
def _popToTag(self, name, nsprefix=None, inclusivePop=True):
"""Pops the tag stack up to and including the most recent
@ -297,12 +295,12 @@ class BeautifulSoup(Tag):
return None
tag = Tag(self, self.builder, name, namespace, nsprefix, attrs,
self.currentTag, self._most_recent_element)
self.currentTag, self.previous_element)
if tag is None:
return tag
if self._most_recent_element:
self._most_recent_element.next_element = tag
self._most_recent_element = tag
if self.previous_element:
self.previous_element.next_element = tag
self.previous_element = tag
self.pushTag(tag)
return tag
@ -335,10 +333,6 @@ class BeautifulSoup(Tag):
return prefix + super(BeautifulSoup, self).decode(
indent_level, eventual_encoding, formatter)
# Alias to make it easier to type import: 'from bs4 import _soup'
_s = BeautifulSoup
_soup = BeautifulSoup
class BeautifulStoneSoup(BeautifulSoup):
"""Deprecated interface to an XML parser."""

View File

@ -152,7 +152,7 @@ class TreeBuilder(object):
tag_specific = self.cdata_list_attributes.get(
tag_name.lower(), [])
for cdata_list_attr in itertools.chain(universal, tag_specific):
if cdata_list_attr in attrs:
if cdata_list_attr in dict(attrs):
# Basically, we have a "class" attribute whose
# value is a whitespace-separated list of CSS
# classes. Split it into a list.

View File

@ -131,9 +131,9 @@ class Element(html5lib.treebuilders._base.Node):
old_element = self.element.contents[-1]
new_element = self.soup.new_string(old_element + node.element)
old_element.replace_with(new_element)
self.soup._most_recent_element = new_element
else:
self.soup.object_was_parsed(node.element, parent=self.element)
self.element.append(node.element)
node.parent = self
def getAttributes(self):
return AttrList(self.element)

View File

@ -58,8 +58,6 @@ class BeautifulSoupHTMLParser(HTMLParser):
# it's fixed.
if name.startswith('x'):
real_name = int(name.lstrip('x'), 16)
elif name.startswith('X'):
real_name = int(name.lstrip('X'), 16)
else:
real_name = int(name)
@ -87,9 +85,6 @@ class BeautifulSoupHTMLParser(HTMLParser):
self.soup.endData()
if data.startswith("DOCTYPE "):
data = data[len("DOCTYPE "):]
elif data == 'DOCTYPE':
# i.e. "<!DOCTYPE>"
data = ''
self.soup.handle_data(data)
self.soup.endData(Doctype)

View File

@ -3,7 +3,6 @@ __all__ = [
'LXMLTreeBuilder',
]
from io import BytesIO
from StringIO import StringIO
import collections
from lxml import etree
@ -29,10 +28,6 @@ class LXMLTreeBuilderForXML(TreeBuilder):
CHUNK_SIZE = 512
# This namespace mapping is specified in the XML Namespace
# standard.
DEFAULT_NSMAPS = {'http://www.w3.org/XML/1998/namespace' : "xml"}
@property
def default_parser(self):
# This can either return a parser object or a class, which
@ -50,7 +45,7 @@ class LXMLTreeBuilderForXML(TreeBuilder):
parser = parser(target=self, strip_cdata=False)
self.parser = parser
self.soup = None
self.nsmaps = [self.DEFAULT_NSMAPS]
self.nsmaps = None
def _getNsTag(self, tag):
# Split the namespace URL out of a fully-qualified lxml tag
@ -76,9 +71,7 @@ class LXMLTreeBuilderForXML(TreeBuilder):
dammit.contains_replacement_characters)
def feed(self, markup):
if isinstance(markup, bytes):
markup = BytesIO(markup)
elif isinstance(markup, unicode):
if isinstance(markup, basestring):
markup = StringIO(markup)
# Call feed() at least once, even if the markup is empty,
# or the parser won't be initialized.
@ -92,20 +85,23 @@ class LXMLTreeBuilderForXML(TreeBuilder):
self.parser.close()
def close(self):
self.nsmaps = [self.DEFAULT_NSMAPS]
self.nsmaps = None
def start(self, name, attrs, nsmap={}):
# Make sure attrs is a mutable dict--lxml may send an immutable dictproxy.
attrs = dict(attrs)
nsprefix = None
# Invert each namespace map as it comes in.
if len(self.nsmaps) > 1:
# There are no new namespaces for this tag, but
# non-default namespaces are in play, so we need a
# separate tag stack to know when they end.
if len(nsmap) == 0 and self.nsmaps != None:
# There are no new namespaces for this tag, but namespaces
# are in play, so we need a separate tag stack to know
# when they end.
self.nsmaps.append(None)
elif len(nsmap) > 0:
# A new namespace mapping has come into play.
if self.nsmaps is None:
self.nsmaps = []
inverted_nsmap = dict((value, key) for key, value in nsmap.items())
self.nsmaps.append(inverted_nsmap)
# Also treat the namespace mapping as a set of attributes on the
@ -116,19 +112,20 @@ class LXMLTreeBuilderForXML(TreeBuilder):
"xmlns", prefix, "http://www.w3.org/2000/xmlns/")
attrs[attribute] = namespace
# Namespaces are in play. Find any attributes that came in
# from lxml with namespaces attached to their names, and
# turn then into NamespacedAttribute objects.
new_attrs = {}
for attr, value in attrs.items():
namespace, attr = self._getNsTag(attr)
if namespace is None:
new_attrs[attr] = value
else:
nsprefix = self._prefix_for_namespace(namespace)
attr = NamespacedAttribute(nsprefix, attr, namespace)
new_attrs[attr] = value
attrs = new_attrs
if self.nsmaps is not None and len(self.nsmaps) > 0:
# Namespaces are in play. Find any attributes that came in
# from lxml with namespaces attached to their names, and
# turn then into NamespacedAttribute objects.
new_attrs = {}
for attr, value in attrs.items():
namespace, attr = self._getNsTag(attr)
if namespace is None:
new_attrs[attr] = value
else:
nsprefix = self._prefix_for_namespace(namespace)
attr = NamespacedAttribute(nsprefix, attr, namespace)
new_attrs[attr] = value
attrs = new_attrs
namespace, name = self._getNsTag(name)
nsprefix = self._prefix_for_namespace(namespace)
@ -141,7 +138,6 @@ class LXMLTreeBuilderForXML(TreeBuilder):
for inverted_nsmap in reversed(self.nsmaps):
if inverted_nsmap is not None and namespace in inverted_nsmap:
return inverted_nsmap[namespace]
return None
def end(self, name):
self.soup.endData()
@ -154,10 +150,14 @@ class LXMLTreeBuilderForXML(TreeBuilder):
nsprefix = inverted_nsmap[namespace]
break
self.soup.handle_endtag(name, nsprefix)
if len(self.nsmaps) > 1:
if self.nsmaps != None:
# This tag, or one of its parents, introduced a namespace
# mapping, so pop it off the stack.
self.nsmaps.pop()
if len(self.nsmaps) == 0:
# Namespaces are no longer in play, so don't bother keeping
# track of the namespace stack.
self.nsmaps = None
def pi(self, target, data):
pass

View File

@ -81,8 +81,6 @@ class EntitySubstitution(object):
"&(?!#\d+;|#x[0-9a-fA-F]+;|\w+;)"
")")
AMPERSAND_OR_BRACKET = re.compile("([<>&])")
@classmethod
def _substitute_html_entity(cls, matchobj):
entity = cls.CHARACTER_TO_HTML_ENTITY.get(matchobj.group(0))
@ -136,28 +134,6 @@ class EntitySubstitution(object):
def substitute_xml(cls, value, make_quoted_attribute=False):
"""Substitute XML entities for special XML characters.
:param value: A string to be substituted. The less-than sign
will become &lt;, the greater-than sign will become &gt;,
and any ampersands will become &amp;. If you want ampersands
that appear to be part of an entity definition to be left
alone, use substitute_xml_containing_entities() instead.
:param make_quoted_attribute: If True, then the string will be
quoted, as befits an attribute value.
"""
# Escape angle brackets and ampersands.
value = cls.AMPERSAND_OR_BRACKET.sub(
cls._substitute_xml_entity, value)
if make_quoted_attribute:
value = cls.quoted_attribute_value(value)
return value
@classmethod
def substitute_xml_containing_entities(
cls, value, make_quoted_attribute=False):
"""Substitute XML entities for special XML characters.
:param value: A string to be substituted. The less-than sign will
become &lt;, the greater-than sign will become &gt;, and any
ampersands that are not part of an entity defition will
@ -175,7 +151,6 @@ class EntitySubstitution(object):
value = cls.quoted_attribute_value(value)
return value
@classmethod
def substitute_html(cls, s):
"""Replace certain Unicode characters with named HTML entities.
@ -298,6 +273,7 @@ class UnicodeDammit:
return None
self.tried_encodings.append((proposed, errors))
markup = self.markup
# Convert smart quotes to HTML if coming from an encoding
# that might have them.
if (self.smart_quotes_to is not None

View File

@ -1,178 +0,0 @@
"""Diagnostic functions, mainly for use when doing tech support."""
from StringIO import StringIO
from HTMLParser import HTMLParser
from bs4 import BeautifulSoup, __version__
from bs4.builder import builder_registry
import os
import random
import time
import traceback
import sys
import cProfile
def diagnose(data):
"""Diagnostic suite for isolating common problems."""
print "Diagnostic running on Beautiful Soup %s" % __version__
print "Python version %s" % sys.version
basic_parsers = ["html.parser", "html5lib", "lxml"]
for name in basic_parsers:
for builder in builder_registry.builders:
if name in builder.features:
break
else:
basic_parsers.remove(name)
print (
"I noticed that %s is not installed. Installing it may help." %
name)
if 'lxml' in basic_parsers:
basic_parsers.append(["lxml", "xml"])
from lxml import etree
print "Found lxml version %s" % ".".join(map(str,etree.LXML_VERSION))
if 'html5lib' in basic_parsers:
import html5lib
print "Found html5lib version %s" % html5lib.__version__
if hasattr(data, 'read'):
data = data.read()
elif os.path.exists(data):
print '"%s" looks like a filename. Reading data from the file.' % data
data = open(data).read()
elif data.startswith("http:") or data.startswith("https:"):
print '"%s" looks like a URL. Beautiful Soup is not an HTTP client.' % data
print "You need to use some other library to get the document behind the URL, and feed that document to Beautiful Soup."
return
print
for parser in basic_parsers:
print "Trying to parse your markup with %s" % parser
success = False
try:
soup = BeautifulSoup(data, parser)
success = True
except Exception, e:
print "%s could not parse the markup." % parser
traceback.print_exc()
if success:
print "Here's what %s did with the markup:" % parser
print soup.prettify()
print "-" * 80
def lxml_trace(data, html=True):
"""Print out the lxml events that occur during parsing.
This lets you see how lxml parses a document when no Beautiful
Soup code is running.
"""
from lxml import etree
for event, element in etree.iterparse(StringIO(data), html=html):
print("%s, %4s, %s" % (event, element.tag, element.text))
class AnnouncingParser(HTMLParser):
"""Announces HTMLParser parse events, without doing anything else."""
def _p(self, s):
print(s)
def handle_starttag(self, name, attrs):
self._p("%s START" % name)
def handle_endtag(self, name):
self._p("%s END" % name)
def handle_data(self, data):
self._p("%s DATA" % data)
def handle_charref(self, name):
self._p("%s CHARREF" % name)
def handle_entityref(self, name):
self._p("%s ENTITYREF" % name)
def handle_comment(self, data):
self._p("%s COMMENT" % data)
def handle_decl(self, data):
self._p("%s DECL" % data)
def unknown_decl(self, data):
self._p("%s UNKNOWN-DECL" % data)
def handle_pi(self, data):
self._p("%s PI" % data)
def htmlparser_trace(data):
"""Print out the HTMLParser events that occur during parsing.
This lets you see how HTMLParser parses a document when no
Beautiful Soup code is running.
"""
parser = AnnouncingParser()
parser.feed(data)
_vowels = "aeiou"
_consonants = "bcdfghjklmnpqrstvwxyz"
def rword(length=5):
"Generate a random word-like string."
s = ''
for i in range(length):
if i % 2 == 0:
t = _consonants
else:
t = _vowels
s += random.choice(t)
return s
def rsentence(length=4):
"Generate a random sentence-like string."
return " ".join(rword(random.randint(4,9)) for i in range(length))
def rdoc(num_elements=1000):
"""Randomly generate an invalid HTML document."""
tag_names = ['p', 'div', 'span', 'i', 'b', 'script', 'table']
elements = []
for i in range(num_elements):
choice = random.randint(0,3)
if choice == 0:
# New tag.
tag_name = random.choice(tag_names)
elements.append("<%s>" % tag_name)
elif choice == 1:
elements.append(rsentence(random.randint(1,4)))
elif choice == 2:
# Close a tag.
tag_name = random.choice(tag_names)
elements.append("</%s>" % tag_name)
return "<html>" + "\n".join(elements) + "</html>"
def benchmark_parsers(num_elements=100000):
"""Very basic head-to-head performance benchmark."""
print "Comparative parser benchmark on Beautiful Soup %s" % __version__
data = rdoc(num_elements)
print "Generated a large invalid HTML document (%d bytes)." % len(data)
for parser in ["lxml", ["lxml", "html"], "html5lib", "html.parser"]:
success = False
try:
a = time.time()
soup = BeautifulSoup(data, parser)
b = time.time()
success = True
except Exception, e:
print "%s could not parse the markup." % parser
traceback.print_exc()
if success:
print "BS4+%s parsed the markup in %.2fs." % (parser, b-a)
from lxml import etree
a = time.time()
etree.HTML(data)
b = time.time()
print "Raw lxml parsed the markup in %.2fs." % (b-a)
if __name__ == '__main__':
diagnose(sys.stdin.read())

View File

@ -26,9 +26,6 @@ class NamespacedAttribute(unicode):
def __new__(cls, prefix, name, namespace=None):
if name is None:
obj = unicode.__new__(cls, prefix)
elif prefix is None:
# Not really namespaced.
obj = unicode.__new__(cls, name)
else:
obj = unicode.__new__(cls, prefix + ":" + name)
obj.prefix = prefix
@ -81,40 +78,6 @@ class ContentMetaAttributeValue(AttributeValueWithCharsetSubstitution):
return match.group(1) + encoding
return self.CHARSET_RE.sub(rewrite, self.original_value)
class HTMLAwareEntitySubstitution(EntitySubstitution):
"""Entity substitution rules that are aware of some HTML quirks.
Specifically, the contents of <script> and <style> tags should not
undergo entity substitution.
Incoming NavigableString objects are checked to see if they're the
direct children of a <script> or <style> tag.
"""
cdata_containing_tags = set(["script", "style"])
preformatted_tags = set(["pre"])
@classmethod
def _substitute_if_appropriate(cls, ns, f):
if (isinstance(ns, NavigableString)
and ns.parent is not None
and ns.parent.name in cls.cdata_containing_tags):
# Do nothing.
return ns
# Substitute.
return f(ns)
@classmethod
def substitute_html(cls, ns):
return cls._substitute_if_appropriate(
ns, EntitySubstitution.substitute_html)
@classmethod
def substitute_xml(cls, ns):
return cls._substitute_if_appropriate(
ns, EntitySubstitution.substitute_xml)
class PageElement(object):
"""Contains the navigational information for some part of the page
@ -131,60 +94,25 @@ class PageElement(object):
# converted to entities. This is not recommended, but it's
# faster than "minimal".
# A function - This function will be called on every string that
# needs to undergo entity substitution.
#
# In an HTML document, the default "html" and "minimal" functions
# will leave the contents of <script> and <style> tags alone. For
# an XML document, all tags will be given the same treatment.
HTML_FORMATTERS = {
"html" : HTMLAwareEntitySubstitution.substitute_html,
"minimal" : HTMLAwareEntitySubstitution.substitute_xml,
None : None
}
XML_FORMATTERS = {
# needs to undergo entity substition
FORMATTERS = {
"html" : EntitySubstitution.substitute_html,
"minimal" : EntitySubstitution.substitute_xml,
None : None
}
@classmethod
def format_string(self, s, formatter='minimal'):
"""Format the given string using the given formatter."""
if not callable(formatter):
formatter = self._formatter_for_name(formatter)
formatter = self.FORMATTERS.get(
formatter, EntitySubstitution.substitute_xml)
if formatter is None:
output = s
else:
output = formatter(s)
return output
@property
def _is_xml(self):
"""Is this element part of an XML tree or an HTML tree?
This is used when mapping a formatter name ("minimal") to an
appropriate function (one that performs entity-substitution on
the contents of <script> and <style> tags, or not). It's
inefficient, but it should be called very rarely.
"""
if self.parent is None:
# This is the top-level object. It should have .is_xml set
# from tree creation. If not, take a guess--BS is usually
# used on HTML markup.
return getattr(self, 'is_xml', False)
return self.parent._is_xml
def _formatter_for_name(self, name):
"Look up a formatter function based on its name and the tree."
if self._is_xml:
return self.XML_FORMATTERS.get(
name, EntitySubstitution.substitute_xml)
else:
return self.HTML_FORMATTERS.get(
name, HTMLAwareEntitySubstitution.substitute_xml)
def setup(self, parent=None, previous_element=None):
"""Sets up the initial relations between this element and
other elements."""
@ -438,7 +366,7 @@ class PageElement(object):
# NOTE: We can't use _find_one because findParents takes a different
# set of arguments.
r = None
l = self.find_parents(name, attrs, 1, **kwargs)
l = self.find_parents(name, attrs, 1)
if l:
r = l[0]
return r
@ -567,14 +495,6 @@ class PageElement(object):
value =" ".join(value)
return value
def _tag_name_matches_and(self, function, tag_name):
if not tag_name:
return function
else:
def _match(tag):
return tag.name == tag_name and function(tag)
return _match
def _attribute_checker(self, operator, attribute, value=''):
"""Create a function that performs a CSS selector operation.
@ -616,6 +536,87 @@ class PageElement(object):
else:
return lambda el: el.has_attr(attribute)
def select(self, selector):
"""Perform a CSS selection operation on the current element."""
tokens = selector.split()
current_context = [self]
for index, token in enumerate(tokens):
if tokens[index - 1] == '>':
# already found direct descendants in last step. skip this
# step.
continue
m = self.attribselect_re.match(token)
if m is not None:
# Attribute selector
tag, attribute, operator, value = m.groups()
if not tag:
tag = True
checker = self._attribute_checker(operator, attribute, value)
found = []
for context in current_context:
found.extend(
[el for el in context.find_all(tag) if checker(el)])
current_context = found
continue
if '#' in token:
# ID selector
tag, id = token.split('#', 1)
if tag == "":
tag = True
el = current_context[0].find(tag, {'id': id})
if el is None:
return [] # No match
current_context = [el]
continue
if '.' in token:
# Class selector
tag_name, klass = token.split('.', 1)
if not tag_name:
tag_name = True
classes = set(klass.split('.'))
found = []
def classes_match(tag):
if tag_name is not True and tag.name != tag_name:
return False
if not tag.has_attr('class'):
return False
return classes.issubset(tag['class'])
for context in current_context:
found.extend(context.find_all(classes_match))
current_context = found
continue
if token == '*':
# Star selector
found = []
for context in current_context:
found.extend(context.findAll(True))
current_context = found
continue
if token == '>':
# Child selector
tag = tokens[index + 1]
if not tag:
tag = True
found = []
for context in current_context:
found.extend(context.find_all(tag, recursive=False))
current_context = found
continue
# Here we should just have a regular tag
if not self.tag_name_re.match(token):
return []
found = []
for context in current_context:
found.extend(context.findAll(token))
current_context = found
return current_context
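The class-selector branch above (tokens like `div.foo.bar`) boils down to a subset test on the tag's classes; a hedged Python 3 sketch of that rule, with a hypothetical standalone `classes_match` helper in place of the closure defined inside `select()`:

```python
def classes_match(tag_name, tag_classes, token):
    """True if a tag named tag_name carrying the set tag_classes
    matches a CSS class-selector token like "div.foo.bar"."""
    name, _, klass = token.partition('.')
    # an empty name part (".foo") matches any tag name
    if name and name != tag_name:
        return False
    # every class listed in the token must be present on the tag
    return set(klass.split('.')).issubset(tag_classes)
```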
# Old non-property versions of the generators, for backwards
# compatibility with BS3.
def nextGenerator(self):
@ -651,9 +652,6 @@ class NavigableString(unicode, PageElement):
return unicode.__new__(cls, value)
return unicode.__new__(cls, value, DEFAULT_OUTPUT_ENCODING)
def __copy__(self):
return self
def __getnewargs__(self):
return (unicode(self),)
@ -711,7 +709,7 @@ class Doctype(PreformattedString):
@classmethod
def for_name_and_ids(cls, name, pub_id, system_id):
value = name or ''
value = name
if pub_id is not None:
value += ' PUBLIC "%s"' % pub_id
if system_id is not None:
@ -805,24 +803,16 @@ class Tag(PageElement):
self.clear()
self.append(string.__class__(string))
def _all_strings(self, strip=False, types=(NavigableString, CData)):
"""Yield all strings of certain classes, possibly stripping them.
By default, yields only NavigableString and CData objects. So
no comments, processing instructions, etc.
"""
def _all_strings(self, strip=False):
"""Yield all child strings, possibly stripping them."""
for descendant in self.descendants:
if (
(types is None and not isinstance(descendant, NavigableString))
or
(types is not None and type(descendant) not in types)):
if not isinstance(descendant, NavigableString):
continue
if strip:
descendant = descendant.strip()
if len(descendant) == 0:
continue
yield descendant
strings = property(_all_strings)
@property
@ -830,13 +820,11 @@ class Tag(PageElement):
for string in self._all_strings(True):
yield string
def get_text(self, separator=u"", strip=False,
types=(NavigableString, CData)):
def get_text(self, separator=u"", strip=False):
"""
Get all child strings, concatenated using the given separator.
"""
return separator.join([s for s in self._all_strings(
strip, types=types)])
return separator.join([s for s in self._all_strings(strip)])
getText = get_text
text = property(get_text)
@ -847,7 +835,6 @@ class Tag(PageElement):
while i is not None:
next = i.next_element
i.__dict__.clear()
i.contents = []
i = next
def clear(self, decompose=False):
@ -979,13 +966,6 @@ class Tag(PageElement):
u = self.decode(indent_level, encoding, formatter)
return u.encode(encoding, errors)
def _should_pretty_print(self, indent_level):
"""Should this tag be pretty-printed?"""
return (
indent_level is not None and
(self.name not in HTMLAwareEntitySubstitution.preformatted_tags
or self._is_xml))
def decode(self, indent_level=None,
eventual_encoding=DEFAULT_OUTPUT_ENCODING,
formatter="minimal"):
@ -998,12 +978,6 @@ class Tag(PageElement):
document contains a <META> tag that mentions the document's
encoding.
"""
# First off, turn a string formatter into a function. This
# will stop the lookup from happening over and over again.
if not callable(formatter):
formatter = self._formatter_for_name(formatter)
attrs = []
if self.attrs:
for key, val in sorted(self.attrs.items()):
@ -1036,15 +1010,12 @@ class Tag(PageElement):
else:
closeTag = '</%s%s>' % (prefix, self.name)
pretty_print = self._should_pretty_print(indent_level)
space = ''
indent_space = ''
if indent_level is not None:
indent_space = (' ' * (indent_level - 1))
pretty_print = (indent_level is not None)
if pretty_print:
space = indent_space
space = (' ' * (indent_level - 1))
indent_contents = indent_level + 1
else:
space = ''
indent_contents = None
contents = self.decode_contents(
indent_contents, eventual_encoding, formatter)
@@ -1057,10 +1028,8 @@ class Tag(PageElement):
attribute_string = ''
if attrs:
attribute_string = ' ' + ' '.join(attrs)
if indent_level is not None:
# Even if this particular tag is not pretty-printed,
# we should indent up to the start of the tag.
s.append(indent_space)
if pretty_print:
s.append(space)
s.append('<%s%s%s%s>' % (
prefix, self.name, attribute_string, close))
if pretty_print:
@@ -1071,10 +1040,7 @@ class Tag(PageElement):
if pretty_print and closeTag:
s.append(space)
s.append(closeTag)
if indent_level is not None and closeTag and self.next_sibling:
# Even if this particular tag is not pretty-printed,
# we're now done with the tag, and we should add a
# newline if appropriate.
if pretty_print and closeTag and self.next_sibling:
s.append("\n")
s = ''.join(s)
return s
@@ -1097,11 +1063,6 @@ class Tag(PageElement):
document contains a <META> tag that mentions the document's
encoding.
"""
# First off, turn a string formatter into a function. This
# will stop the lookup from happening over and over again.
if not callable(formatter):
formatter = self._formatter_for_name(formatter)
pretty_print = (indent_level is not None)
s = []
for c in self:
@@ -1111,13 +1072,13 @@ class Tag(PageElement):
elif isinstance(c, Tag):
s.append(c.decode(indent_level, eventual_encoding,
formatter))
if text and indent_level and not self.name == 'pre':
if text and indent_level:
text = text.strip()
if text:
if pretty_print and not self.name == 'pre':
if pretty_print:
s.append(" " * (indent_level - 1))
s.append(text)
if pretty_print and not self.name == 'pre':
if pretty_print:
s.append("\n")
return ''.join(s)
@@ -1184,207 +1145,6 @@ class Tag(PageElement):
yield current
current = current.next_element
# CSS selector code
_selector_combinators = ['>', '+', '~']
_select_debug = False
def select(self, selector, _candidate_generator=None):
"""Perform a CSS selection operation on the current element."""
tokens = selector.split()
current_context = [self]
if tokens[-1] in self._selector_combinators:
raise ValueError(
'Final combinator "%s" is missing an argument.' % tokens[-1])
if self._select_debug:
print 'Running CSS selector "%s"' % selector
for index, token in enumerate(tokens):
if self._select_debug:
print ' Considering token "%s"' % token
recursive_candidate_generator = None
tag_name = None
if tokens[index-1] in self._selector_combinators:
# This token was consumed by the previous combinator. Skip it.
if self._select_debug:
print ' Token was consumed by the previous combinator.'
continue
# Each operation corresponds to a checker function, a rule
# for determining whether a candidate matches the
# selector. Candidates are generated by the active
# iterator.
checker = None
m = self.attribselect_re.match(token)
if m is not None:
# Attribute selector
tag_name, attribute, operator, value = m.groups()
checker = self._attribute_checker(operator, attribute, value)
elif '#' in token:
# ID selector
tag_name, tag_id = token.split('#', 1)
def id_matches(tag):
return tag.get('id', None) == tag_id
checker = id_matches
elif '.' in token:
# Class selector
tag_name, klass = token.split('.', 1)
classes = set(klass.split('.'))
def classes_match(candidate):
return classes.issubset(candidate.get('class', []))
checker = classes_match
elif ':' in token:
# Pseudo-class
tag_name, pseudo = token.split(':', 1)
if tag_name == '':
raise ValueError(
"A pseudo-class must be prefixed with a tag name.")
pseudo_attributes = re.match('([a-zA-Z\d-]+)\(([a-zA-Z\d]+)\)', pseudo)
found = []
if pseudo_attributes is not None:
pseudo_type, pseudo_value = pseudo_attributes.groups()
if pseudo_type == 'nth-of-type':
try:
pseudo_value = int(pseudo_value)
except:
raise NotImplementedError(
'Only numeric values are currently supported for the nth-of-type pseudo-class.')
if pseudo_value < 1:
raise ValueError(
'nth-of-type pseudo-class value must be at least 1.')
class Counter(object):
def __init__(self, destination):
self.count = 0
self.destination = destination
def nth_child_of_type(self, tag):
self.count += 1
if self.count == self.destination:
return True
if self.count > self.destination:
# Stop the generator that's sending us
# these things.
raise StopIteration()
return False
checker = Counter(pseudo_value).nth_child_of_type
else:
raise NotImplementedError(
'Only the following pseudo-classes are implemented: nth-of-type.')
elif token == '*':
# Star selector -- matches everything
pass
elif token == '>':
# Run the next token as a CSS selector against the
# direct children of each tag in the current context.
recursive_candidate_generator = lambda tag: tag.children
elif token == '~':
# Run the next token as a CSS selector against the
# siblings of each tag in the current context.
recursive_candidate_generator = lambda tag: tag.next_siblings
elif token == '+':
# For each tag in the current context, run the next
# token as a CSS selector against the tag's next
# sibling that's a tag.
def next_tag_sibling(tag):
yield tag.find_next_sibling(True)
recursive_candidate_generator = next_tag_sibling
elif self.tag_name_re.match(token):
# Just a tag name.
tag_name = token
else:
raise ValueError(
'Unsupported or invalid CSS selector: "%s"' % token)
if recursive_candidate_generator:
# This happens when the selector looks like "> foo".
#
# The generator calls select() recursively on every
# member of the current context, passing in a different
# candidate generator and a different selector.
#
# In the case of "> foo", the candidate generator is
# one that yields a tag's direct children (">"), and
# the selector is "foo".
next_token = tokens[index+1]
def recursive_select(tag):
if self._select_debug:
print ' Calling select("%s") recursively on %s %s' % (next_token, tag.name, tag.attrs)
print '-' * 40
for i in tag.select(next_token, recursive_candidate_generator):
if self._select_debug:
print '(Recursive select picked up candidate %s %s)' % (i.name, i.attrs)
yield i
if self._select_debug:
print '-' * 40
_use_candidate_generator = recursive_select
elif _candidate_generator is None:
# By default, a tag's candidates are all of its
# children. If tag_name is defined, only yield tags
# with that name.
if self._select_debug:
if tag_name:
check = "[any]"
else:
check = tag_name
print ' Default candidate generator, tag name="%s"' % check
if self._select_debug:
# This is redundant with later code, but it stops
# a bunch of bogus tags from cluttering up the
# debug log.
def default_candidate_generator(tag):
for child in tag.descendants:
if not isinstance(child, Tag):
continue
if tag_name and not child.name == tag_name:
continue
yield child
_use_candidate_generator = default_candidate_generator
else:
_use_candidate_generator = lambda tag: tag.descendants
else:
_use_candidate_generator = _candidate_generator
new_context = []
new_context_ids = set([])
for tag in current_context:
if self._select_debug:
print " Running candidate generator on %s %s" % (
tag.name, repr(tag.attrs))
for candidate in _use_candidate_generator(tag):
if not isinstance(candidate, Tag):
continue
if tag_name and candidate.name != tag_name:
continue
if checker is not None:
try:
result = checker(candidate)
except StopIteration:
# The checker has decided we should no longer
# run the generator.
break
if checker is None or result:
if self._select_debug:
print " SUCCESS %s %s" % (candidate.name, repr(candidate.attrs))
if id(candidate) not in new_context_ids:
# If a tag matches a selector more than once,
# don't include it in the context more than once.
new_context.append(candidate)
new_context_ids.add(id(candidate))
elif self._select_debug:
print " FAILURE %s %s" % (candidate.name, repr(candidate.attrs))
current_context = new_context
if self._select_debug:
print "Final verdict:"
for i in current_context:
print " %s %s" % (i.name, i.attrs)
return current_context
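The `select()` implementation removed in the hunk above builds a checker function per token; the class-selector branch tests that a candidate's class list contains every class named in the token. A minimal stdlib-only sketch of just that branch, with plain dicts standing in for bs4 `Tag` objects:

```python
# Sketch of the class-selector branch of select(): "tag.c1.c2" matches a
# candidate whose class list contains every named class. The dicts below
# are hypothetical stand-ins for bs4 Tag objects.
def make_class_checker(token):
    tag_name, _, klass = token.partition('.')
    classes = set(klass.split('.'))

    def checker(candidate):
        # An empty tag name (".post") matches any tag.
        name_ok = (not tag_name) or candidate.get('name') == tag_name
        return name_ok and classes.issubset(candidate.get('class', []))
    return checker

tags = [
    {'name': 'div', 'class': ['post', 'featured']},
    {'name': 'div', 'class': ['post']},
    {'name': 'span', 'class': ['post', 'featured']},
]
checker = make_class_checker('div.post.featured')
print([t['name'] for t in tags if checker(t)])  # ['div']
```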
# Old names for backwards compatibility
def childGenerator(self):
return self.children
@@ -1392,13 +1152,10 @@ class Tag(PageElement):
def recursiveChildGenerator(self):
return self.descendants
def has_key(self, key):
"""This was kind of misleading because has_key() (attributes)
was different from __in__ (contents). has_key() is gone in
Python 3, anyway."""
warnings.warn('has_key is deprecated. Use has_attr("%s") instead.' % (
key))
return self.has_attr(key)
# This was kind of misleading because has_key() (attributes) was
# different from __in__ (contents). has_key() is gone in Python 3,
# anyway.
has_key = has_attr
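One side of this hunk implements `has_key` as a warning-emitting shim rather than a bare alias. That is a standard deprecation pattern; a sketch using a hypothetical `Thing` class, not bs4 itself:

```python
import warnings

# Sketch of a deprecation shim: keep the old name working, but emit a
# warning pointing at the replacement. "Thing" is a hypothetical class.
class Thing:
    def __init__(self, attrs):
        self.attrs = attrs

    def has_attr(self, key):
        return key in self.attrs

    def has_key(self, key):
        warnings.warn(
            'has_key is deprecated. Use has_attr("%s") instead.' % key)
        return self.has_attr(key)

t = Thing({'id': 'main'})
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    print(t.has_key('id'))  # True, plus one recorded warning
    print(len(caught))      # 1
```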
# Next, a couple classes to represent queries and their results.
class SoupStrainer(object):
@@ -81,11 +81,6 @@ class HTMLTreeBuilderSmokeTest(object):
self.assertDoctypeHandled(
'html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"')
def test_empty_doctype(self):
soup = self.soup("<!DOCTYPE>")
doctype = soup.contents[0]
self.assertEqual("", doctype.strip())
def test_public_doctype_with_url(self):
doctype = 'html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"'
self.assertDoctypeHandled(doctype)
@@ -164,12 +159,6 @@ class HTMLTreeBuilderSmokeTest(object):
comment = soup.find(text="foobar")
self.assertEqual(comment.__class__, Comment)
# The comment is properly integrated into the tree.
foo = soup.find(text="foo")
self.assertEqual(comment, foo.next_element)
baz = soup.find(text="baz")
self.assertEqual(comment, baz.previous_element)
def test_preserved_whitespace_in_pre_and_textarea(self):
"""Whitespace must be preserved in <pre> and <textarea> tags."""
self.assertSoupEquals("<pre> </pre>")
@@ -228,14 +217,12 @@ class HTMLTreeBuilderSmokeTest(object):
expect = u'<p id="pi\N{LATIN SMALL LETTER N WITH TILDE}ata"></p>'
self.assertSoupEquals('<p id="pi&#241;ata"></p>', expect)
self.assertSoupEquals('<p id="pi&#xf1;ata"></p>', expect)
self.assertSoupEquals('<p id="pi&#Xf1;ata"></p>', expect)
self.assertSoupEquals('<p id="pi&ntilde;ata"></p>', expect)
def test_entities_in_text_converted_to_unicode(self):
expect = u'<p>pi\N{LATIN SMALL LETTER N WITH TILDE}ata</p>'
self.assertSoupEquals("<p>pi&#241;ata</p>", expect)
self.assertSoupEquals("<p>pi&#xf1;ata</p>", expect)
self.assertSoupEquals("<p>pi&#Xf1;ata</p>", expect)
self.assertSoupEquals("<p>pi&ntilde;ata</p>", expect)
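The entity conversions these tests exercise (decimal `&#241;`, lowercase hex `&#xf1;`, uppercase hex `&#Xf1;`, and the named `&ntilde;`) all decode to U+00F1. The stdlib's `html.unescape()` applies the same conversions, which makes for a quick cross-check:

```python
import html

# Decimal, lowercase-hex, uppercase-hex, and named references to U+00F1
# should all decode to the same string.
for entity in ("pi&#241;ata", "pi&#xf1;ata", "pi&#Xf1;ata", "pi&ntilde;ata"):
    print(html.unescape(entity))  # piñata, four times
```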
def test_quot_entity_converted_to_quotation_mark(self):
@@ -248,12 +235,6 @@ class HTMLTreeBuilderSmokeTest(object):
self.assertSoupEquals("&#x10000000000000;", expect)
self.assertSoupEquals("&#1000000000;", expect)
def test_multipart_strings(self):
"Mostly to prevent a recurrence of a bug in the html5lib treebuilder."
soup = self.soup("<html><h2>\nfoo</h2><p></p></html>")
self.assertEqual("p", soup.h2.string.next_element.name)
self.assertEqual("p", soup.p.name)
def test_basic_namespaces(self):
"""Parsers don't need to *understand* namespaces, but at the
very least they should not choke on namespaces or lose
@@ -472,18 +453,6 @@ class XMLTreeBuilderSmokeTest(object):
self.assertEqual(
soup.encode("utf-8"), markup)
def test_formatter_processes_script_tag_for_xml_documents(self):
doc = """
<script type="text/javascript">
</script>
"""
soup = BeautifulSoup(doc, "xml")
# lxml would have stripped this while parsing, but we can add
# it later.
soup.script.string = 'console.log("< < hey > > ");'
encoded = soup.encode()
self.assertTrue(b"&lt; &lt; hey &gt; &gt;" in encoded)
def test_popping_namespaced_tag(self):
markup = '<rss xmlns:dc="foo"><dc:creator>b</dc:creator><dc:date>2012-07-02T20:33:42Z</dc:date><dc:rights>c</dc:rights><image>d</image></rss>'
soup = self.soup(markup)
@@ -526,11 +495,6 @@ class XMLTreeBuilderSmokeTest(object):
soup = self.soup(markup)
self.assertEqual(unicode(soup.foo), markup)
def test_namespaced_attributes_xml_namespace(self):
markup = '<foo xml:lang="fr">bar</foo>'
soup = self.soup(markup)
self.assertEqual(unicode(soup.foo), markup)
class HTML5TreeBuilderSmokeTest(HTMLTreeBuilderSmokeTest):
"""Smoke test for a tree builder that supports HTML5."""
@@ -559,12 +523,6 @@ class HTML5TreeBuilderSmokeTest(HTMLTreeBuilderSmokeTest):
self.assertEqual(namespace, soup.math.namespace)
self.assertEqual(namespace, soup.msqrt.namespace)
def test_xml_declaration_becomes_comment(self):
markup = '<?xml version="1.0" encoding="utf-8"?><html></html>'
soup = self.soup(markup)
self.assertTrue(isinstance(soup.contents[0], Comment))
self.assertEqual(soup.contents[0], '?xml version="1.0" encoding="utf-8"?')
self.assertEqual("html", soup.contents[0].next_element.name)
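The removed test expects html5lib to turn the XML declaration into a `Comment` node. The stdlib's `html.parser` instead reports `<?...>` sequences as processing instructions, which shows the same raw token the comment is built from; a quick sketch:

```python
from html.parser import HTMLParser

# html.parser reports "<?...>" as a processing instruction; handle_pi
# receives everything between "<?" and the closing ">".
class PIRecorder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.pis = []

    def handle_pi(self, data):
        self.pis.append(data)

parser = PIRecorder()
parser.feed('<?xml version="1.0" encoding="utf-8"?><html></html>')
print(parser.pis)  # ['xml version="1.0" encoding="utf-8"?']
```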
def skipIf(condition, reason):
def nothing(test, *args, **kwargs):
@@ -56,17 +56,3 @@ class HTML5LibBuilderSmokeTest(SoupTest, HTML5TreeBuilderSmokeTest):
"<table><thead><tr><td>Foo</td></tr></thead>"
"<tbody><tr><td>Bar</td></tr></tbody>"
"<tfoot><tr><td>Baz</td></tr></tfoot></table>")
def test_xml_declaration_followed_by_doctype(self):
markup = '''<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<p>foo</p>
</body>
</html>'''
soup = self.soup(markup)
# Verify that we can reach the <p> tag; this means the tree is connected.
self.assertEqual(b"<p>foo</p>", soup.p.encode())

Some files were not shown because too many files have changed in this diff.