repo (string) | commit (string) | message (string) | diff (string)
---|---|---|---
pjleonhardt/mysql_mirror
|
03b4dafd179364a02a4cef282593e1a5dd61c6f3
|
Beginnings of a demo rails app
|
diff --git a/README.rdoc b/README.rdoc
index 0e3de38..5dab617 100644
--- a/README.rdoc
+++ b/README.rdoc
@@ -1,34 +1,59 @@
= Mysql Mirror
Use MysqlMirror to mirror data between databases. This can be useful when you want to update your
development or staging environments with real data to work with. Or, if you do some heavy lifting
calculations on another server, you might want to use a separate database on another host, etc.
=== General Approach
- Mirror Across Hosts: performs a mysql_dump to an sql file, then imports file to target host
- Mirror Same Host: uses CREATE TABLE ( SELECT ... ) style for mirroring. Much faster than mysql_dump
Note:
ALL information will be lost in the tables mirrored to the Target Database
== Dependencies
- Active Record
- FileUtils
== Usage
-Run `rake demo` to a bunch of different invocations of the tool
+Basic usage, copy production db to development
+ @m = MysqlMirror.new({
+ :source => :production,
+ :target => :development
+ })
+
+Choose what tables you want to bring over and how you want to scope them...
+ @m = MysqlMirror.new({
+ :source => :production,
+ :target => :development,
+ :tables => [:users, :widgets],
+ :where => {:users => "is_admin IS NOT NULL"},
+ })
+
+Database information not in your database.yml file? (Or Not Running Rails?) No Problem!
+ @m = MysqlMirror.new({
+ :source => { :database => "app_production", :user => ..., :password => ..., :hostname => ...},
+ :target => {:database => "app_development", :hostname => 'localhost'}
+ })
+
+Want to use everything in :production environment (user, pass, host) but need to change the database?
+ @m = MysqlMirror.new({
+ :source => :production,
+ :override => {:source => {:database => "heavy_calculations_database"}},
+ :target => :production
+ })
== Advanced Usage
Transfer Method
same_server => Doesn't use the mysqldump binary; instead does everything via raw SQL on the database.
different_servers => Uses mysqldump to build a file from source then loads this file to the target
Strategy
replace_existing => Drops target tables that exist in the source database
bomb_and_rebuild => Drops all tables on target schema then copies tables from source
atomic_rename => Copies each table to the target as a temp table (*_<timestamp>) and then,
once all are copied, atomically swaps each newly built and populated table
with the table it replaces (renamed to *_old); see the sketch below
Specify an alternative mysql configuration
\ No newline at end of file
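The atomic_rename strategy described above boils down to a fixed sequence of SQL statements per table. A minimal Ruby sketch of that sequence, assuming a single table `users` and a fixed timestamp suffix (both hypothetical stand-ins; the gem derives them from its table list and start time):

    # Sketch only: builds the atomic-rename statement sequence for one table.
    table  = "users"                         # hypothetical table name
    suffix = "1270500000"                    # e.g. Time.now.to_i
    tmp    = "#{table}_#{suffix}"            # freshly built copy
    old    = "#{table}_old"                  # displaced original

    statements = [
      "CREATE TABLE #{tmp} LIKE #{table}",                    # clone the schema
      "INSERT INTO #{tmp} SELECT * FROM #{table}",            # populate the copy
      "RENAME TABLE #{table} TO #{old}, #{tmp} TO #{table}",  # atomic swap
      "DROP TABLE IF EXISTS #{old}"                           # discard old data
    ]
    statements.each { |sql| puts sql }       # a real run would execute each one

The RENAME TABLE step is what makes the swap atomic: readers see either the old table or the fully populated new one, never a half-copied state.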
diff --git a/Rakefile b/Rakefile
index 34a2fe9..f33b18f 100644
--- a/Rakefile
+++ b/Rakefile
@@ -1,67 +1,20 @@
require 'rubygems'
require 'rake'
begin
require 'jeweler'
Jeweler::Tasks.new do |gemspec|
gemspec.name = "mysql_mirror"
gemspec.summary = "Helps mirror MySql Databases"
gemspec.description = "Will mirror tables / databases between mysql databases and across hosts"
gemspec.email = "peterleonhardt@gmail.com"
gemspec.homepage = "http://github.com/pjleonhardt/mysql_mirror"
gemspec.authors = ["Peter Leonhardt", "Joe Goggins"]
gemspec.files = FileList["[A-Z]*", "lib/mysql_mirror.rb"]
end
Jeweler::GemcutterTasks.new
rescue LoadError
puts "Jeweler not available. Please install the jeweler gem."
end
-Dir["#{File.dirname(__FILE__)}/tasks/*.rake"].sort.each { |ext| load ext }
-
-
-desc 'Run the demo/test cases'
-task :demo do
- require 'lib/mysql_mirror'
- puts 'MySqlMirror Demo'
- # Basic usage, copy production db to development
- # @m = MysqlMirror.new({
- # :source => :production,
- # :target => :development
- # })
- #
- # Choose what tables you want to bring over and how you want to scope them...
- # @m = MysqlMirror.new({
- # :source => :production,
- # :target => :development,
- # :tables => [:users, :widgets],
- # :where => {:users => "is_admin NOT NULL"},
- # })
- #
- # Database information not in your database.yml file? (Or Not Running Rails?) No Problem!
- # @m = MysqlMirror.new({
- # :source => { :database => "app_production", :user => ..., :password => ..., :hostname => ...},
- # :target => {:database => "app_development", :hostname => 'localhost'}
- # })
- #
- # Want to use everything in :production environment (user, pass, host) but need to change the database?
- # @m = MysqlMirror.new({
- # :source => :production,
- # :override => {:source => {:database => "heavy_calculations_database"}},
- # :target => :production
- # })
- require 'ostruct'
- @demos = []
- @demos << OpenStruct.new(:name => "Basic usage, copy production db to development",
- :code => Proc.new {
- @m = MysqlMirror.new({
- :source => :production,
- :target => :development
- })
- })
- @demos.each do |demo|
- demo.code.call(binding)
- puts @m.inspect
- end
-
-end
\ No newline at end of file
+Dir["#{File.dirname(__FILE__)}/tasks/*.rake"].sort.each { |ext| load ext }
\ No newline at end of file
diff --git a/demo/README b/demo/README
index 37ec8ea..225d442 100644
--- a/demo/README
+++ b/demo/README
@@ -1,243 +1 @@
-== Welcome to Rails
-
-Rails is a web-application framework that includes everything needed to create
-database-backed web applications according to the Model-View-Control pattern.
-
-This pattern splits the view (also called the presentation) into "dumb" templates
-that are primarily responsible for inserting pre-built data in between HTML tags.
-The model contains the "smart" domain objects (such as Account, Product, Person,
-Post) that holds all the business logic and knows how to persist themselves to
-a database. The controller handles the incoming requests (such as Save New Account,
-Update Product, Show Post) by manipulating the model and directing data to the view.
-
-In Rails, the model is handled by what's called an object-relational mapping
-layer entitled Active Record. This layer allows you to present the data from
-database rows as objects and embellish these data objects with business logic
-methods. You can read more about Active Record in
-link:files/vendor/rails/activerecord/README.html.
-
-The controller and view are handled by the Action Pack, which handles both
-layers by its two parts: Action View and Action Controller. These two layers
-are bundled in a single package due to their heavy interdependence. This is
-unlike the relationship between the Active Record and Action Pack that is much
-more separate. Each of these packages can be used independently outside of
-Rails. You can read more about Action Pack in
-link:files/vendor/rails/actionpack/README.html.
-
-
-== Getting Started
-
-1. At the command prompt, start a new Rails application using the <tt>rails</tt> command
- and your application name. Ex: rails myapp
-2. Change directory into myapp and start the web server: <tt>script/server</tt> (run with --help for options)
-3. Go to http://localhost:3000/ and get "Welcome aboard: You're riding the Rails!"
-4. Follow the guidelines to start developing your application
-
-
-== Web Servers
-
-By default, Rails will try to use Mongrel if it's are installed when started with script/server, otherwise Rails will use WEBrick, the webserver that ships with Ruby. But you can also use Rails
-with a variety of other web servers.
-
-Mongrel is a Ruby-based webserver with a C component (which requires compilation) that is
-suitable for development and deployment of Rails applications. If you have Ruby Gems installed,
-getting up and running with mongrel is as easy as: <tt>gem install mongrel</tt>.
-More info at: http://mongrel.rubyforge.org
-
-Say other Ruby web servers like Thin and Ebb or regular web servers like Apache or LiteSpeed or
-Lighttpd or IIS. The Ruby web servers are run through Rack and the latter can either be setup to use
-FCGI or proxy to a pack of Mongrels/Thin/Ebb servers.
-
-== Apache .htaccess example for FCGI/CGI
-
-# General Apache options
-AddHandler fastcgi-script .fcgi
-AddHandler cgi-script .cgi
-Options +FollowSymLinks +ExecCGI
-
-# If you don't want Rails to look in certain directories,
-# use the following rewrite rules so that Apache won't rewrite certain requests
-#
-# Example:
-# RewriteCond %{REQUEST_URI} ^/notrails.*
-# RewriteRule .* - [L]
-
-# Redirect all requests not available on the filesystem to Rails
-# By default the cgi dispatcher is used which is very slow
-#
-# For better performance replace the dispatcher with the fastcgi one
-#
-# Example:
-# RewriteRule ^(.*)$ dispatch.fcgi [QSA,L]
-RewriteEngine On
-
-# If your Rails application is accessed via an Alias directive,
-# then you MUST also set the RewriteBase in this htaccess file.
-#
-# Example:
-# Alias /myrailsapp /path/to/myrailsapp/public
-# RewriteBase /myrailsapp
-
-RewriteRule ^$ index.html [QSA]
-RewriteRule ^([^.]+)$ $1.html [QSA]
-RewriteCond %{REQUEST_FILENAME} !-f
-RewriteRule ^(.*)$ dispatch.cgi [QSA,L]
-
-# In case Rails experiences terminal errors
-# Instead of displaying this message you can supply a file here which will be rendered instead
-#
-# Example:
-# ErrorDocument 500 /500.html
-
-ErrorDocument 500 "<h2>Application error</h2>Rails application failed to start properly"
-
-
-== Debugging Rails
-
-Sometimes your application goes wrong. Fortunately there are a lot of tools that
-will help you debug it and get it back on the rails.
-
-First area to check is the application log files. Have "tail -f" commands running
-on the server.log and development.log. Rails will automatically display debugging
-and runtime information to these files. Debugging info will also be shown in the
-browser on requests from 127.0.0.1.
-
-You can also log your own messages directly into the log file from your code using
-the Ruby logger class from inside your controllers. Example:
-
- class WeblogController < ActionController::Base
- def destroy
- @weblog = Weblog.find(params[:id])
- @weblog.destroy
- logger.info("#{Time.now} Destroyed Weblog ID ##{@weblog.id}!")
- end
- end
-
-The result will be a message in your log file along the lines of:
-
- Mon Oct 08 14:22:29 +1000 2007 Destroyed Weblog ID #1
-
-More information on how to use the logger is at http://www.ruby-doc.org/core/
-
-Also, Ruby documentation can be found at http://www.ruby-lang.org/ including:
-
-* The Learning Ruby (Pickaxe) Book: http://www.ruby-doc.org/docs/ProgrammingRuby/
-* Learn to Program: http://pine.fm/LearnToProgram/ (a beginners guide)
-
-These two online (and free) books will bring you up to speed on the Ruby language
-and also on programming in general.
-
-
-== Debugger
-
-Debugger support is available through the debugger command when you start your Mongrel or
-Webrick server with --debugger. This means that you can break out of execution at any point
-in the code, investigate and change the model, AND then resume execution!
-You need to install ruby-debug to run the server in debugging mode. With gems, use 'gem install ruby-debug'
-Example:
-
- class WeblogController < ActionController::Base
- def index
- @posts = Post.find(:all)
- debugger
- end
- end
-
-So the controller will accept the action, run the first line, then present you
-with a IRB prompt in the server window. Here you can do things like:
-
- >> @posts.inspect
- => "[#<Post:0x14a6be8 @attributes={\"title\"=>nil, \"body\"=>nil, \"id\"=>\"1\"}>,
- #<Post:0x14a6620 @attributes={\"title\"=>\"Rails you know!\", \"body\"=>\"Only ten..\", \"id\"=>\"2\"}>]"
- >> @posts.first.title = "hello from a debugger"
- => "hello from a debugger"
-
-...and even better is that you can examine how your runtime objects actually work:
-
- >> f = @posts.first
- => #<Post:0x13630c4 @attributes={"title"=>nil, "body"=>nil, "id"=>"1"}>
- >> f.
- Display all 152 possibilities? (y or n)
-
-Finally, when you're ready to resume execution, you enter "cont"
-
-
-== Console
-
-You can interact with the domain model by starting the console through <tt>script/console</tt>.
-Here you'll have all parts of the application configured, just like it is when the
-application is running. You can inspect domain models, change values, and save to the
-database. Starting the script without arguments will launch it in the development environment.
-Passing an argument will specify a different environment, like <tt>script/console production</tt>.
-
-To reload your controllers and models after launching the console run <tt>reload!</tt>
-
-== dbconsole
-
-You can go to the command line of your database directly through <tt>script/dbconsole</tt>.
-You would be connected to the database with the credentials defined in database.yml.
-Starting the script without arguments will connect you to the development database. Passing an
-argument will connect you to a different database, like <tt>script/dbconsole production</tt>.
-Currently works for mysql, postgresql and sqlite.
-
-== Description of Contents
-
-app
- Holds all the code that's specific to this particular application.
-
-app/controllers
- Holds controllers that should be named like weblogs_controller.rb for
- automated URL mapping. All controllers should descend from ApplicationController
- which itself descends from ActionController::Base.
-
-app/models
- Holds models that should be named like post.rb.
- Most models will descend from ActiveRecord::Base.
-
-app/views
- Holds the template files for the view that should be named like
- weblogs/index.html.erb for the WeblogsController#index action. All views use eRuby
- syntax.
-
-app/views/layouts
- Holds the template files for layouts to be used with views. This models the common
- header/footer method of wrapping views. In your views, define a layout using the
- <tt>layout :default</tt> and create a file named default.html.erb. Inside default.html.erb,
- call <% yield %> to render the view using this layout.
-
-app/helpers
- Holds view helpers that should be named like weblogs_helper.rb. These are generated
- for you automatically when using script/generate for controllers. Helpers can be used to
- wrap functionality for your views into methods.
-
-config
- Configuration files for the Rails environment, the routing map, the database, and other dependencies.
-
-db
- Contains the database schema in schema.rb. db/migrate contains all
- the sequence of Migrations for your schema.
-
-doc
- This directory is where your application documentation will be stored when generated
- using <tt>rake doc:app</tt>
-
-lib
- Application specific libraries. Basically, any kind of custom code that doesn't
- belong under controllers, models, or helpers. This directory is in the load path.
-
-public
- The directory available for the web server. Contains subdirectories for images, stylesheets,
- and javascripts. Also contains the dispatchers and the default HTML files. This should be
- set as the DOCUMENT_ROOT of your web server.
-
-script
- Helper scripts for automation and generation.
-
-test
- Unit and functional tests along with fixtures. When using the script/generate scripts, template
- test files will be generated for you and placed in this directory.
-
-vendor
- External libraries that the application depends on. Also includes the plugins subdirectory.
- If the app has frozen rails, those gems also go here, under vendor/rails/.
- This directory is in the load path.
+This is the demo playground for the mysql_mirror gem on GitHub
\ No newline at end of file
diff --git a/demo/config/database.yml b/demo/config/database.yml
index 025d62a..b9d9e57 100644
--- a/demo/config/database.yml
+++ b/demo/config/database.yml
@@ -1,22 +1,15 @@
# SQLite version 3.x
# gem install sqlite3-ruby (not necessary on OS X Leopard)
development:
- adapter: sqlite3
- database: db/development.sqlite3
- pool: 5
- timeout: 5000
-
-# Warning: The database defined as "test" will be erased and
-# re-generated from your development database when you run "rake".
-# Do not set this db to the same as development or production.
-test:
- adapter: sqlite3
- database: db/test.sqlite3
- pool: 5
- timeout: 5000
+ adapter: mysql
+ database: mm_demo_local_source
+ username: root
+ password:
+ socket: /tmp/mysql.sock
production:
- adapter: sqlite3
- database: db/production.sqlite3
- pool: 5
- timeout: 5000
+ adapter: mysql
+ database: mm_demo_local_source
+ username: root
+ password:
+ socket: /tmp/mysql.sock
\ No newline at end of file
diff --git a/demo/db/migrate/20100406015530_create_t1s.rb b/demo/db/migrate/20100406015530_create_t1s.rb
new file mode 100644
index 0000000..c33ddc9
--- /dev/null
+++ b/demo/db/migrate/20100406015530_create_t1s.rb
@@ -0,0 +1,13 @@
+class CreateT1s < ActiveRecord::Migration
+ def self.up
+ create_table :t1s do |t|
+ t.string :name
+
+ t.timestamps
+ end
+ end
+
+ def self.down
+ drop_table :t1s
+ end
+end
diff --git a/demo/doc/README_FOR_APP b/demo/doc/README_FOR_APP
index fe41f5c..10dbe63 100644
--- a/demo/doc/README_FOR_APP
+++ b/demo/doc/README_FOR_APP
@@ -1,2 +1,2 @@
-Use this README file to introduce your application and point to useful places in the API for learning more.
-Run "rake doc:app" to generate API documentation for your models, controllers, helpers, and libraries.
+Run rake demo:setup
+Run rake demo:run
\ No newline at end of file
diff --git a/demo/lib/tasks/demo.rake b/demo/lib/tasks/demo.rake
index ebe0d33..f39cc59 100644
--- a/demo/lib/tasks/demo.rake
+++ b/demo/lib/tasks/demo.rake
@@ -1,35 +1,75 @@
namespace :demo do
desc "requirements for the basic demo"
task :instructions do
s =<<-EOS
---
Setup privileged MySql user accounts and add this to database.yml
- demo_local:
+ development:
adapter: mysql
database: mm_demo_local_source
username: root
password:
socket: /tmp/mysql.sock
- demo_remote:
+ production:
adapter: mysql
- database: mm_demo_remote_source
+ database: mm_demo_remote_target
username: <SOME_PRIVILEGED_ACCOUNT>
password:
host: <SOME_HOST>
---
These users will need root privileges
EOS
end
desc "Setup the demo"
- task :setup => ["demo:instructions", "environment"] do
-
+ task :setup => ["demo:instructions", "environment", "db:create","db:migrate","db:fixtures:load"] do
+ puts "mkay, you should be able do rake demo:run now"
end
desc "Run the demo"
- task :run => :environment do
-
+ task :run => ['environment'] do
+ require '../lib/mysql_mirror'
+ puts 'MySqlMirror Demo'
+ # Basic usage, copy production db to development
+ # @m = MysqlMirror.new({
+ # :source => :production,
+ # :target => :development
+ # })
+ #
+ # Choose what tables you want to bring over and how you want to scope them...
+ # @m = MysqlMirror.new({
+ # :source => :production,
+ # :target => :development,
+ # :tables => [:users, :widgets],
+ # :where => {:users => "is_admin IS NOT NULL"},
+ # })
+ #
+ # Database information not in your database.yml file? (Or Not Running Rails?) No Problem!
+ # @m = MysqlMirror.new({
+ # :source => { :database => "app_production", :user => ..., :password => ..., :hostname => ...},
+ # :target => {:database => "app_development", :hostname => 'localhost'}
+ # })
+ #
+ # Want to use everything in :production environment (user, pass, host) but need to change the database?
+ # @m = MysqlMirror.new({
+ # :source => :production,
+ # :override => {:source => {:database => "heavy_calculations_database"}},
+ # :target => :production
+ # })
+ require 'ostruct'
+ @demos = []
+ @demos << OpenStruct.new(:name => "Basic usage, copy production db to development",
+ :code => Proc.new {
+ @m = MysqlMirror.new({
+ :source => :production,
+ :target => :development
+ })
+ })
+ @demos.each do |demo|
+ demo.code.call(binding)
+ puts @m.inspect
+ end
end
end
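Each entry in `@demos` above pairs a human-readable name with a Proc that builds a mirror, so extending the demo just means pushing more entries. A sketch of one more entry in the same shape, reusing the scoped-tables options from the README (table names and the WHERE clause are illustrative, and the gem's lib must already be loaded as in the task above):

    # Sketch: a second demo in the same OpenStruct name/Proc shape.
    require 'ostruct'
    @demos ||= []
    @demos << OpenStruct.new(
      :name => "Scoped copy of selected tables",
      :code => Proc.new {
        @m = MysqlMirror.new({
          :source => :production,
          :target => :development,
          :tables => [:users, :widgets],
          :where  => {:users => "is_admin IS NOT NULL"}
        })
      })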
diff --git a/demo/log/development.log b/demo/log/development.log
index e69de29..5bb9719 100644
--- a/demo/log/development.log
+++ b/demo/log/development.log
@@ -0,0 +1,17 @@
+SQL (0.1ms)  SET SQL_AUTO_IS_NULL=0
+SQL (0.1ms)  SET SQL_AUTO_IS_NULL=0
+SQL (0.3ms)  SHOW TABLES
+SQL (205.3ms)  CREATE TABLE `schema_migrations` (`version` varchar(255) NOT NULL) ENGINE=InnoDB
+SQL (141.6ms)  CREATE UNIQUE INDEX `unique_schema_migrations` ON `schema_migrations` (`version`)
+SQL (0.3ms)  SHOW TABLES
+SQL (0.2ms)  SELECT version FROM schema_migrations
+Migrating to CreateT1s (20100406015530)
+SQL (117.3ms)  CREATE TABLE `t1s` (`id` int(11) DEFAULT NULL auto_increment PRIMARY KEY, `name` varchar(255), `created_at` datetime, `updated_at` datetime) ENGINE=InnoDB
+SQL (0.5ms)  INSERT INTO schema_migrations (version) VALUES ('20100406015530')
+SQL (0.4ms)  SHOW TABLES
+SQL (0.2ms)  SELECT version FROM schema_migrations
+SQL (0.2ms)  SHOW TABLES
+SQL (1.0ms)  SHOW FIELDS FROM `t1s`
+SQL (1.2ms)  describe `t1s`
+SQL (0.7ms)  SHOW KEYS FROM `t1s`
+SQL (0.1ms)  SET SQL_AUTO_IS_NULL=0
diff --git a/lib/mysql_mirror.rb b/lib/mysql_mirror.rb
index 9d96ff3..48091c6 100644
--- a/lib/mysql_mirror.rb
+++ b/lib/mysql_mirror.rb
@@ -1,209 +1,212 @@
require 'active_record'
require 'fileutils'
class MysqlMirror
class MysqlMirrorException < Exception; end
class InvalidStrategy < Exception; end
class InvalidConfiguration < Exception; end
class Source < ActiveRecord::Base
end
class Target < ActiveRecord::Base
end
- attr_accessor :tables, :where
+ attr_accessor :tables, :where, :strategy
def initialize(options = {})
unless ([:source, :target] - options.keys).blank?
# Need to specify a Source and Target database
raise MysqlMirrorException.new("You must specify both Source and Target connections")
end
self.tables = options.delete(:tables)
self.where = options.delete(:where)
overrides = options.delete(:override) || {}
source_override = overrides.delete(:source) || {}
target_override = overrides.delete(:target) || {}
@source_config = get_configuration(options.delete(:source))
@target_config = get_configuration(options.delete(:target))
@source_config.merge!(source_override)
@target_config.merge!(target_override)
# @commands is an array of methods to call
if mirroring_same_host?
@commands = commands_for_local_mirror
else
@commands = commands_for_remote_mirror
end
+ debugger
end
def commands_for_local_mirror
case self.strategy
when :atomic_rename
when :bomb_and_rebuild
when :replace_existing
[:local_copy]
else
raise InvalidStrategy.new("Invalid mirror strategy")
end
end
def commands_for_remote_mirror
[
:remote_mysqldump,
:remote_tmp_file_table_rename,
:remote_insert_command,
:remote_rename_tmp_tables,
:remote_remove_tmp_file
]
end
def mirroring_same_host?
@source_config[:host] == @target_config[:host]
end
def execute!
@start_time = Time.now
@source = connect_to(:source)
@target = connect_to(:target)
@commands.each do |c|
self.send(c)
end
end
def to_s
"Mirroring #{self.tables.join(', ')} from #{@source_config[:host]}.#{@source_config[:database]} to #{@target_config[:host]}.#{@target_config[:database]}"
end
private
# e.g, connect_to(:source)
# => MysqlMirror::Source.establish_connection(@source_config).connection
#
def connect_to(which)
"MysqlMirror::#{which.to_s.classify}".constantize.establish_connection(self.instance_variable_get("@#{which}_config")).connection
end
def local_copy
get_tables.each do |table|
target_db = @target_config[:database]
source_db = @source_config[:database]
target_table = "#{target_db}.#{table}"
target_tmp_table = "#{target_db}.#{table}_MirrorTmp"
target_old_table = "#{target_db}.#{table}_OldMarkedToDelete"
source_table = "#{source_db}.#{table}"
prime_statement_1 = "DROP TABLE IF EXISTS #{target_tmp_table}"
prime_statement_2 = "CREATE TABLE IF NOT EXISTS #{target_table} LIKE #{source_table}"
create_statement = "CREATE TABLE #{target_tmp_table} LIKE #{source_table}"
select_clause = "SELECT * FROM #{source_table}"
select_clause << " WHERE #{self.where[table]}" unless (self.where.blank? or self.where[table].blank?)
insert_statement = "INSERT INTO #{target_tmp_table} #{select_clause}"
rename_statement = "RENAME TABLE #{target_table} TO #{target_old_table}, #{target_tmp_table} TO #{target_table}"
cleanup_statement = "DROP TABLE IF EXISTS #{target_old_table}"
statements_to_run = [prime_statement_1, prime_statement_2, create_statement, insert_statement, rename_statement, cleanup_statement]
statements_to_run.each do |statement|
@target.execute(statement)
end
end
end
def mysqldump_command_prefix
"mysqldump --compact=TRUE --max_allowed_packet=100663296 --extended-insert=TRUE --lock-tables=FALSE --add-locks=FALSE --add-drop-table=FALSE"
end
def remote_mysqldump
@tmp_file_name = "mysql_mirror_#{@start_time.to_i}.sql"
tables = get_tables.map(&:to_s).join(" ")
if self.where.blank?
where = ""
else
where_statement = self.where.values.first
where = "--where=\"#{where_statement}\""
end
config = "-u#{@source_config[:username]} -p'#{@source_config[:password]}' -h #{@source_config[:host]} #{@source_config[:database]}"
the_cmd = "#{mysqldump_command_prefix} #{where} #{config} #{tables} > #{@tmp_file_name}"
puts the_cmd
`#{the_cmd}`
end
def remote_tmp_file_table_rename
create_or_insert_regex = Regexp.new('(^CREATE TABLE|^INSERT INTO)( `)(.+?)(`)(.+)')
new_file_name = @tmp_file_name + ".replaced.sql"
new_file = File.new(new_file_name, "w")
IO.foreach(@tmp_file_name) do |line|
if match_data = line.match(create_or_insert_regex)
table_name = match_data[3]
new_table_name = "#{table_name}_#{@start_time.to_i}"
new_file.puts match_data[1] + match_data[2] + new_table_name + match_data[4]+ match_data[5]
else
new_file.puts line
end
end
new_file.close
# replace dump'd sql file with this gsub'd one
FileUtils.move(new_file_name, @tmp_file_name)
end
def remote_insert_command
config = "-u#{@target_config[:username]} -p'#{@target_config[:password]}' -h #{@target_config[:host]} #{@target_config[:database]}"
the_cmd = "mysql #{config} < #{@tmp_file_name}"
`#{the_cmd}`
end
def remote_rename_tmp_tables
get_tables.each do |table|
tmp_table_name = "#{table}_#{@start_time.to_i}"
old_table_name = "#{table}_OldMarkedToDelete"
@target.transaction do
@target.execute("DROP TABLE IF EXISTS #{old_table_name}")
@target.execute("RENAME TABLE #{table} TO #{old_table_name}, #{tmp_table_name} TO #{table}")
@target.execute("DROP TABLE IF EXISTS #{old_table_name}")
end
end
end
def remote_remove_tmp_file
FileUtils.rm(@tmp_file_name)
end
def get_tables
the_tables = self.tables.blank? ? @source.select_values("SHOW TABLES").map!(&:to_sym) : self.tables
end
def get_configuration(env_or_hash)
config = env_or_hash
if(env_or_hash.is_a? Symbol)
config = ActiveRecord::Base.configurations[env_or_hash.to_s]
end
- raise InvalidConfiguration.new("Specified configuration, #{env_or_hash.inspect}, does not exist.")
+ if config.blank?
+ raise InvalidConfiguration.new("Specified configuration, #{env_or_hash.inspect}, does not exist.")
+ end
config.symbolize_keys
end
end
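End to end, the class is driven by `execute!`: it connects to both configurations and then runs whichever command list `initialize` selected (`local_copy` for same-host mirrors, the mysqldump pipeline otherwise). A minimal usage sketch, assuming the gem is on the load path and a Rails-style database.yml defines the named environments:

    # Sketch: same-host mirror of two tables, scoped on the source side.
    require 'mysql_mirror'

    m = MysqlMirror.new({
      :source => :production,
      :target => :development,
      :tables => [:users, :widgets],
      :where  => {:users => "is_admin IS NOT NULL"}
    })
    puts m.to_s   # "Mirroring users, widgets from <host>.<db> to <host>.<db>"
    m.execute!    # destructive: target tables are replaced with source data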
|
pjleonhardt/mysql_mirror
|
b993beefc5addd19e010c4a72da586da9080d916
|
perms check
|
diff --git a/README.rdoc b/README.rdoc
index 79245a5..dbb40b9 100644
--- a/README.rdoc
+++ b/README.rdoc
@@ -1,43 +1,44 @@
= Mysql Mirror
Use MysqlMirror to mirror data between databases. This can be useful when you want to update your
development or staging environments with real data to work with. Or, if you do some heavy lifting
calculations on another server, you might want to use a separate database on another host, etc.
+
=== General Approach
- Mirror Across Hosts: performs a mysql_dump to an sql file, then imports file to target host
- Mirror Same Host: uses CREATE TABLE ( SELECT ... ) style for mirroring. Much faster than mysql_dump
Note:
ALL information will be lost in the tables mirrored to the Target Database
== Dependencies
- Active Record
- FileUtils
== Usage
Basic usage, copy production db to development
@m = MysqlMirror.new({
:source => :production,
:target => :development
})
Choose what tables you want to bring over and how you want to scope them...
@m = MysqlMirror.new({
:source => :production,
:target => :development,
:tables => [:users, :widgets],
:where => {:users => "is_admin IS NOT NULL"},
})
Database information not in your database.yml file? (Or Not Running Rails?) No Problem!
@m = MysqlMirror.new({
:source => { :database => "app_production", :user => ..., :password => ..., :hostname => ...},
:target => {:database => "app_development", :hostname => 'localhost'}
})
Want to use everything in :production environment (user, pass, host) but need to change the database?
@m = MysqlMirror.new({
:source => :production,
:override => {:source => {:database => "heavy_calculations_database"}},
:target => :production
})
|
pjleonhardt/mysql_mirror
|
4b2be2c531f61681e48ef22dde80c59e557fa441
|
Fix for cross-host mirroring without where clause
|
diff --git a/mysql_mirror.gemspec b/mysql_mirror.gemspec
index 2766167..3ff914f 100644
--- a/mysql_mirror.gemspec
+++ b/mysql_mirror.gemspec
@@ -1,40 +1,40 @@
# Generated by jeweler
# DO NOT EDIT THIS FILE DIRECTLY
# Instead, edit Jeweler::Tasks in Rakefile, and run the gemspec command
# -*- encoding: utf-8 -*-
Gem::Specification.new do |s|
s.name = %q{mysql_mirror}
- s.version = "0.1.2"
+ s.version = "0.1.3"
s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
s.authors = ["Peter Leonhardt", "Joe Goggins"]
- s.date = %q{2010-03-29}
+ s.date = %q{2010-03-31}
s.description = %q{Will mirror tables / databases between mysql databases and across hosts}
s.email = %q{peterleonhardt@gmail.com}
s.extra_rdoc_files = [
"README.rdoc"
]
s.files = [
"README.rdoc",
"Rakefile",
"VERSION",
"lib/mysql_mirror.rb"
]
s.homepage = %q{http://github.com/pjleonhardt/mysql_mirror}
s.rdoc_options = ["--charset=UTF-8"]
s.require_paths = ["lib"]
s.rubygems_version = %q{1.3.6}
s.summary = %q{Helps mirror MySql Databases}
if s.respond_to? :specification_version then
current_version = Gem::Specification::CURRENT_SPECIFICATION_VERSION
s.specification_version = 3
if Gem::Version.new(Gem::RubyGemsVersion) >= Gem::Version.new('1.2.0') then
else
end
else
end
end
|
pjleonhardt/mysql_mirror
|
dcaa93a4143999cd6b49f1a174b9aac43937372b
|
Version bump to 0.1.3
|
diff --git a/VERSION b/VERSION
index d917d3e..b1e80bb 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-0.1.2
+0.1.3
diff --git a/lib/mysql_mirror.rb b/lib/mysql_mirror.rb
index 5e6df26..8a7ea40 100644
--- a/lib/mysql_mirror.rb
+++ b/lib/mysql_mirror.rb
@@ -1,194 +1,200 @@
require 'active_record'
require 'fileutils'
class MysqlMirror
class MysqlMirrorException < Exception; end
class Source < ActiveRecord::Base
end
class Target < ActiveRecord::Base
end
attr_accessor :tables, :where
def initialize(options = {})
unless ([:source, :target] - options.keys).blank?
# Need to specify a Source and Target database
raise MysqlMirrorException.new("You must specify both Source and Target connections")
end
self.tables = options.delete(:tables)
self.where = options.delete(:where)
overrides = options.delete(:override) || {}
source_override = overrides.delete(:source) || {}
target_override = overrides.delete(:target) || {}
@source_config = get_configuration(options.delete(:source))
@target_config = get_configuration(options.delete(:target))
@source_config.merge!(source_override)
@target_config.merge!(target_override)
# @commands is an array of methods to call
if mirroring_same_host?
@commands = commands_for_local_mirror
else
@commands = commands_for_remote_mirror
end
end
def commands_for_local_mirror
[:local_copy]
end
def commands_for_remote_mirror
[
:remote_mysqldump,
:remote_tmp_file_table_rename,
:remote_insert_command,
:remote_rename_tmp_tables,
:remote_remove_tmp_file
]
end
def mirroring_same_host?
@source_config[:host] == @target_config[:host]
end
def execute!
@start_time = Time.now
@source = connect_to(:source)
@target = connect_to(:target)
@commands.each do |c|
self.send(c)
end
end
def to_s
"Mirroring #{self.tables.join(', ')} from #{@source_config[:host]}.#{@source_config[:database]} to #{@target_config[:host]}.#{@target_config[:database]}"
end
private
# e.g, connect_to(:source)
# => MysqlMirror::Source.establish_connection(@source_config).connection
#
def connect_to(which)
"MysqlMirror::#{which.to_s.classify}".constantize.establish_connection(self.instance_variable_get("@#{which}_config")).connection
end
def local_copy
get_tables.each do |table|
target_db = @target_config[:database]
source_db = @source_config[:database]
target_table = "#{target_db}.#{table}"
target_tmp_table = "#{target_db}.#{table}_MirrorTmp"
target_old_table = "#{target_db}.#{table}_OldMarkedToDelete"
source_table = "#{source_db}.#{table}"
prime_statement_1 = "DROP TABLE IF EXISTS #{target_tmp_table}"
prime_statement_2 = "CREATE TABLE IF NOT EXISTS #{target_table} LIKE #{source_table}"
create_statement = "CREATE TABLE #{target_tmp_table} LIKE #{source_table}"
select_clause = "SELECT * FROM #{source_table}"
select_clause << " WHERE #{self.where[table]}" unless (self.where.blank? or self.where[table].blank?)
insert_statement = "INSERT INTO #{target_tmp_table} #{select_clause}"
rename_statement = "RENAME TABLE #{target_table} TO #{target_old_table}, #{target_tmp_table} TO #{target_table}"
cleanup_statement = "DROP TABLE IF EXISTS #{target_old_table}"
statements_to_run = [prime_statement_1, prime_statement_2, create_statement, insert_statement, rename_statement, cleanup_statement]
statements_to_run.each do |statement|
@target.execute(statement)
end
end
end
def mysqldump_command_prefix
"mysqldump --compact=TRUE --max_allowed_packet=100663296 --extended-insert=TRUE --lock-tables=FALSE --add-locks=FALSE --add-drop-table=FALSE"
end
def remote_mysqldump
@tmp_file_name = "mysql_mirror_#{@start_time.to_i}.sql"
tables = get_tables.map(&:to_s).join(" ")
- where_statement = self.where.values.first
- where = self.where.blank? ? "" : "--where=\"#{where_statement}\""
+
+ if self.where.blank?
+ where = ""
+ else
+ where_statement = self.where.values.first
+ where = "--where=\"#{where_statement}\""
+ end
+
config = "-u#{@source_config[:username]} -p'#{@source_config[:password]}' -h #{@source_config[:host]} #{@source_config[:database]}"
the_cmd = "#{mysqldump_command_prefix} #{where} #{config} #{tables} > #{@tmp_file_name}"
puts the_cmd
`#{the_cmd}`
end
def remote_tmp_file_table_rename
create_or_insert_regex = Regexp.new('(^CREATE TABLE|^INSERT INTO)( `)(.+?)(`)(.+)')
new_file_name = @tmp_file_name + ".replaced.sql"
new_file = File.new(new_file_name, "w")
IO.foreach(@tmp_file_name) do |line|
if match_data = line.match(create_or_insert_regex)
table_name = match_data[3]
new_table_name = "#{table_name}_#{@start_time.to_i}"
new_file.puts match_data[1] + match_data[2] + new_table_name + match_data[4]+ match_data[5]
else
new_file.puts line
end
end
new_file.close
# replace dump'd sql file with this gsub'd one
FileUtils.move(new_file_name, @tmp_file_name)
end
def remote_insert_command
config = "-u#{@target_config[:username]} -p'#{@target_config[:password]}' -h #{@target_config[:host]} #{@target_config[:database]}"
the_cmd = "mysql #{config} < #{@tmp_file_name}"
`#{the_cmd}`
end
def remote_rename_tmp_tables
get_tables.each do |table|
tmp_table_name = "#{table}_#{@start_time.to_i}"
old_table_name = "#{table}_OldMarkedToDelete"
@target.transaction do
@target.execute("DROP TABLE IF EXISTS #{old_table_name}")
@target.execute("RENAME TABLE #{table} TO #{old_table_name}, #{tmp_table_name} TO #{table}")
@target.execute("DROP TABLE IF EXISTS #{old_table_name}")
end
end
end
def remote_remove_tmp_file
FileUtils.rm(@tmp_file_name)
end
def get_tables
the_tables = self.tables.blank? ? @source.select_values("SHOW TABLES").map!(&:to_sym) : self.tables
end
def get_configuration(env_or_hash)
config = env_or_hash
if(env_or_hash.is_a? Symbol)
config = ActiveRecord::Base.configurations[env_or_hash.to_s]
end
config.symbolize_keys
end
end
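The cross-host fix above is purely an ordering change: the previous code called `self.where.values.first` before testing `blank?`, so any mirror built without a `:where` option crashed on `nil.values` as soon as the dump command was assembled. A tiny sketch of the failure and the guard, with a hypothetical `where` value:

    # Sketch: the guard must run before touching the hash.
    where = nil                          # no :where option was given
    # where.values.first                 # would raise NoMethodError on nil

    flag = if where.nil? || where.empty? # the gem uses ActiveSupport's blank?
      ""                                 # mysqldump runs unscoped
    else
      "--where=\"#{where.values.first}\""
    end
    puts flag.inspect                    # => ""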
|
pjleonhardt/mysql_mirror
|
77ed548685f79d99e1dcb61246aeee439cfd2e29
|
Version Bump: 0.1.1 -> 0.1.2
|
diff --git a/mysql_mirror.gemspec b/mysql_mirror.gemspec
index 60db834..2766167 100644
--- a/mysql_mirror.gemspec
+++ b/mysql_mirror.gemspec
@@ -1,40 +1,40 @@
# Generated by jeweler
# DO NOT EDIT THIS FILE DIRECTLY
# Instead, edit Jeweler::Tasks in Rakefile, and run the gemspec command
# -*- encoding: utf-8 -*-
Gem::Specification.new do |s|
s.name = %q{mysql_mirror}
- s.version = "0.1.1"
+ s.version = "0.1.2"
s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
s.authors = ["Peter Leonhardt", "Joe Goggins"]
- s.date = %q{2010-03-26}
+ s.date = %q{2010-03-29}
s.description = %q{Will mirror tables / databases between mysql databases and across hosts}
s.email = %q{peterleonhardt@gmail.com}
s.extra_rdoc_files = [
"README.rdoc"
]
s.files = [
"README.rdoc",
"Rakefile",
"VERSION",
"lib/mysql_mirror.rb"
]
s.homepage = %q{http://github.com/pjleonhardt/mysql_mirror}
s.rdoc_options = ["--charset=UTF-8"]
s.require_paths = ["lib"]
s.rubygems_version = %q{1.3.6}
s.summary = %q{Helps mirror MySql Databases}
if s.respond_to? :specification_version then
current_version = Gem::Specification::CURRENT_SPECIFICATION_VERSION
s.specification_version = 3
if Gem::Version.new(Gem::RubyGemsVersion) >= Gem::Version.new('1.2.0') then
else
end
else
end
end
|
pjleonhardt/mysql_mirror
|
f4488ee24368fc262ff06c7fc513d96247bf1864
|
Version bump to 0.1.2
|
diff --git a/VERSION b/VERSION
index 17e51c3..d917d3e 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-0.1.1
+0.1.2
|
pjleonhardt/mysql_mirror
|
440a065ac58773b57afcf75f8ebabfdf4f23886c
|
Bugfix for CLI where clause. Adding to_s method
|
diff --git a/lib/mysql_mirror.rb b/lib/mysql_mirror.rb
index cf89a25..5e6df26 100644
--- a/lib/mysql_mirror.rb
+++ b/lib/mysql_mirror.rb
@@ -1,190 +1,194 @@
require 'active_record'
require 'fileutils'
class MysqlMirror
class MysqlMirrorException < Exception; end
class Source < ActiveRecord::Base
end
class Target < ActiveRecord::Base
end
attr_accessor :tables, :where
def initialize(options = {})
unless ([:source, :target] - options.keys).blank?
# Need to specify a Source and Target database
raise MysqlMirrorException.new("You must specify both Source and Target connections")
end
self.tables = options.delete(:tables)
self.where = options.delete(:where)
overrides = options.delete(:override) || {}
source_override = overrides.delete(:source) || {}
target_override = overrides.delete(:target) || {}
@source_config = get_configuration(options.delete(:source))
@target_config = get_configuration(options.delete(:target))
@source_config.merge!(source_override)
@target_config.merge!(target_override)
# @commands is an array of methods to call
if mirroring_same_host?
@commands = commands_for_local_mirror
else
@commands = commands_for_remote_mirror
end
end
def commands_for_local_mirror
[:local_copy]
end
def commands_for_remote_mirror
[
:remote_mysqldump,
:remote_tmp_file_table_rename,
:remote_insert_command,
:remote_rename_tmp_tables,
:remote_remove_tmp_file
]
end
def mirroring_same_host?
@source_config[:host] == @target_config[:host]
end
def execute!
@start_time = Time.now
@source = connect_to(:source)
@target = connect_to(:target)
@commands.each do |c|
self.send(c)
end
end
+ def to_s
+ "Mirroring #{self.tables.join(', ')} from #{@source_config[:host]}.#{@source_config[:database]} to #{@target_config[:host]}.#{@target_config[:database]}"
+ end
private
# e.g, connect_to(:source)
# => MysqlMirror::Source.establish_connection(@source_config).connection
#
def connect_to(which)
"MysqlMirror::#{which.to_s.classify}".constantize.establish_connection(self.instance_variable_get("@#{which}_config")).connection
end
def local_copy
get_tables.each do |table|
target_db = @target_config[:database]
source_db = @source_config[:database]
target_table = "#{target_db}.#{table}"
target_tmp_table = "#{target_db}.#{table}_MirrorTmp"
target_old_table = "#{target_db}.#{table}_OldMarkedToDelete"
source_table = "#{source_db}.#{table}"
prime_statement_1 = "DROP TABLE IF EXISTS #{target_tmp_table}"
prime_statement_2 = "CREATE TABLE IF NOT EXISTS #{target_table} LIKE #{source_table}"
create_statement = "CREATE TABLE #{target_tmp_table} LIKE #{source_table}"
select_clause = "SELECT * FROM #{source_table}"
select_clause << " WHERE #{self.where[table]}" unless (self.where.blank? or self.where[table].blank?)
insert_statement = "INSERT INTO #{target_tmp_table} #{select_clause}"
rename_statement = "RENAME TABLE #{target_table} TO #{target_old_table}, #{target_tmp_table} TO #{target_table}"
cleanup_statement = "DROP TABLE IF EXISTS #{target_old_table}"
statements_to_run = [prime_statement_1, prime_statement_2, create_statement, insert_statement, rename_statement, cleanup_statement]
statements_to_run.each do |statement|
@target.execute(statement)
end
end
end
def mysqldump_command_prefix
"mysqldump --compact=TRUE --max_allowed_packet=100663296 --extended-insert=TRUE --lock-tables=FALSE --add-locks=FALSE --add-drop-table=FALSE"
end
def remote_mysqldump
@tmp_file_name = "mysql_mirror_#{@start_time.to_i}.sql"
tables = get_tables.map(&:to_s).join(" ")
- where = self.where.blank? ? "" : "--where\"#{@source_config[:where]}\""
+ where_statement = self.where.values.first
+ where = self.where.blank? ? "" : "--where=\"#{where_statement}\""
config = "-u#{@source_config[:username]} -p'#{@source_config[:password]}' -h #{@source_config[:host]} #{@source_config[:database]}"
the_cmd = "#{mysqldump_command_prefix} #{where} #{config} #{tables} > #{@tmp_file_name}"
puts the_cmd
`#{the_cmd}`
end
def remote_tmp_file_table_rename
create_or_insert_regex = Regexp.new('(^CREATE TABLE|^INSERT INTO)( `)(.+?)(`)(.+)')
new_file_name = @tmp_file_name + ".replaced.sql"
new_file = File.new(new_file_name, "w")
IO.foreach(@tmp_file_name) do |line|
if match_data = line.match(create_or_insert_regex)
table_name = match_data[3]
new_table_name = "#{table_name}_#{@start_time.to_i}"
new_file.puts match_data[1] + match_data[2] + new_table_name + match_data[4]+ match_data[5]
else
new_file.puts line
end
end
new_file.close
# replace dump'd sql file with this gsub'd one
FileUtils.move(new_file_name, @tmp_file_name)
end
def remote_insert_command
config = "-u#{@target_config[:username]} -p'#{@target_config[:password]}' -h #{@target_config[:host]} #{@target_config[:database]}"
the_cmd = "mysql #{config} < #{@tmp_file_name}"
`#{the_cmd}`
end
def remote_rename_tmp_tables
get_tables.each do |table|
tmp_table_name = "#{table}_#{@start_time.to_i}"
old_table_name = "#{table}_OldMarkedToDelete"
@target.transaction do
@target.execute("DROP TABLE IF EXISTS #{old_table_name}")
@target.execute("RENAME TABLE #{table} TO #{old_table_name}, #{tmp_table_name} TO #{table}")
@target.execute("DROP TABLE IF EXISTS #{old_table_name}")
end
end
end
def remote_remove_tmp_file
FileUtils.rm(@tmp_file_name)
end
def get_tables
the_tables = self.tables.blank? ? @source.select_values("SHOW TABLES").map!(&:to_sym) : self.tables
end
def get_configuration(env_or_hash)
config = env_or_hash
if(env_or_hash.is_a? Symbol)
config = ActiveRecord::Base.configurations[env_or_hash.to_s]
end
config.symbolize_keys
end
end
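`remote_tmp_file_table_rename` above rewrites the dump file in place so that every CREATE TABLE and INSERT INTO statement targets a timestamp-suffixed temp table, which `remote_rename_tmp_tables` later swaps into place. A sketch of that substitution on one hypothetical dump line, using the same regex as the gem:

    # Sketch: the gem's capture groups applied to a single dump line.
    regex = Regexp.new('(^CREATE TABLE|^INSERT INTO)( `)(.+?)(`)(.+)')
    line  = "CREATE TABLE `users` (id int)"   # hypothetical dump line
    if md = line.match(regex)
      puts md[1] + md[2] + "#{md[3]}_1270500000" + md[4] + md[5]
      # => CREATE TABLE `users_1270500000` (id int)
    end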
|
pjleonhardt/mysql_mirror
|
8a7752d553db9c12f8822f13c85408c1e4e69bb6
|
Gemifying mysql_mirror
|
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..088af20
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+pkg/*
diff --git a/README.rdoc b/README.rdoc
index 06efc15..79245a5 100644
--- a/README.rdoc
+++ b/README.rdoc
@@ -1,43 +1,43 @@
= Mysql Mirror
Use MysqlMirror to mirror data between databases. This can be useful when you want to update your
development or staging environments with real data to work with. Or, if you do some heavy lifting
calculations on another server, you might want to use a separate database on another host, etc.
=== General Approach
- - Mirror Across Hosts: performs a mysql_dump to an sql file, then imports file to target host
- - Mirror Same Host: uses CREATE TABLE ( SELECT ... ) style for mirroring. Much faster than mysql_dump
+- Mirror Across Hosts: performs a mysql_dump to an sql file, then imports file to target host
+- Mirror Same Host: uses CREATE TABLE ( SELECT ... ) style for mirroring. Much faster than mysql_dump
Note:
ALL information will be lost in the tables mirrored to the Target Database
== Dependencies
- Active Record
- FileUtils
== Usage
Basic usage, copy production db to development
@m = MysqlMirror.new({
:source => :production,
:target => :development
})
Choose what tables you want to bring over and how you want to scope them...
@m = MysqlMirror.new({
:source => :production,
:target => :development,
:tables => [:users, :widgets],
:where => {:users => "is_admin IS NOT NULL"},
})
Database information not in your database.yml file? (Or Not Running Rails?) No Problem!
@m = MysqlMirror.new({
:source => { :database => "app_production", :user => ..., :password => ..., :hostname => ...},
:target => {:database => "app_development", :hostname => 'localhost'}
})
Want to use everything in :production environment (user, pass, host) but need to change the database?
@m = MysqlMirror.new({
:source => :production,
:override => {:source => {:database => "heavy_calculations_database"}},
:target => :production
})
diff --git a/Rakefile b/Rakefile
new file mode 100644
index 0000000..f33b18f
--- /dev/null
+++ b/Rakefile
@@ -0,0 +1,20 @@
+require 'rubygems'
+require 'rake'
+
+begin
+ require 'jeweler'
+ Jeweler::Tasks.new do |gemspec|
+ gemspec.name = "mysql_mirror"
+ gemspec.summary = "Helps mirror MySql Databases"
+ gemspec.description = "Will mirror tables / databases between mysql databases and across hosts"
+ gemspec.email = "peterleonhardt@gmail.com"
+ gemspec.homepage = "http://github.com/pjleonhardt/mysql_mirror"
+ gemspec.authors = ["Peter Leonhardt", "Joe Goggins"]
+ gemspec.files = FileList["[A-Z]*", "lib/mysql_mirror.rb"]
+ end
+ Jeweler::GemcutterTasks.new
+rescue LoadError
+ puts "Jeweler not available. Please install the jeweler gem."
+end
+
+Dir["#{File.dirname(__FILE__)}/tasks/*.rake"].sort.each { |ext| load ext }
\ No newline at end of file
diff --git a/mysql_mirror.rb b/lib/mysql_mirror.rb
similarity index 100%
rename from mysql_mirror.rb
rename to lib/mysql_mirror.rb
diff --git a/mysql_mirror.gemspec b/mysql_mirror.gemspec
new file mode 100644
index 0000000..60db834
--- /dev/null
+++ b/mysql_mirror.gemspec
@@ -0,0 +1,40 @@
+# Generated by jeweler
+# DO NOT EDIT THIS FILE DIRECTLY
+# Instead, edit Jeweler::Tasks in Rakefile, and run the gemspec command
+# -*- encoding: utf-8 -*-
+
+Gem::Specification.new do |s|
+ s.name = %q{mysql_mirror}
+ s.version = "0.1.1"
+
+ s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
+ s.authors = ["Peter Leonhardt", "Joe Goggins"]
+ s.date = %q{2010-03-26}
+ s.description = %q{Will mirror tables / databases between mysql databases and across hosts}
+ s.email = %q{peterleonhardt@gmail.com}
+ s.extra_rdoc_files = [
+ "README.rdoc"
+ ]
+ s.files = [
+ "README.rdoc",
+ "Rakefile",
+ "VERSION",
+ "lib/mysql_mirror.rb"
+ ]
+ s.homepage = %q{http://github.com/pjleonhardt/mysql_mirror}
+ s.rdoc_options = ["--charset=UTF-8"]
+ s.require_paths = ["lib"]
+ s.rubygems_version = %q{1.3.6}
+ s.summary = %q{Helps mirror MySql Databases}
+
+ if s.respond_to? :specification_version then
+ current_version = Gem::Specification::CURRENT_SPECIFICATION_VERSION
+ s.specification_version = 3
+
+ if Gem::Version.new(Gem::RubyGemsVersion) >= Gem::Version.new('1.2.0') then
+ else
+ end
+ else
+ end
+end
+
|
pjleonhardt/mysql_mirror
|
b517dfd919851e946d92a556c5d49df41b98e4dc
|
Version bump to 0.1.1
|
diff --git a/VERSION b/VERSION
index 6e8bf73..17e51c3 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-0.1.0
+0.1.1
|
pjleonhardt/mysql_mirror
|
f73276477b734393ef2e0c7056f5df8f19f86c7a
|
Version bump to 0.1.0
|
diff --git a/VERSION b/VERSION
index 77d6f4c..6e8bf73 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-0.0.0
+0.1.0
|
pjleonhardt/mysql_mirror
|
2752fef8b2cc370fb909d9f05de47e16d1ac8164
|
Version bump to 0.0.0
|
diff --git a/VERSION b/VERSION
new file mode 100644
index 0000000..77d6f4c
--- /dev/null
+++ b/VERSION
@@ -0,0 +1 @@
+0.0.0
|
pjleonhardt/mysql_mirror
|
8d0843793264fd90a30b3cd36c7084f5e944f6f6
|
Tweaking README
|
diff --git a/README.rdoc b/README.rdoc
index a5463f1..06efc15 100644
--- a/README.rdoc
+++ b/README.rdoc
@@ -1,42 +1,43 @@
+= Mysql Mirror
Use MysqlMirror to mirror data between databases. This can be useful when you want to update your
development or staging environments with real data to work with. Or, if you do some heavy lifting
calculations on another server, you might want to use a separate database on another host, etc.
-= General Approach =
+=== General Approach
- Mirror Across Hosts: performs a mysql_dump to an sql file, then imports file to target host
- Mirror Same Host: uses CREATE TABLE ( SELECT ... ) style for mirroring. Much faster than mysql_dump
Note:
ALL information will be lost in the tables mirrored to the Target Database
-== Dependencies ==
- - Active Record
- - FileUtils
+== Dependencies
+- Active Record
+- FileUtils
-== Usage ==
+== Usage
Basic usage, copy production db to development
@m = MysqlMirror.new({
:source => :production,
:target => :development
})
- Choose what tables you want to bring over and how you want to scope them...
+Choose what tables you want to bring over and how you want to scope them...
@m = MysqlMirror.new({
:source => :production,
:target => :development,
:tables => [:users, :widgets],
:where => {:users => "is_admin IS NOT NULL"},
})
- Database information not in your database.yml file? (Or Not Running Rails?) No Problem!
+Database information not in your database.yml file? (Or Not Running Rails?) No Problem!
@m = MysqlMirror.new({
:source => { :database => "app_production", :user => ..., :password => ..., :hostname => ...},
:target => {:database => "app_development", :hostname => 'localhost'}
})
- Want to use everything in :production environment (user, pass, host) but need to change the database?
+Want to use everything in :production environment (user, pass, host) but need to change the database?
@m = MysqlMirror.new({
:source => :production,
:override => {:source => {:database => "heavy_calculations_database"}},
:target => :production
})
|
pjleonhardt/mysql_mirror
|
99ba4e48b2e801bd46b311b9b73c1c231ac2907b
|
Moved doc to README, renaming from MysqlMirror2 -> MysqlMirror
|
diff --git a/README.rdoc b/README.rdoc
new file mode 100644
index 0000000..a5463f1
--- /dev/null
+++ b/README.rdoc
@@ -0,0 +1,42 @@
+Use MysqlMirror to mirror data between databases. This can be useful when you want to update your
+development or staging environments with real data to work with. Or, if you do some heavy lifting
+calculations on another server, you might want to use a separate database on another host, etc.
+
+= General Approach =
+ - Mirror Across Hosts: performs a mysql_dump to an sql file, then imports file to target host
+ - Mirror Same Host: uses CREATE TABLE ( SELECT ... ) style for mirroring. Much faster than mysql_dump
+
+Note:
+ALL information will be lost in the tables mirrored to the Target Database
+
+== Dependencies ==
+ - Active Record
+ - FileUtils
+
+== Usage ==
+Basic usage, copy production db to development
+ @m = MysqlMirror.new({
+ :source => :production,
+ :target => :development
+ })
+
+ Choose what tables you want to bring over and how you want to scope them...
+ @m = MysqlMirror.new({
+ :source => :production,
+ :target => :development,
+ :tables => [:users, :widgets],
+ :where => {:users => "is_admin IS NOT NULL"},
+ })
+
+ Database information not in your database.yml file? (Or Not Running Rails?) No Problem!
+ @m = MysqlMirror.new({
+ :source => { :database => "app_production", :user => ..., :password => ..., :hostname => ...},
+ :target => {:database => "app_development", :hostname => 'localhost'}
+ })
+
+ Want to use everything in :production environment (user, pass, host) but need to change the database?
+ @m = MysqlMirror.new({
+ :source => :production,
+ :override => {:source => {:database => "heavy_calculations_database"}},
+ :target => :production
+ })
diff --git a/mysql_mirror.rb b/mysql_mirror.rb
index 5cf15c2..cf89a25 100644
--- a/mysql_mirror.rb
+++ b/mysql_mirror.rb
@@ -1,231 +1,190 @@
-# Use MysqlMirror to mirror data between databases. This can be useful when you want to update your
-# development or staging environments with real data to work with. Or, if you do some heavy lifting
-# calculations on another server, you might want to use a seperate database on another host, etc.
-#
-#= General approach=
-# - Mirror Across Hosts: performs a mysql_dump to an sql file, then imports file to target host
-# - Mirror Same Host: uses CREATE TABLE ( SELECT ... ) style for mirroring. Much faster than mysql_dump
-#
-# Note:
-# ALL information will be lost in the tables mirrored to the Target Database
-#
-# == Usage ==
-# Basic usage, copy production db to development
-# @m = MysqlMirror.new({
-# :source => :production,
-# :target => :development
-# })
-#
-# Choose what tables you want to bring over and how you want to scope them...
-# @m = MysqlMirror.new({
-# :source => :production,
-# :target => :development,
-# :tables => [:users, :widgets],
-# :where => {:users => "is_admin NOT NULL"},
-# })
-#
-# Database information not in your database.yml file? (Or Not Running Rails?) No Problem!
-# @m = MysqlMirror.new({
-# :source => { :database => "app_production", :user => ..., :password => ..., :hostname => ...},
-# :target => {:database => "app_development", :hostname => 'localhost'}
-# })
-#
-# Want to use everything in :production environment (user, pass, host) but need to change the database?
-# @m = MysqlMirror.new({
-# :source => :production,
-# :override => {:source => {:database => "heavy_calculations_database"}},
-# :target => :production
-# })
-#
-#
-#
require 'active_record'
require 'fileutils'
-class MysqlMirror2
+class MysqlMirror
class MysqlMirrorException < Exception; end
class Source < ActiveRecord::Base
end
class Target < ActiveRecord::Base
end
attr_accessor :tables, :where
def initialize(options = {})
unless ([:source, :target] - options.keys).blank?
# Need to specify a Source and Target database
raise MysqlMirrorException.new("You must specify both Source and Target connections")
end
self.tables = options.delete(:tables)
self.where = options.delete(:where)
overrides = options.delete(:override) || {}
source_override = overrides.delete(:source) || {}
target_override = overrides.delete(:target) || {}
@source_config = get_configuration(options.delete(:source))
@target_config = get_configuration(options.delete(:target))
@source_config.merge!(source_override)
@target_config.merge!(target_override)
# @commands is an array of methods to call
if mirroring_same_host?
@commands = commands_for_local_mirror
else
@commands = commands_for_remote_mirror
end
end
def commands_for_local_mirror
[:local_copy]
end
def commands_for_remote_mirror
[
:remote_mysqldump,
:remote_tmp_file_table_rename,
:remote_insert_command,
:remote_rename_tmp_tables,
:remote_remove_tmp_file
]
end
def mirroring_same_host?
@source_config[:host] == @target_config[:host]
end
def execute!
@start_time = Time.now
@source = connect_to(:source)
@target = connect_to(:target)
@commands.each do |c|
self.send(c)
end
end
private
# e.g, connect_to(:source)
# => MysqlMirror::Source.establish_connection(@source_config).connection
#
def connect_to(which)
- "MysqlMirror2::#{which.to_s.classify}".constantize.establish_connection(self.instance_variable_get("@#{which}_config")).connection
+ "MysqlMirror::#{which.to_s.classify}".constantize.establish_connection(self.instance_variable_get("@#{which}_config")).connection
end
def local_copy
get_tables.each do |table|
target_db = @target_config[:database]
source_db = @source_config[:database]
target_table = "#{target_db}.#{table}"
target_tmp_table = "#{target_db}.#{table}_MirrorTmp"
target_old_table = "#{target_db}.#{table}_OldMarkedToDelete"
source_table = "#{source_db}.#{table}"
prime_statement_1 = "DROP TABLE IF EXISTS #{target_tmp_table}"
prime_statement_2 = "CREATE TABLE IF NOT EXISTS #{target_table} LIKE #{source_table}"
create_statement = "CREATE TABLE #{target_tmp_table} LIKE #{source_table}"
select_clause = "SELECT * FROM #{source_table}"
select_clause << " WHERE #{self.where[table]}" unless (self.where.blank? or self.where[table].blank?)
insert_statement = "INSERT INTO #{target_tmp_table} #{select_clause}"
rename_statement = "RENAME TABLE #{target_table} TO #{target_old_table}, #{target_tmp_table} TO #{target_table}"
cleanup_statement = "DROP TABLE IF EXISTS #{target_old_table}"
statements_to_run = [prime_statement_1, prime_statement_2, create_statement, insert_statement, rename_statement, cleanup_statement]
statements_to_run.each do |statement|
@target.execute(statement)
end
end
end
def mysqldump_command_prefix
"mysqldump --compact=TRUE --max_allowed_packet=100663296 --extended-insert=TRUE --lock-tables=FALSE --add-locks=FALSE --add-drop-table=FALSE"
end
def remote_mysqldump
@tmp_file_name = "mysql_mirror_#{@start_time.to_i}.sql"
tables = get_tables.map(&:to_s).join(" ")
where = self.where.blank? ? "" : "--where\"#{@source_config[:where]}\""
config = "-u#{@source_config[:username]} -p'#{@source_config[:password]}' -h #{@source_config[:host]} #{@source_config[:database]}"
the_cmd = "#{mysqldump_command_prefix} #{where} #{config} #{tables} > #{@tmp_file_name}"
puts the_cmd
`#{the_cmd}`
end
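# Rewrites the dump file so every CREATE TABLE / INSERT INTO statement
# targets a <table>_<timestamp> staging table; remote_rename_tmp_tables
# later swaps the staged tables into place.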
def remote_tmp_file_table_rename
create_or_insert_regex = Regexp.new('(^CREATE TABLE|^INSERT INTO)( `)(.+?)(`)(.+)')
new_file_name = @tmp_file_name + ".replaced.sql"
new_file = File.new(new_file_name, "w")
IO.foreach(@tmp_file_name) do |line|
if match_data = line.match(create_or_insert_regex)
table_name = match_data[3]
new_table_name = "#{table_name}_#{@start_time.to_i}"
new_file.puts match_data[1] + match_data[2] + new_table_name + match_data[4]+ match_data[5]
else
new_file.puts line
end
end
new_file.close
# replace dump'd sql file with this gsub'd one
FileUtils.move(new_file_name, @tmp_file_name)
end
def remote_insert_command
config = "-u#{@target_config[:username]} -p'#{@target_config[:password]}' -h #{@target_config[:host]} #{@target_config[:database]}"
the_cmd = "mysql #{config} < #{@tmp_file_name}"
`#{the_cmd}`
end
def remote_rename_tmp_tables
get_tables.each do |table|
tmp_table_name = "#{table}_#{@start_time.to_i}"
old_table_name = "#{table}_OldMarkedToDelete"
@target.transaction do
@target.execute("DROP TABLE IF EXISTS #{old_table_name}")
@target.execute("RENAME TABLE #{table} TO #{old_table_name}, #{tmp_table_name} TO #{table}")
@target.execute("DROP TABLE IF EXISTS #{old_table_name}")
end
end
end
def remote_remove_tmp_file
FileUtils.rm(@tmp_file_name)
end
def get_tables
self.tables.blank? ? @source.select_values("SHOW TABLES").map(&:to_sym) : self.tables
end
def get_configuration(env_or_hash)
config = env_or_hash
if(env_or_hash.is_a? Symbol)
config = ActiveRecord::Base.configurations[env_or_hash.to_s]
end
config.symbolize_keys
end
end
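# A minimal usage sketch (assumes ActiveRecord::Base.configurations already
# contains entries for the named environments, e.g. loaded from database.yml):
#
#   mirror = MysqlMirror.new(:source => :production, :target => :development)
#   mirror.execute!  # runs the local or remote command chain as appropriate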
|
rubyunworks/xmlproof
|
b874daae001cab17af9d56c37e67b8d2ee0795ec
|
upload CHSF demos
|
diff --git a/demo/chsf/data.xml b/demo/chsf/data.xml
new file mode 100644
index 0000000..e868039
--- /dev/null
+++ b/demo/chsf/data.xml
@@ -0,0 +1,10 @@
+<customer>
+ <name>Joe Blow</name>
+ <phone>505-995-4066</phone>
+ <address>
+ 118 San Salvador
+ Santa Fe NM, 87501
+ </address>
+ <on_account>Yes</on_account>
+ <balance>0.00</balance>
+</customer>
diff --git a/demo/chsf/proof.xml b/demo/chsf/proof.xml
new file mode 100644
index 0000000..4355862
--- /dev/null
+++ b/demo/chsf/proof.xml
@@ -0,0 +1,7 @@
+<customer count="[1..1]">
+ <name trace="index">.+</name>
+ <phone>.+</phone>
+ <address>.+</address>
+ <on_account>Yes|No|Y|N</on_account>
+ <balance trace="aggregate">.+</balance>
+</customer>
diff --git a/demo/chsf/rule.xml b/demo/chsf/rule.xml
new file mode 100644
index 0000000..5805a6d
--- /dev/null
+++ b/demo/chsf/rule.xml
@@ -0,0 +1,9 @@
+<customer>
+ <on_account>if name == "tom" then "Yes"</on_account>
+</customer>
+
+<customers>
+ <customer>
+ puts name
+ </customer>
+</customers>
diff --git a/demo/chsf/table.sql b/demo/chsf/table.sql
new file mode 100644
index 0000000..e7e5ef5
--- /dev/null
+++ b/demo/chsf/table.sql
@@ -0,0 +1,15 @@
+CREATE TABLE document
+(
+ id PRIMARY KEY,
+ root text,
+ xml text
+);
+
+CREATE TABLE trace
+(
+ id PRIMARY KEY,
+ docid int REFERENCES document(id),
+ type varchar(1),
+ name text,
+ content text
+);
diff --git a/demo/chsf/work/data.xml b/demo/chsf/work/data.xml
new file mode 100644
index 0000000..e868039
--- /dev/null
+++ b/demo/chsf/work/data.xml
@@ -0,0 +1,10 @@
+<customer>
+ <name>Joe Blow</name>
+ <phone>505-995-4066</phone>
+ <address>
+ 118 San Salvador
+ Santa Fe NM, 87501
+ </address>
+ <on_account>Yes</on_account>
+ <balance>0.00</balance>
+</customer>
diff --git a/demo/chsf/work/proof.xml b/demo/chsf/work/proof.xml
new file mode 100644
index 0000000..4355862
--- /dev/null
+++ b/demo/chsf/work/proof.xml
@@ -0,0 +1,7 @@
+<customer count="[1..1]">
+ <name trace="index">.+</name>
+ <phone>.+</phone>
+ <address>.+</address>
+ <on_account>Yes|No|Y|N</on_account>
+ <balance trace="aggregate">.+</balance>
+</customer>
diff --git a/demo/chsf/work/rule.xml b/demo/chsf/work/rule.xml
new file mode 100644
index 0000000..5805a6d
--- /dev/null
+++ b/demo/chsf/work/rule.xml
@@ -0,0 +1,9 @@
+<customer>
+ <on_account>if name == "tom" then "Yes"</on_account>
+</customer>
+
+<customers>
+ <customer>
+ puts name
+ </customer>
+</customers>
diff --git a/demo/chsf/work/table.sql b/demo/chsf/work/table.sql
new file mode 100644
index 0000000..e7e5ef5
--- /dev/null
+++ b/demo/chsf/work/table.sql
@@ -0,0 +1,15 @@
+CREATE TABLE document
+(
+ id PRIMARY KEY,
+ root text,
+ xml text
+);
+
+CREATE TABLE trace
+(
+ id PRIMARY KEY,
+ docid int REFERENCES document(id),
+ type varchar(1),
+ name text,
+ content text
+);
diff --git a/demo/chsf/work/tkxml.rb b/demo/chsf/work/tkxml.rb
new file mode 100644
index 0000000..3e893a3
--- /dev/null
+++ b/demo/chsf/work/tkxml.rb
@@ -0,0 +1,167 @@
+# TkXML
+# by Thomas Sawyer (transami@runbox.com)
+# version April 2002 - Alpha (2.04a)
+# A combination of Ruby/Tk + REXML to allow for fast and easy creation of Tk interfaces using standard XML
+
+require 'rexml/document'
+require 'tk'
+
+include REXML
+
+class TkXML
+
+ def initialize(source)
+ @listener = TkXML_Listener.new
+ Document.parse_stream(source, @listener)
+ end
+
+ def build
+ Tk.mainloop
+ end
+
+end
+
+class TkXML_Listener
+
+ def initialize
+ puts "Initializing TkXML_Listner"
+ @widget = Hash.new
+ @widget_stack = Array.new
+ @parent = nil
+ end
+
+ def tag_start name, attributes
+
+ # get parent, the widget on the bottom of the stack
+ @parent = @widget_stack.last
+
+ # pull off the tag name if prefixed with the Tk namespace
+ if name[0..2] == "Tk:"
+ tag_name = name[3..name.length]
+ else
+ tag_name = name
+ end
+
+ # looks like the attributes object given is nothing more than an array of arrays. how lame!
+ # this will turn it into a hash
+ attr_hash = Hash.new
+ attributes.each do |a|
+ attr_hash[a[0]] = a[1]
+ end
+
+ # okay, lets do this thing
+
+ # is it a method call or a new widget?
+ if tag_name[0..0] == '_'
+
+ # apply method
+ puts "Applying Method: #{name} to #{@parent}"
+
+ # get method name
+ meth_name = tag_name[1..tag_name.length]
+
+ # assign the method's parameters
+ p_arr = Array.new # array for parameters to be passed
+ p_init = Hash.new # for the ordered arguments _1 _2 etc.
+ p_hash = Hash.new # for all other named parameters
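+ # e.g. <Tk:_add _1="command" label="Open"/> ends up calling parent.add("command", {"label" => "Open"})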
+
+ # weed out the ordered parameters from the hash parameters
+ attr_hash.each do |n, v|
+ puts " #{n} => #{v}"
+ if n[0..0] == "_"
+ p_init[n] = v
+ else
+ p_hash[n] = v
+ end
+ end
+
+ # sort the ordered parameters based on the hash key
+ # note: sorting converts p_init into an associative array of [key, value] pairs
+ # then place each value in the parameter array
+ if not p_init.empty?
+ p_init.sort.each do |a|
+ p_arr.push a[1]
+ end
+ end
+
+ # now add the hash to the array if there is one
+ if not p_hash.empty?
+ p_arr.push p_hash
+ end
+
+ # call the method
+ @parent.send(meth_name, *p_arr)
+
+ else
+
+ # create widget
+ puts "Creating Widget: #{name} of #{@parent}"
+
+ widget_class = "Tk" + tag_name.capitalize
+ widget_name = attr_hash['name']
+
+ if @parent == nil
+ @widget[widget_name] = Tk.const_get(widget_class).new
+ else
+ @widget[widget_name] = Tk.const_get(widget_class).new(@parent)
+ end
+
+ # assign the widget properties
+ attr_hash.each do |n, v|
+ if not n == 'name'
+ puts " #{n} => #{v}"
+ @widget[widget_name].send(n, v)
+ end
+ end
+
+ # push widget on to the stack
+ @widget_stack.push(@widget[widget_name])
+
+ end
+
+ end
+
+
+ def tag_end name
+
+ # pull off the tag name if prefixed with the Tk namespace
+ if name[0..2] == "Tk:"
+ tag_name = name[3..name.length]
+ else
+ tag_name = name
+ end
+
+ # if method then we're finished
+ # else if widget then finish creation and pop off the widget stack
+ if tag_name[0..0] == "_"
+
+ puts "End Method: #{name}"
+
+ else
+
+ @parent = @widget_stack[-2]
+ current = @widget_stack.last
+
+ case tag_name.downcase
+ when "menu"
+ @parent.menu(current)
+ end
+
+ # pop current widget off of stack
+ @widget_stack.pop
+
+ puts "End Widget: #{name}"
+
+ end
+
+ end
+
+
+ def text free_radical
+ if not free_radical.strip == ""
+ puts "Error: TkXML does not use XML text entries: #{free_radical}"
+ end
+ end
+
+end
diff --git a/demo/chsf/work/tkxmltest.rb b/demo/chsf/work/tkxmltest.rb
new file mode 100644
index 0000000..89cad0d
--- /dev/null
+++ b/demo/chsf/work/tkxmltest.rb
@@ -0,0 +1,7 @@
+require 'tkxml'
+
+xml_file = File.open("ui.xml")
+tkxml_instance = TkXML.new(xml_file)
+tkxml_instance.build
+
+
diff --git a/demo/chsf/work/ui.xml b/demo/chsf/work/ui.xml
new file mode 100644
index 0000000..2cff3cb
--- /dev/null
+++ b/demo/chsf/work/ui.xml
@@ -0,0 +1,32 @@
+<Tk:Root name="test" title="Test">
+
+ <Tk:Frame name="menuframe">
+ <Tk:Menubutton name="filebutton" text="File" underline="0">
+ <Tk:Menu name="filemenu" tearoff="false">
+ <Tk:_add
+ _1="command"
+ label="Open"
+ command="openDocument"
+ underline="0"
+ accel="Ctrl+O" />
+ <Tk:_add
+ _1="command"
+ label="Exit"
+ command="exitApplication"
+ underline="0"
+ accel="Ctrl+Q" />
+ </Tk:Menu>
+ <Tk:_pack side="left" />
+ </Tk:Menubutton>
+ <Tk:_pack side="left" />
+ </Tk:Frame>
+
+ <Tk:_bind
+ _1="Control-o"
+ _2="openDocument" />
+
+ <Tk:_bind
+ _1="Control-q"
+ _2="exitApplication" />
+
+</Tk:Root>
diff --git a/demo/chsf/work/xmlprove.rb b/demo/chsf/work/xmlprove.rb
new file mode 100644
index 0000000..147ad74
--- /dev/null
+++ b/demo/chsf/work/xmlprove.rb
@@ -0,0 +1,146 @@
+# XML Prove
+# This is a simple XML validator or schema (not to be confused with the XML-Schema).
+# It is simpler in nature than XML-Schema itself.
+# But no XML-Schema validator is available for Ruby at this time.
+# I don't want to write my own XML-Schema validator as REXML is said to have one in the works.
+# In the meantime I will use this, my own basic validator, XML-Proof.
+# Who knows, perhaps it will prove better than XML-Schema in the long run ;-)
+
+require "rexml/document"
+include REXML
+
+class Proof_Listener
+
+ #
+ def initialize(xml_file)
+
+ # input
+ @xmldoc = Document.new(xml_file)
+
+ # thruput
+ @tags = Array.new
+ @errors = Array.new
+
+ end
+
+ # listener function for when a tag element opens
+ def tag_start(name, attr)
+
+ @tags.push(name)
+
+ attr.each do |a|
+ if a[0] != 'count' and a[0] != 'trace'
+ attribute(a[0], a[1])
+ end
+ end
+
+ end
+
+ # listener function for when a tag element closes
+ def tag_end(name)
+
+ @tags.pop
+
+ end
+
+ # called from tag_start to process attribute nodes
+ def attribute(name, text)
+
+ re = Regexp.new(text)
+ xp = build_xpath(@tags) + "/@" + name
+ prove_entry(re, xp)
+
+ end
+
+ # listener function when a text entry is processed
+ def text(text)
+
+ re = Regexp.new(text)
+ xp = build_xpath(@tags)
+ prove_entry(re, xp)
+
+ end
+
+ # this builds an xpath from the tags array
+ def build_xpath(tags_array)
+
+ result = ""
+
+ if tags_array.empty?
+ result = ""
+ else
+ tags_array.each do |a|
+ result = result + "/" + a
+ end
+ end
+
+ return(result)
+
+ end
+
+ # this validates entries against the regular expression
+ def prove_entry(reg_exp, xpath)
+
+ @xmldoc.elements.each(xpath) do |element|
+ md = reg_exp.match(element.text)
+ # error check
+ if md == nil
+ puts "Error at #{xpath}, '#{element.text}' =~ #{reg_exp.source}"
+ @errors.push [[xpath, reg_exp.source]]
+ end
+
+ end
+
+ end
+
+ # this returns the xml document with editorial markups
+ def editorial_markup
+
+ end
+
+ # this returns the array of error results
+ def editorial_errors
+ return @errors
+ end
+
+ # this returns true if the document passed validation, otherwise false
+ def valid?
+ if @errors.empty?
+ return true
+ else
+ return false
+ end
+ end
+
+end
+
+# command line operation
+if $0 == __FILE__
+
+ if ARGV.length < 2
+
+ puts "USAGE: #{$0} proof-sheet xml-document"
+ puts "e.g. #{$0} proof.xml data.xml"
+
+ else
+
+ proof_path = ARGV[0]
+ doc_path = ARGV[1]
+
+ xml_proof = File.new(proof_path)
+ xml_file = File.new(doc_path)
+
+ # validate source file against proof
+ listener = Proof_Listener.new(xml_file)
+ Document.parse_stream(xml_proof, listener)
+
+ # report
+ if listener.valid?
+ puts "GOOD DOCUMENT"
+ else
+ puts "BAD DOCUMENT"
+ end
+
+ end
+
+end
diff --git a/demo/chsf/xmlprove.rb b/demo/chsf/xmlprove.rb
new file mode 100644
index 0000000..147ad74
--- /dev/null
+++ b/demo/chsf/xmlprove.rb
@@ -0,0 +1,146 @@
+# XML Prove
+# This is a simple XML validator or schema (not to be confused with the XML-Schema).
+# It is simpler in nature than XML-Schema itself.
+# But no XML-Schema validator is available for Ruby at this time.
+# I don't want to write my own XML-Schema validator as REXML is said to have one in the works.
+# In the meantime I will use this, my own basic validator, XML-Proof.
+# Who knows, perhaps it will prove better than XML-Schema in the long run ;-)
+
+require "rexml/document"
+include REXML
+
+class Proof_Listener
+
+ #
+ def initialize(xml_file)
+
+ # input
+ @xmldoc = Document.new(xml_file)
+
+ # thruput
+ @tags = Array.new
+ @errors = Array.new
+
+ end
+
+ # listener function for when a tag element opens
+ def tag_start(name, attr)
+
+ @tags.push(name)
+
+ attr.each do |a|
+ if a[0] != 'count' and a[0] != 'trace'
+ attribute(a[0], a[1])
+ end
+ end
+
+ end
+
+ # listener function for when a tag element closes
+ def tag_end(name)
+
+ @tags.pop
+
+ end
+
+ # called from tag_start to process attribute nodes
+ def attribute(name, text)
+
+ re = Regexp.new(text)
+ xp = build_xpath(@tags) + "/@" + name
+ prove_entry(re, xp)
+
+ end
+
+ # listener function when a text entry is processed
+ def text(text)
+
+ re = Regexp.new(text)
+ xp = build_xpath(@tags)
+ prove_entry(re, xp)
+
+ end
+
+ # this builds an xpath from the tags array
+ def build_xpath(tags_array)
+
+ result = ""
+
+ if tags_array.empty?
+ result = ""
+ else
+ tags_array.each do |a|
+ result = result + "/" + a
+ end
+ end
+
+ return(result)
+
+ end
+
+ # this validates entries against the regular expression
+ def prove_entry(reg_exp, xpath)
+
+ @xmldoc.elements.each(xpath) do |element|
+ md = reg_exp.match(element.text)
+ # error check
+ if md == nil
+ puts "Error at #{xpath}, '#{element.text}' =~ #{reg_exp.source}"
+ @errors.push [[xpath, reg_exp.source]]
+ end
+
+ end
+
+ end
+
+ # this returns the xml document with editorial markups
+ def editorial_markup
+
+ end
+
+ # this returns the array of error results
+ def editorial_errors
+ return @errors
+ end
+
+ # this returns true if the document passed validation, otherwise false
+ def valid?
+ if @errors.empty?
+ return true
+ else
+ return false
+ end
+ end
+
+end
+
+# command line operation
+if $0 == __FILE__
+
+ if ARGV.length < 2
+
+ puts "USAGE: #{$0} proof-sheet xml-document"
+ puts "e.g. #{$0} proof.xml data.xml"
+
+ else
+
+ proof_path = ARGV[0]
+ doc_path = ARGV[1]
+
+ xml_proof = File.new(proof_path)
+ xml_file = File.new(doc_path)
+
+ # validate source file against proof
+ listener = Proof_Listener.new(xml_file)
+ Document.parse_stream(xml_proof, listener)
+
+ # report
+ if listener.valid?
+ puts "GOOD DOCUMENT"
+ else
+ puts "BAD DOCUMENT"
+ end
+
+ end
+
+end
|
rubyunworks/xmlproof
|
58b1c450686fb1a60bb865ccb97ac6c182891c85
|
add rerexml to vendor
|
diff --git a/README b/README
index 472a2f7..1a7d304 100644
--- a/README
+++ b/README
@@ -1,97 +1,89 @@
-
= XMLProof/Ruby - An Implementation of xml:Proof for Ruby
==<a schema for the rest of us/>
== Introduction
XMLProof/Ruby is a 100% Ruby API for using xml:Proof, an alternate XML schema.
It was born out of a need to typecast data taken from an XML Document.
Imagine! All this just for that one simple need. But it seemed the
-right way to go about it, and the outcome has produced many additional fruits.
-xml:Proof has prooved to be quite a sophisticated XML schema,
-rivaling all others in capability and ease of use.
-
-== Installation
-
-To install XMLProof simply unpack the gzipped tarball into your local site_ruby path.
-This path is usually +/usr/local/lib/site_ruby/1.6/+.
-An install script is not provided as installation does not require any special files or settings.
+right way to go about it. xml:Proof has prooved to be quite a sophisticated
+XML schema.
== Requirements
XMLProof/Ruby requires:
-* TomsLib and REREXML, available at "http://www.transami.net/files/ruby/index.html".
+* REREXML, available at "http://www.transami.net/files/ruby/index.html".
* REXML, available at "http://www.germane-software.com/software/rexml/".
== Usage
First read the documentation for xml:Proof at "http://www.transami.net/files/ruby/xmlproof/xmlproof.html".
For greater comprehension read the xml:Proof specification at "http://www.transami.net/files/ruby/xmlproof/xmlproof-spec.html".
Example of using the proofreader:
require 'xmlproof/proofreader'
def validate(xml_filename)
# create a proofreader
prover = XMLProof::Proofreader.new
# validate using document's internal schema instructions
valid = prover.proofread_document_internal(xml_filename)
# return results
- if valid
- puts "GOOD DOCUMENT"
- else
+ if valid
+ puts "GOOD DOCUMENT"
+ else
puts "BAD DOCUMENT"
prover.errors.each do |e|
puts "namespace->#{e[0]} \t xpath->#{e[1]} \t error->#{e[2]}"
- end
+ end
end
end
Another using external proofsheets:
require 'xmlproof/proofreader'
def validate_with(xml_filename, *proofsheet_files)
# create a proof
proofsheets = XMLProof::Proofsheets.new
proofsheets.load_proofsheets(*proofsheet_files)
proof = XMLProof::Proof.new(proofsheets)
# create a proofreader using the proof
proofreader = XMLProof::Proofreader.new
proofreader.use_proof(proof)
# validate
valid = proofreader.proofread_document(xml_filename)
# return results
- if valid
- puts "GOOD DOCUMENT"
- else
+ if valid
+ puts "GOOD DOCUMENT"
+ else
puts "BAD DOCUMENT"
proofreader.errors.each do |e|
puts "namespace->#{e[0]} \t xpath->#{e[1]} \t error->#{e[2]}"
- end
+ end
end
end
Until a more detailed tutorial can be written,
please refer to prooftool.rb for more concise examples,
and refer to the API RDocs for further details.
== Authentication
Package:: XMLProof/Ruby
Author:: Thomas Sawyer
Requires:: Ruby 1.6.5+
License:: Copyright (c) 2002 Thomas Sawyer, transami@transami.net under the Ruby License
diff --git a/vendor/rerexml/README b/vendor/rerexml/README
new file mode 100644
index 0000000..e69de29
diff --git a/vendor/rerexml/bin/xmlinherit b/vendor/rerexml/bin/xmlinherit
new file mode 100644
index 0000000..a938b7f
--- /dev/null
+++ b/vendor/rerexml/bin/xmlinherit
@@ -0,0 +1,45 @@
+# REREXML - InheritsTool
+# Copyright (c) 2002 Thomas Sawyer, Ruby License
+#
+# Command line conversion tool for inherits notation.
+
+require 'rerexml/rerexml'
+require 'rerexml/inherits'
+require 'rexml/contrib/prettyxml'
+require 'tomslib/communication'
+
+extend TomsLib::Communication
+
+if $0 == __FILE__
+
+ option = ARGV[0]
+ xml_file = ARGV[1]
+
+ if option == '-t'
+
+ xml_string = fetch_xml(xml_file)
+ xml_document = REXML::Document.new(xml_string)
+ new_document = REXML::NamespaceConversion.to_standard(xml_document)
+
+ out = ''
+ new_document.write(out, -1)
+ puts PrettyXML.pretty(out, 2)
+
+ elsif option == '-f'
+
+ xml_string = fetch_xml(xml_file)
+ xml_document = REXML::Document.new(xml_string)
+ new_document = REXML::NamespaceConversion.from_standard(xml_document)
+
+ out = ''
+ new_document.write(out, -1)
+ puts PrettyXML.pretty(out, 2)
+
+ else
+
+ puts "USEAGE: #{$0} [-f|-t] file.xml"
+
+ end
+
+end
+
diff --git a/vendor/rerexml/demo/example1.xml b/vendor/rerexml/demo/example1.xml
new file mode 100644
index 0000000..aad7e95
--- /dev/null
+++ b/vendor/rerexml/demo/example1.xml
@@ -0,0 +1,25 @@
+<?xml version="1.0" encoding="ISO-8859-1" ?>
+<?xml:ns prefix="example" uri="http://www.transami.net/namespace/testing" ?>
+<?xml:schema url="example1.xps" uri="http://www.transami.net/namespace/xmlproof" ?>
+
+<example:shiporder orderid="889923">
+ <orderperson>John Smith</orderperson>
+ <shipto>
+ <name>Ola Nordmann</name>
+ <address>Langgt 23</address>
+ <city>4000 Stavanger</city>
+ <country>Norway</country>
+ </shipto>
+ <item>
+ <title>Empire Burlesque</title>
+ <note>Special Edition</note>
+ <quantity>1</quantity>
+ <price>10.90</price>
+ </item>
+ <item>
+ <title>Hide your heart</title>
+ <quantity>1</quantity>
+ <overstock>1</overstock>
+ <price>9.90</price>
+ </item>
+</example:shiporder>
diff --git a/vendor/rerexml/lib/rexml/inherits.rb b/vendor/rerexml/lib/rexml/inherits.rb
new file mode 100644
index 0000000..f64c469
--- /dev/null
+++ b/vendor/rerexml/lib/rexml/inherits.rb
@@ -0,0 +1,95 @@
+# REREXML - Inherits
+# Copyright (c) 2002 Thomas Sawyer, Ruby License
+#
+# Enhances REXML to support inherited prefixes and namespaces.
+# Also provides conversion tool.
+# This notation violates W3C Recommendations!
+
+module REXML
+
+ module Namespace
+
+ # Returns prefix if given, otherwise recurses through parents to find a prefix.
+ # NOTE: Does not take into account default prefix-less namespaces, in which case there are no inherited prefixes.
+ # This is in violation of the xml standard ;p Prefixes are not inherited, but they should be!
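+ # e.g. a bare <shipto> nested inside <example:shiporder> inherits the "example" prefix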
+ def inherited_prefix
+ pf = prefix
+ if pf.size > 0
+ return pf # has its own prefix
+ elsif self.type == REXML::Element
+ if parent
+ return parent.inherited_prefix
+ else
+ return nil # warning: element does not belong to a document (null-namespace)
+ end
+ elsif self.type == REXML::Attribute
+ if element
+ return element.inherited_prefix
+ else
+ return nil # warning: attribute does not belong to an element (null-namespace)
+ end
+ elsif self.type == REXML::Document
+ if self.namespace_instructions.length > 0
+ return self.namespace_instructions[0].attributes['prefix']
+ else
+ return nil # null-namespace
+ end
+ else
+ return nil # warning: what the hell is this then? (null-namespace)
+ end
+ end
+
+
+ # Returns the uri space from the matching namespace processing instruction based on the inherited prefix
+ def inherited_namespace #(prefix=inherited_prefix)
+ if self.type == REXML::Element
+ doc = root.parent
+ elsif self.type == REXML::Attribute
+ doc = element.root.parent
+ else
+ doc = nil
+ end
+ if doc
+ # THIS SHOULD BE CHANGED TO FIRST DOC XMLNS ATTRIBUTE, NOT INSTRUCTION
+ ns_pi = doc.namespace_instructions.find { |i| i.attributes['prefix'] == prefix }
+ return ns_pi.attributes['uri']
+ else
+ return nil
+ end
+ end
+
+ end
+
+ #
+ module NamespaceConversion
+
+ #
+ def NamespaceConversion.to_standard(xml_document)
+
+ elements = REXML::XPath.match(xml_document,'//')
+ elements.each do |element|
+ pf = element.prefix
+ if pf.empty?
+ element.name = "#{element.inherited_prefix}:#{element.name}"
+ end
+ end
+
+ xml_document.namespace_instructions.each do |nsi|
+ xml_document.root.add_namespace(nsi.attributes['prefix'], nsi.attributes['uri'])
+ nsi.remove
+ end
+
+ return xml_document
+
+ end
+
+
+ def NamespaceConversion.from_standard(xml_source)
+ # TO DO
+ end
+
+ end
+
+
+end # REXML
+
diff --git a/vendor/rerexml/lib/rexml/rerexml.rb b/vendor/rerexml/lib/rexml/rerexml.rb
new file mode 100644
index 0000000..b05af6c
--- /dev/null
+++ b/vendor/rerexml/lib/rexml/rerexml.rb
@@ -0,0 +1,206 @@
+# TomsLib - Tom's Ruby Support Library
+# Copyright (c) 2002 Thomas Sawyer, LGPL
+#
+# REREXML, Basic REXML Modifications
+#
+# These are some basic enhancements to REXML.
+# This does NOT cause any incompatibilities with W3C Recommendations.
+
+# TomsLib is free software; you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# TomsLib is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with TomsLib; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+
+require 'rexml/document'
+
+module REXML
+
+ # Modifications to Instruction class:
+ # Adds an attributes instance variable and accessor
+ # for treating instruction content (cdata) as a list of attributes.
+ class Instruction
+
+ attr_accessor :attributes
+
+ # Additionally loads contents as a hash of attributes
+ alias :orig_initialize :initialize
+ def initialize(target, content=nil)
+ orig_initialize(target, content)
+ if @content
+ hash = {}
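+ # e.g. content 'prefix="xp" uri="http://..."' parses to {"prefix" => "xp", "uri" => "http://..."}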
+ @content.scan(/\s*([\w_]+)\s*=\s*(['"])(.*?)\2/).each{ |k,d,v| hash[k] = v }
+ @attributes = hash
+ else
+ @attributes = nil
+ end
+ end
+
+ end # Instruction
+
+ # Modifications to Namespace module:
+ # Adds absolute xpath and index methods.
+ module Namespace
+
+ # Returns the absolute xpath of the given element
+ # Does not include prefixes?
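+ # e.g. the second <b/> in <a><b/><b/></a> yields "a/b[2]"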
+ def absolute_xpath(noroot=false)
+ xp = xpath_recurse
+ xp.gsub!(/^\//,'') # remove initial path divider if present
+ if noroot
+ return xp.gsub(/^\w+\//,'') # remove first path tag (root)
+ else
+ return xp
+ end
+ end
+ # Returns the relative xpath of the given element (i.e. no indexes)
+ def relative_xpath(noroot=false)
+ xp = xpath_recurse(true)
+ xp.gsub!(/^\//,'') # remove initial path divider if present
+ if noroot
+ return xp.gsub(/^\w+\//,'') # remove first path tag (root)
+ else
+ return xp
+ end
+ end
+ def xpath_recurse(rel=false)
+ case self
+ when REXML::Document
+ xp = ""
+ when REXML::Element
+ if self.parent
+ xp = "#{self.parent.xpath_recurse}/#{self.expanded_name}"
+ xpi = self.xpath_index(self)
+ xp << "[#{xpi}]" if xpi and not rel
+ else
+ xp = self.expanded_name
+ xpi = self.xpath_index(self)
+ xp << "[#{xpi}]" if xpi and not rel
+ end
+ when REXML::Attribute
+ if self.element
+ xp = "#{self.element.xpath_recurse}/@#{self.expanded_name}"
+ else
+ xp = self.expanded_name
+ end
+ end
+ return xp
+ end
+ protected :xpath_recurse
+
+ # Returns xpath index of a given element
+ def xpath_index(element)
+ nm = element.expanded_name
+ i = 0
+ indice = 0
+ element.parent.elements.each do |el|
+ i += 1 if el.expanded_name == nm
+ indice = i if el == element
+ end
+ indice = nil if indice < 2 and i < 2
+ return indice
+ end
+
+ # Returns the xpath index total for a given element
+ def xpath_index_length(element)
+ nm = element.expanded_name
+ i = 0
+ element.parent.elements.each do |el|
+ i += 1 if el.expanded_name == nm
+ end
+ return i
+ end
+ alias xpath_index_size xpath_index_length
+
+ # !!!!!!! these two will eventually be removed
+ alias inherited_prefix prefix
+ def inherited_namespace(prefix=inherited_prefix)
+ namespace(prefix) # returns the uri namespace
+ end
+
+ end # Namespace
+
+ # Document class modification:
+ # Adds instance variables and readers for namespace and schema processing instructions.
+ # And turns namespace instructions into ATTRLIST.
+ class Document
+
+ attr_reader :namespace_instructions, :schema_instructions
+
+ alias :orig_initialize :initialize
+ def initialize(source=nil, context={})
+ orig_initialize(source, context) # first call the original initialize
+ @namespace_instructions = load_namespaces # load namespace instructions
+ @schema_instructions = load_schemas # load schema instructions
+ create_namespace_attrlist
+ end
+
+ # Loads the namespace xml processing instruction entities. (This is a non-standard notation!)
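+ # e.g. <?xml:ns prefix="example" uri="http://www.transami.net/namespace/testing" ?>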
+ def load_namespaces
+ namespace_instructions = []
+ ns_pi = self.find_all { |i| i.is_a? REXML::Instruction and i.target == 'xml:ns' }
+ ns_pi.each do |i|
+ if i.attributes.has_key?('name')
+ i.attributes['prefix'] = i.attributes['name']
+ elsif i.attributes.has_key?('prefix')
+ i.attributes['name'] = i.attributes['prefix']
+ else
+ i.attributes['name'] = i.attributes['prefix'] = ''
+ end
+ if i.attributes.has_key?('space')
+ i.attributes['uri'] = i.attributes['space']
+ elsif i.attributes.has_key?('uri')
+ i.attributes['space'] = i.attributes['uri']
+ else
+ i.attributes['space'] = i.attributes['uri'] = ''
+ end
+ namespace_instructions << i
+ end
+ return namespace_instructions
+ end
+
+ # Loads the schema xml processing instruction entities. (This is a non-standard notation!)
+ def load_schemas
+ schema_instructions = []
+ schema_pi = self.find_all { |i| i.is_a? REXML::Instruction and i.target == 'xml:schema' }
+ schema_pi.each do |i|
+ if i.attributes.has_key?('url')
+ i.attributes['source'] = i.attributes['url']
+ elsif i.attributes.has_key?('source')
+ i.attributes['url'] = i.attributes['source']
+ else
+ raise "parse error schema instruction missing required url attribute"
+ end
+ if i.attributes.has_key?('uri')
+ i.attributes['space'] = i.attributes['uri']
+ elsif i.attributes.has_key?('space')
+ i.attributes['uri'] = i.attributes['space']
+ else
+ raise "parse error schema instruction missing required type attribute"
+ end
+ schema_instructions << i
+ end
+ return schema_instructions
+ end
+
+ # Creates the document attrlist from the namespace instructions
+ def create_namespace_attrlist
+ @namespace_instructions.each do |nsi|
+ self.add_namespace(nsi.attributes['prefix'], nsi.attributes['uri'])
+ end
+ end
+
+ end # Document
+
+end
+
+
diff --git a/vendor/rerexml/work/rerexml.tgz b/vendor/rerexml/work/rerexml.tgz
new file mode 100644
index 0000000..d88d6f6
Binary files /dev/null and b/vendor/rerexml/work/rerexml.tgz differ
diff --git a/vendor/rerexml/work/rerexml/example1.xml b/vendor/rerexml/work/rerexml/example1.xml
new file mode 100644
index 0000000..aad7e95
--- /dev/null
+++ b/vendor/rerexml/work/rerexml/example1.xml
@@ -0,0 +1,25 @@
+<?xml version="1.0" encoding="ISO-8859-1" ?>
+<?xml:ns prefix="example" uri="http://www.transami.net/namespace/testing" ?>
+<?xml:schema url="example1.xps" uri="http://www.transami.net/namespace/xmlproof" ?>
+
+<example:shiporder orderid="889923">
+ <orderperson>John Smith</orderperson>
+ <shipto>
+ <name>Ola Nordmann</name>
+ <address>Langgt 23</address>
+ <city>4000 Stavanger</city>
+ <country>Norway</country>
+ </shipto>
+ <item>
+ <title>Empire Burlesque</title>
+ <note>Special Edition</note>
+ <quantity>1</quantity>
+ <price>10.90</price>
+ </item>
+ <item>
+ <title>Hide your heart</title>
+ <quantity>1</quantity>
+ <overstock>1</overstock>
+ <price>9.90</price>
+ </item>
+</example:shiporder>
diff --git a/vendor/rerexml/work/rerexml/inherits.rb b/vendor/rerexml/work/rerexml/inherits.rb
new file mode 100644
index 0000000..f64c469
--- /dev/null
+++ b/vendor/rerexml/work/rerexml/inherits.rb
@@ -0,0 +1,95 @@
+# REREXML - Inherits
+# Copyright (c) 2002 Thomas Sawyer, Ruby License
+#
+# Enhances REXML to support inherited prefixes and namespaces.
+# Also provides conversion tool.
+# This notation violates W3C Recommendations!
+
+module REXML
+
+ module Namespace
+
+ # Returns prefix if given, otherwise recurses through parents to find a prefix.
+ # NOTE: Does not take into account default prefix-less namespaces, in which case there are no inherited prefixes.
+ # This is in violation of the xml standard ;p Prefixes are not inherited, but they should be!
+ def inherited_prefix
+ pf = prefix
+ if pf.size > 0
+ return pf # has its own prefix
+ elsif self.type == REXML::Element
+ if parent
+ return parent.inherited_prefix
+ else
+ return nil # warning: element does not belong to a document (null-namespace)
+ end
+ elsif self.type == REXML::Attribute
+ if element
+ return element.inherited_prefix
+ else
+ return nil # warning: attribute does not belong to an element (null-namespace)
+ end
+ elsif self.type == REXML::Document
+ if self.namespace_instructions.length > 0
+ return self.namespace_instructions[0].attributes['prefix']
+ else
+ return nil # null-namespace
+ end
+ else
+ return nil # warning: what the hell is this then? (null-namespace)
+ end
+ end
+
+
+ # Returns the uri space from the matching namespace processing instruction based on the inherited prefix
+ def inherited_namespace #(prefix=inherited_prefix)
+ if self.type == REXML::Element
+ doc = root.parent
+ elsif self.type == REXML::Attribute
+ doc = element.root.parent
+ else
+ doc = nil
+ end
+ if doc
+ # THIS SHOULD BE CHANGED TO FIRST DOC XMLNS ATTRIBUTE, NOT INSTRUCTION
+ ns_pi = doc.namespace_instructions.find { |i| i.attributes['prefix'] == prefix }
+ return ns_pi.attributes['uri']
+ else
+ return nil
+ end
+ end
+
+ end
+
+ #
+ module NamespaceConversion
+
+ #
+ def NamespaceConversion.to_standard(xml_document)
+
+ elements = REXML::XPath.match(xml_document,'//')
+ elements.each do |element|
+ pf = element.prefix
+ if pf.empty?
+ element.name = "#{element.inherited_prefix}:#{element.name}"
+ end
+ end
+
+ xml_document.namespace_instructions.each do |nsi|
+ xml_document.root.add_namespace(nsi.attributes['prefix'], nsi.attributes['uri'])
+ nsi.remove
+ end
+
+ return xml_document
+
+ end
+
+
+ def NamespaceConversion.from_standard(xml_source)
+ # TO DO
+ end
+
+ end
+
+
+end # REXML
+
diff --git a/vendor/rerexml/work/rerexml/inheritstool.rb b/vendor/rerexml/work/rerexml/inheritstool.rb
new file mode 100644
index 0000000..89846bd
--- /dev/null
+++ b/vendor/rerexml/work/rerexml/inheritstool.rb
@@ -0,0 +1,41 @@
+# REREXML - InheritsTool
+# Copyright (c) 2002 Thomas Sawyer, Ruby License
+#
+# Command line conversion tool for inherits notation.
+
+require 'rerexml/rerexml'
+require 'rerexml/inherits'
+require 'rexml/contrib/prettyxml'
+require 'tomslib/communication'
+
+extend TomsLib::Communication
+
+if $0 == __FILE__
+
+ option = ARGV[0]
+ xml_file = ARGV[1]
+
+ if option == '-t'
+
+ xml_string = fetch_xml(xml_file)
+ xml_document = REXML::Document.new(xml_string)
+ new_document = REXML::NamespaceConversion.to_standard(xml_document)
+
+ out = ''
+ new_document.write(out, -1)
+ puts PrettyXML.pretty(out)
+
+ elsif option == '-f'
+
+ xml_string = fetch_xml(xml_file)
+ xml_document = REXML::Document.new(xml_string)
+ new_document = REXML::NamespaceConversion.from_standard(xml_document)
+
+ out = ''
+ new_document.write(out, -1)
+ puts PrettyXML.pretty(out)
+
+ end
+
+end
+
diff --git a/vendor/rerexml/work/rerexml/rerexml.rb b/vendor/rerexml/work/rerexml/rerexml.rb
new file mode 100644
index 0000000..22a3be6
--- /dev/null
+++ b/vendor/rerexml/work/rerexml/rerexml.rb
@@ -0,0 +1,148 @@
+# REREXML - Basic REXML Modifications
+#
+# These are some basic enhancements to REXML.
+# This does NOT cause any incompatibilities with W3C Recommendations.
+
+require 'rexml/document'
+
+module REXML
+
+ class Instruction
+
+ attr_accessor :attributes
+
+ # Additionally loads contents as a hash of attributes
+ alias :orig_initialize :initialize
+ def initialize(target, content=nil)
+ orig_initialize(target, content)
+ if @content
+ hash = {}
+ @content.scan(/\s*([\w_]+)\s*=\s*(['"])(.*?)\2/).each{ |k,d,v| hash[k] = v }
+ @attributes = hash
+ else
+ @attributes = nil
+ end
+ end
+
+ end # Instruction
+
+
+ module Namespace
+
+ # Returns an xpath built up from anscestor names
+ # i.e. {root tag name}/.../{this node's tag name}
+ # Does not include prefixes
+ def absolute_xpath(noroot=false)
+ if self.type == REXML::Element
+ if self.parent
+ if self.parent.name == REXML::Element::UNDEFINED # we've hit the root
+ if noroot
+ xp = ""
+ else
+ xp = name
+ end
+ else
+ xp = "#{self.parent.absolute_xpath(noroot)}/#{self.name}"
+ end
+ else
+ xp = self.name
+ end
+ elsif self.type == REXML::Attribute
+ xp = "#{self.element.absolute_xpath(noroot)}/@#{self.name}"
+ end
+ if noroot
+ return xp.gsub(/^\//,'')
+ else
+ return xp
+ end
+ end
+
+ # Returns prefix if given (used to provide option for inherits.rb)
+ def inherited_prefix
+ prefix
+ end
+
+ # Returns the uri namespace (used to provide option for inherits.rb)
+ def inherited_namespace(prefix=inherited_prefix)
+ namespace(prefix)
+ end
+
+ end # Namespace
+
+
+ class Document
+
+ attr_reader :namespace_instructions, :schema_instructions
+
+ alias :orig_initialize :initialize
+ def initialize(source=nil, context={})
+ orig_initialize(source, context) # first call the original initialize
+ @namespace_instructions = load_namespaces # load namespace instructions
+ @schema_instructions = load_schemas # load schema instructions
+ create_namespace_attrlist
+ end
+
+ # Loads the namespace xml processing instruction entities. (This is a non-standard notation!)
+ def load_namespaces
+ namespace_instructions = []
+ ns_pi = self.find_all { |i| i.is_a? REXML::Instruction and i.target == 'xml:ns' }
+ ns_pi.each do |i|
+ if i.attributes.has_key?('name')
+ i.attributes['prefix'] = i.attributes['name']
+ elsif i.attributes.has_key?('prefix')
+ i.attributes['name'] = i.attributes['prefix']
+ else
+ raise "parse error namespace instruction missing required name or prefix attribute."
+ end
+ if i.attributes.has_key?('space')
+ i.attributes['uri'] = i.attributes['space']
+ elsif i.attributes.has_key?('uri')
+ i.attributes['space'] = i.attributes['uri']
+ else
+ raise "parse error namespace instruction missing required space or uri attribute."
+ end
+ namespace_instructions << i
+ end
+ return namespace_instructions
+ end
+
+ # Loads the schema xml processing instruction entities. (This is a non-standard notation!)
+ def load_schemas
+ schema_instructions = []
+ schema_pi = self.find_all { |i| i.is_a? REXML::Instruction and i.target == 'xml:schema' }
+ schema_pi.each do |i|
+ if i.attributes.has_key?('url')
+ i.attributes['source'] = i.attributes['url']
+ elsif i.attributes.has_key?('source')
+ i.attributes['url'] = i.attributes['source']
+ else
+ raise "parse error schema instruction missing required url attribute"
+ end
+ if i.attributes.has_key?('uri')
+ i.attributes['space'] = i.attributes['uri']
+ elsif i.attributes.has_key?('space')
+ i.attributes['uri'] = i.attributes['space']
+ else
+ raise "parse error schema instruction missing required type attribute"
+ end
+ schema_instructions << i
+ end
+ return schema_instructions
+ end
+
+ #
+ def create_namespace_attrlist
+ @namespace_instructions.each do |nsi|
+ if nsi.attributes['prefix'].empty?
+ self.add_namespace("xmlns", nsi.attributes['uri'])
+ else
+ self.add_namespace("xmlns:#{nsi.attributes['prefix']}", nsi.attributes['uri'])
+ end
+ end
+ end
+
+ end
+
+end
+
+
|
rubyunworks/xmlproof
|
546f4b13df0c4972764851e67796708841fde706
|
initial import
|
diff --git a/README b/README
new file mode 100644
index 0000000..472a2f7
--- /dev/null
+++ b/README
@@ -0,0 +1,97 @@
+
+= XMLProof/Ruby - An Implementation of xml:Proof for Ruby
+
+==<a schema for the rest of us/>
+
+== Introduction
+
+XMLProof/Ruby is a 100% Ruby API for using xml:Proof, an alternate XML schema.
+It was born out of a need to typecast data taken from an XML Document.
+Imagine! All this just for that one simple need. But it seemed the
+right way to go about it, and the outcome has produced many additional fruits.
+xml:Proof has prooved to be quite a sophisticated XML schema,
+rivaling all others in capability and ease of use.
+
+== Installation
+
+To install XMLProof simply unpack the gzipped tarball into your local site_ruby path.
+This path is usually +/usr/local/lib/site_ruby/1.6/+.
+An install script is not provided as installation does not require any special files or settings.
+
+== Requirements
+
+XMLProof/Ruby requires:
+
+* TomsLib and REREXML, available at "http://www.transami.net/files/ruby/index.html".
+* REXML, available at "http://www.germane-software.com/software/rexml/".
+
+== Usage
+
+First read the documentation for xml:Proof at "http://www.transami.net/files/ruby/xmlproof/xmlproof.html".
+For greater comprehension read the xml:Proof specification at "http://www.transami.net/files/ruby/xmlproof/xmlproof-spec.html".
+
+Example of using the proofreader:
+
+ require 'xmlproof/proofreader'
+
+ def validate(xml_filename)
+
+ # create a proofreader
+ prover = XMLProof::Proofreader.new
+
+ # validate using document's internal schema instructions
+ valid = prover.proofread_document_internal(xml_filename)
+
+ # return results
+ if valid
+ puts "GOOD DOCUMENT"
+ else
+ puts "BAD DOCUMENT"
+ prover.errors.each do |e|
+ puts "namespace->#{e[0]} \t xpath->#{e[1]} \t error->#{e[2]}"
+ end
+ end
+
+ end
+
+Another using external proofsheets:
+
+ require 'xmlproof/proofreader'
+
+ def validate_with(xml_filename, *proofsheet_files)
+
+ # create a proof
+ proofsheets = XMLProof::Proofsheets.new
+ proofsheets.load_proofsheets(*proofsheet_files)
+ proof = XMLProof::Proof.new(proofsheets)
+
+ # create a proofreader using the proof
+ proofreader = XMLProof::Proofreader.new
+ proofreader.use_proof(proof)
+
+ # validate
+ valid = proofreader.proofread_document(xml_filename)
+
+ # return results
+ if valid
+ puts "GOOD DOCUMENT"
+ else
+ puts "BAD DOCUMENT"
+ proofreader.errors.each do |e|
+ puts "namespace->#{e[0]} \t xpath->#{e[1]} \t error->#{e[2]}"
+ end
+ end
+
+ end
+
+Until a more detailed tutorial can be written,
+please refer to prooftool.rb for more concise examples,
+and refer to the API RDocs for further details.
+
+== Authentication
+
+Package:: XMLProof/Ruby
+Author:: Thomas Sawyer
+Requires:: Ruby 1.6.5+
+License:: Copyright (c) 2002 Thomas Sawyer, transami@transami.net under the Ruby License
+
diff --git a/bin/prooftool.rb b/bin/prooftool.rb
new file mode 100755
index 0000000..978beb6
--- /dev/null
+++ b/bin/prooftool.rb
@@ -0,0 +1,82 @@
+# XMLProof/Ruby - Proof Tool for the Command Line
+# <a schema for the rest of us/>
+# Copyright (c) 2002 Thomas Sawyer, Ruby License
+#
+# This is a command line tool for XMLProver's Proofreader
+
+require 'xmltoolkit/xmlproof/about'
+require 'xmltoolkit/xmlproof/proofreader'
+
+
+if $0 == __FILE__
+
+ validargs = true
+
+ case ARGV.length
+ when 0
+ validargs = false
+ when 1
+ validargs = true
+ xml = ARGV[0]
+ xps = nil
+ else
+ validargs = true
+ xml = ARGV[0]
+ xps = ARGV[1..-1]
+ end
+
+
+ if not validargs
+
+ puts
+ puts "#{XMLProof::Package} - Proof Tool"
+ puts
+ puts "USAGE: #{$0} xml-document [proofsheet1 proofsheet2 ...]"
+ puts
+ puts " e.g. #{$0} example.xml"
+ puts " -or- #{$0} example.xml example.xps"
+ puts
+
+ else
+
+ if xps
+
+ # create proof
+ proof = XMLProof::Proof.new(*xps)
+
+ # validate
+ proofreader = XMLProof::Proofreader.new(proof)
+ valid = proofreader.proofread_document(xml)
+
+ # return results
+ if valid
+ puts "GOOD DOCUMENT"
+ else
+ puts "BAD DOCUMENT"
+ proofreader.errors.each do |e|
+ puts "namespace->#{e[0]} \t xpath->#{e[1]} \t error->#{e[2]}"
+ end
+ end
+
+ else
+
+ # validate against internal schema instructions
+ proofreader = XMLProof::Proofreader.new
+ valid = proofreader.proofread_document_internal(xml)
+
+ # return results
+ if valid
+ puts "GOOD DOCUMENT"
+ else
+ puts "BAD DOCUMENT"
+ proofreader.errors.each do |e|
+ puts "namespace->#{e[0]} \t xpath->#{e[1]} \t error->#{e[2]}"
+ end
+ end
+
+ end
+
+ end
+
+end
+
diff --git a/demo/example1.xml b/demo/example1.xml
new file mode 100644
index 0000000..28cb538
--- /dev/null
+++ b/demo/example1.xml
@@ -0,0 +1,28 @@
+<?xml version="1.0" encoding="ISO-8859-1" ?>
+<?xml:ns prefix="" uri="http://www.transami.net/namespace/testing" ?>
+<?xml:schema url="example1.xps" uri="http://www.transami.net/namespace/xmlproof" ?>
+
+<shiporder orderid="889923">
+ <orderperson>John Smith</orderperson>
+ <note>n</note>
+ <note>n</note>
+ <shipto>
+ <note>n</note>
+ <name>Ola Nordmann</name>
+ <address>Langgt 23</address>
+ <city>4000 Stavanger</city>
+ <country>Norway</country>
+ </shipto>
+ <item>
+ <title>Empire Burlesque</title>
+ <quantity>1</quantity>
+ <overstock>1</overstock>
+ <price>10.90</price>
+ </item>
+ <item>
+ <title>Hide your heart</title>
+ <quantity>1</quantity>
+ <overstock>1</overstock>
+ <price>9.90</price>
+ </item>
+</shiporder>
diff --git a/demo/example1.xps b/demo/example1.xps
new file mode 100755
index 0000000..01da59f
--- /dev/null
+++ b/demo/example1.xps
@@ -0,0 +1,23 @@
+<?xml version="1.0" encoding="ISO-8859-1" ?>
+<?xml:ns name="" space="http://www.transami.net/namespace/testing" ?>
+<?xml:ns name="xp" space="http://www.transami.net/namespace/xmlproof" ?>
+
+<xp:proofsheet>
+ <xp:arbitrary xp:xpath="//note">/^n.*/ :string: </xp:arbitrary>
+ <shiporder>
+ <orderperson> :text: ?bywho? </orderperson>
+ <orderclerk> :text: ?bywho? </orderclerk>
+ <shipto> #1..1# @tag@
+ <name> :text: </name>
+ <address> :text: </address>
+ <city> :text: </city>
+ <country> :text: </country>
+ </shipto>
+ <item> #1..*# +inclusive+
+ <title> :text: #1..1# </title>
+ <quantity> =use_again= :int: </quantity>
+ <overstock> =use_again= </overstock>
+ <price> :float: </price>
+ </item>
+ </shiporder>
+</xp:proofsheet>
diff --git a/demo/example2.xml b/demo/example2.xml
new file mode 100644
index 0000000..5bbf328
--- /dev/null
+++ b/demo/example2.xml
@@ -0,0 +1,15 @@
+<?xml version="1.0" encoding="ISO-8859-1" ?>
+<?xml:ns name="" uri="" ?>
+<?xml:schema url="example2.xps" uri="http://www.transami.net/namespace/xmlproof" ?>
+
+<contact>
+ <name>John Smith</name>
+ <address type="home">
+ <address>112 Testing St.</address>
+ <city>New York</city>
+ <state>NY</state>
+ <country>USA</country>
+ </address>
+ <phone type="home">505-555-5555</phone>
+ <phone type="mobile">505-555-1212</phone>
+</contact>
diff --git a/demo/example2.xps b/demo/example2.xps
new file mode 100755
index 0000000..7ab8c1f
--- /dev/null
+++ b/demo/example2.xps
@@ -0,0 +1,16 @@
+<?xml version="1.0" encoding="ISO-8859-1" ?>
+<?xml:ns name="xp" space="http://www.transami.net/namespaces/xmlproof" ?>
+<?xml:ns name="opt" space="" ?>
+
+<xp:proofsheet>
+ <opt:contact>
+ <opt:name>:text:</opt:name>
+ <opt:address opt:type=":text:"> :text: #1..*# @tag@
+ <opt:street>:text:</opt:street>
+ <opt:city>:text:</opt:city>
+ <opt:state>:text:</opt:state>
+ <opt:country>:text:</opt:country>
+ </opt:address>
+ <opt:phone opt:type=":text:"> :text: #1..*# /\d{3,3}-\d{3,3}-\d{4,4}/ </opt:phone>
+ </opt:contact>
+</xp:proofsheet>
diff --git a/doc/index.html b/doc/index.html
new file mode 100644
index 0000000..b39e558
--- /dev/null
+++ b/doc/index.html
@@ -0,0 +1,109 @@
+
+<html>
+
+<head>
+ <title>XML TOOLS for Ruby</title>
+
+ <style>
+
+ body { margin: 40px;
+ font-family: sans-serif;
+ font-size: 11pt;
+ line-height: 16pt;
+ }
+
+ h1 { color: blue;
+ font-size: 36pt;
+ }
+
+ div#menu { float: left;
+ background: #DDDDDD;
+ margin-right: 20px;
+ margin-bottom: 20px;
+ padding-left: 20px;
+ padding-right: 20px;
+ border: 1px solid black;
+ text-align: center;
+ }
+
+ table#chart {
+ margin-left: 15px;
+ }
+
+ table#chart th {
+ text-align: left;
+ padding: 5px;
+ background: #DDDDDD;
+ }
+
+ table#chart td {
+ font-family: fixed;
+ font-size: 8pt;
+ padding: 5px;
+ }
+
+ table#list {
+ margin-left: 20px;
+ font-size: 11pt;
+ }
+
+ p { text-align: justify; }
+
+ </style>
+
+</head>
+
+<body>
+
+ <h1 style="test-align: center;">XML TOOLS for Ruby</h1>
+
+ <hr/><br/>
+
+ <div id="menu">
+ <h2>DOCS</h2>
+ <a href="xmlproof/xmlproof.html">XML Proof</a> <br/><br/>
+ <a href="xmlproof/xmlproof-spec.html">XML Proof Spec</a> <br/><br/>
+ <h2>API DOCS</h2>
+ <a href="libxml/rdoc/index.html">LibXML</a> <br/><br/>
+ <a href="libxslt/rdoc/index.html">LibXSLT</a> <br/><br/>
+ <a href="xmlproof/rdoc/index.html">XML Proof</a> <br/><br/>
+ <a href="xmltailor/rdoc/index.html">XML Tailor</a> <br/><br/>
+ </div>
+
+ <b>XML Tools</b> is a collection of scripts for working with XML via Ruby.
+
+ <p>XML Tools features C-based bindings to the GNU <b>libxml2</b> and <b>libxslt</b> libraries.
+ These are blazing fast parsers. Performance is very impressive compared to rexml
+ <i>(see table to right).</i> If speed is your need, these are good libraries
+ to consider.</p>
+
+ <div style="float: right; background: #EEEEEE; margin-left: 20px; padding: 5px;">
+ <b>Speed Comparison libxml vs. rexml</b>
+ <table id="chart" border="1">
+ <tr><td><i>in seconds</i></td><th>libxml </th><th>rexml </th></tr>
+ <tr><th>opening </th><td>0.003954</td><td>0.104750</td></tr>
+ <tr><th>attribute_add </th><td>0.001895</td><td>0.011114</td></tr>
+ <tr><th>subelems </th><td>0.000585</td><td>0.004729</td></tr>
+ <tr><th>xpath </th><td>0.013269</td><td>2.981499</td></tr>
+ </table>
+ </div>
+
+ <p>XML-Tools also includes pure-Ruby libraries suitable to a variety of use cases:</p>
+
+ <table id="list">
+ <tr><td> <b>XML Proof</b> - Unique XML schema language
+ <tr><td> <b>XML Tailor</b> - XML-valid Tals template system
+ <tr><td> <b>TeXML</b> - XML to TeX translation tool
+ <tr><td> <b>TkXML</b> - XML to Tk translation tool for building Tk GUIs via markup
+ <tr><td> <b>RTals</b> - Another Tals template system (like Zope's)
+ <tr><td> <b>Sqlix</b> - SQL to XML generator
+ </table>
+
+ <p>Each library is in various stages of
+ development. The libxml and libxslt bindings work fairly well, but still need improvements.
+ Some of the other libs currently do not function at all, as they are in need of updating.
+ Everyone is encouraged to contribute.</p>
+
+</body>
+
+</html>
diff --git a/doc/xmlproof-spec.html b/doc/xmlproof-spec.html
new file mode 100644
index 0000000..59b3e79
--- /dev/null
+++ b/doc/xmlproof-spec.html
@@ -0,0 +1,474 @@
+<html>
+
+<head>
+
+ <title>XML:Proof Specifications</title>
+
+ <style>
+
+ span.n { font-size: 8pt; font-family: helvetica; font-weight: bold }
+
+ p { font-size: 10pt; font-family: arial }
+
+ h1 { font-family: arial }
+
+ h2 { font-family: arial }
+
+ h3 { font-family: arial }
+
+ h4 { font-family: arial }
+
+ </style>
+
+</head>
+
+<body>
+
+<table width="100%" cellspacing="0" cellpadding="0">
+<tr>
+ <td>
+ <font size="10">xml:Proof</font><br/>
+ <font size="4"><a schema for the rest of us/></font>
+ </td>
+ <td align="right">
+ <font size="2">v.02.06.10 Beta</font><br/>
+ <font size="2"> Thomas Sawyer (c)2002</font>
+ </td>
+</tr>
+</table>
+
+<br/>
+<br/>
+
+<center>
+<h1>Specification</h1>
+</center>
+
+<ol>
+
+<li><b>Prologue</b>
+
+ <ol>
+
+ <li>General comprehension of the W3C XML, Namespace, and XPath Recommendations,
+ and the Regular Expression Specification (see 10.1) is presumed by this document.</li>
+
+ <li><i>xml:Proof</i> is an XML schema. It was designed to be easy to use
+ and to cover a vast portion of the XML schematic problem set.</li>
+
+ <li>A <i>proofsheet</i> is a valid XML document conforming to the xml:Proof specification.</li>
+
+ <li>A <i>target document</i> is an XML document to which a proofsheet is intended to be applied.</li>
+
+ <li>A <i>proof</i> is a parsed ordered set of proofsheets used to validate a target document.</li>
+
+ <li>A <i>proof-processor</i> is a program able to parse proofsheets and validate XML documents against such proofsheets.
+ The term <i>processor</i>, when unqualified, shall refer to this special case, proof-processor, in contrast to
+ the more general case, XML processor, throughout this document.</li>
+
+ <li>A <i>symbol</i> or <i>symbolic name</i> is a string of characters, matching against the regular expression /\w*/.</li>
+
+ <li>For the purposes of this specification, a <i>tag</i> will be the symbolic name of an XML element or attribute.
+ Element tags will be notated as <code><<i>tagname</i>></code> and attribute tags will be notated as <code><i>tagname</i>=</code></li>
+
+ </ol>
+
+</li>
+
+
+<li><b>Special Tags</b>
+
+ <ol>
+
+ <li><i>Special tags</i> are proofsheet tags defined by the xml:Proof specification, in contrast to
+ <i>general tags</i> which instead derive from a target document.</li>
+
+ <li>The special root tag of a proofsheet is <code><proofsheet></code>.
+ The root tag can take the alternate form of <code><schema></code>.
+ Both forms of the root tag serve the exact same purpose.</li>
+
+ <li>The <code><arbit></code> tag is a special xml:Proof tag used to indicate arbitrary
+ location within the target document. It has a single valid attribute, <code>xpath=</code>,
+ which specifies the valid XPath to be matched against in the target document.</li>
+
+ <li>Both the root tag and the arbit tag, and its xpath attribute tag, must be prefixed in reference to the
+ xml:Proof namespace (3.5). While any arbitrary, but valid, prefix can be used to accomplish this,
+ it is recommended that you use <code>xp:</code> for consistency and clarity.</li>
+
+ <li>All the general tags in a proofsheet are the same as those of the target document
+ it intends to model. The hierarchy of those elements is also the same.</li>
+
+ </ol>
+
+</li>
+
+
+<li><b>Die</b>
+
+ <ol>
+
+ <li>A <i>Die</i> is a syntactical construction which defines constraints on a target document.</li>
+
+ <li>The sole text node of any proofsheet element and the value of any proofsheet attribute,
+ with the exception of the special <code>xpath=</code> attribute, is a <i>die</i>.</li>
+
+ <li>A die may also be referred to as a <i>cast</i> and the act of writing or applying them, <i>casting</i>.</li>
+
+ <li>A die consists of an unordered list of <i>markers</i> separated by whitespace.</li>
+
+ </ol>
+
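+ <p>For example, the die in <code><quantity> =use_again= :int: #1..1# </quantity></code> combines
+ a name marker, a datatype marker, and a range marker; this composite is illustrative
+ (adapted from the demo proofsheets) and the markers may appear in any order.</p>
+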
+</li>
+
+<li><b>Markers</b>
+
+ <ol>
+
+ <li><b>Name Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>=<i>name</i>=</b></code></li>
+
+ <li>A <i>name marker</i> is a symbol, enclosed by equal signs, which identifies the die
+ such that it can be reused elsewhere in the proofsheet.</li>
+
+ <li>Name markers provide a convenient means of die reuse.</li>
+
+ </ol>
+
+ </li>
+
+ <li><b>Regular Expression Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>/<i>regular expression</i>/</b></code></li>
+
+ <li>A <i>regular expression marker</i> is a syntactical structure conforming to the Regular Expression specification.
+ (see 10.1.4)</li>
+
+ <li>A <i>regular expression marker</i> dictates that the content of an element or attribute of the target document
+ must match against it.</li>
+
+ <li>If no regular expression marker is present in a die, the die's regular expression effectively
+ defaults to <code>/.*/</code></li>
+
+ </ol>
+
+ </li>
+
+ <li><b>Datatype Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>:<i>datatype</i>:</b></code></li>
+
+ <li>The <code><i>datatype marker</i></code> is an arbitrary symbol, enclosed by colons, naming the type of data to be contained
+ by an element or attribute of the target document.</li>
+
+ <li>The xml:Proof specification does not dictate the selection of datatypes; this task is instead relinquished to the processor.</li>
+
+ <li>A <i>datatype marker</i> dictates that the content of an element or attribute of the target document
+ must conform to it.</li>
+
+ <li>Datatype markers allow an xml:Proof processor to typecast XML content into its underlying language of implementation.</li>
+
+ <li>A sufficient xml:Proof processor should provide a means to add and alter its internal datatypes.</li>
+
+ <li>Any datatype not recognized by the xml:Proof processor shall be considered a <code>string</code>.</li>
+
+ </ol>
+
+ </li>
+
+ <li><b>Order Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>@<i>order</i>@</b></code></li>
+
+ <li>The <i>order marker</i> is a symbol enclosed in at-signs, which specifies the sort order of an element's child elements.</li>
+
+ <li>Valid values for <code><i>order</i></code> are <code>tag</code>, <code>content-a..z</code>,
+ <code>content-z..a</code> and <code>none</code>.</li>
+
+ <li>The <code>tag</code> value specifies that the child elements must be in the order as given within
+ the proofsheet.</li>
+
+ <li>The <code>content-a..z</code> and <code>content-z..a</code> values specify that the child elements
+ must appear in alphanumerical sequence, ascending and descending, respectively, by their first text node.</li>
+
+ <li>The <code>none</code> value specifies that the child elements need not appear in any particular order, and is the
+ default setting if no order marker is specified within a die.</li>
+
+ <li>The order marker does not specify that each of the child elements must occur,
+ or that one and only one of each said children must appear. It only specifies that,
+ should they appear, they do so in the given order.</li>
+
+ <li>The order marker is only applicable to an element, not an attribute, and the element must have child elements.</li>
+
+ </ol>
+
+ </li>
+
+ <li><b>Set Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>+<i>set</i>+</b></code></li>
+
+ <li>The <i>set marker</i> is a symbol, enclosed in addition signs, which specifies the ... of an element's child elements.</li>
+
+ <li>Valid values for <code><i>set</i></code> are <code>inclusive</code>, <code>exclusive</code> and
+ <code>none</code>.</li>
+
+ <li>The <code>inclusive</code> value indicates that all the children elements must be present as given by the proofsheet,
+ but other elements may appear along with them.</li>
+
+ <li>The <code>exclusive</code> value indicates that all the children elements must be present as given by the proofsheet,
+ and that no other elements may appear along with them.</li>
+
+ <li>The value <code>none</code> indicates no requirements for the appearance of child elements, and is the default
+ if no set marker is specified in the die.</li>
+
+ </ol>
+
+ </li>
+
+ <li><b>Range Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>#<i>range</i>#</b></code></li>
+
+    <li>The <i>range marker</i> is a symbol, enclosed by pound signs, which specifies the minimum and maximum number of a given
+ element or attribute that may appear within the target document.</li>
+
+    <li>For elements, a valid <code><i>range</i></code> can be <code>m..n</code> or <code>m...n</code>,
+    inclusive and exclusive of <code>n</code>, respectively, where <code>m</code> and <code>n</code> are unsigned integers
+    and <code>m</code> &lt;= <code>n</code>, such that <code>m</code> is the minimum number and <code>n</code> is the maximum number.</li>
+
+    <li>An element may also take a range marker of the form <code>m..*</code>, equivalent to <code>m...*</code>,
+    specifying a minimum number (<code>m</code>) and an unbounded maximum number.</li>
+
+ <li>The default range marker for an element, if none is specified within the die, is <code>0..*</code>.</li>
+
+ <li>For attributes, a valid <code><i>range</i></code> can only be <code>0..1</code> or <code>1..1</code>.</li>
+
+ <li>The default range marker for an attribute, if none is specified within the die, is <code>0..1</code>.</li>
+
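+    <li>(non-normative example) a die containing <code>#1..2#</code> requires the element to appear
+    at least once and at most twice.</li>
+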
+ </ol>
+
+ </li>
+
+ <li><b>Option Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>?<i>option</i>?</b></code></li>
+
+    <li>The <i>option marker</i> is an arbitrary symbol, or unordered list of symbols separated by commas, enclosed by question marks,
+    which specifies that the element or attribute belongs to a group of similarly marked elements and attributes,
+    such that one and only one of such elements or attributes may appear within the target document.</li>
+
+    <li>Elements and/or attributes partaking of an identical <code><i>option</i></code> do not need to belong to the same parent, although
+    this can create a contradiction should an ancestor and one of its children partake of the same option group,
+    rendering a document invalid by definition.</li>
+
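+    <li>(non-normative example) two elements each cast with <code>?bywho?</code> form an option group
+    of which one and only one may appear in the target document.</li>
+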
+ </ol>
+
+ </li>
+
+ <li><b>Collection Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>!<i>collection</i>!</b></code></li>
+
+    <li>A <i>collection marker</i> is an arbitrary symbol, enclosed by exclamation marks, which specifies that
+    the element or attribute belongs to a group of similarly marked elements and attributes,
+    such that all of the elements and/or attributes sharing the same collection marker
+    must appear together within the target document.</li>
+
+ <li>Any given element or attribute can only belong to a single collection group.</li>
+
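+    <li>(non-normative example) elements each cast with a hypothetical <code>!address!</code> marker must all
+    appear together in the target document, or the document is invalid.</li>
+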
+ </ol>
+
+ </li>
+
+ <li><b>Track Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>*<i>track</i>*</b></code></li>
+
+ <li>The <i>track marker</i>, which is a boolean symbol enclosed by asterisks, is a special marker
+    which does not dictate structure or content. Rather, it has a special purpose for XML datastores,
+ specifying that the element or attribute should be specifically indexed.</li>
+
+ <li>Valid boolean symbols for <code><i>track</i></code> are <code>yes</code>, <code>no</code>, <code>true</code>,
+ or <code>false</code>, with the negative notations being the default.</li>
+
+    <li>The tracking of particular XML elements in a datastore allows for fast search and retrieval,
+    and fast aggregate functions to be applied to their values.</li>
+
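+    <li>(non-normative example) a die containing <code>*yes*</code> requests that a datastore index
+    the corresponding element or attribute.</li>
+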
+ </ol>
+
+ </li>
+
+ </ol>
+
+</li>
+
+
+<li><b>File Extension and Namespace</b>
+
+ <ol>
+
+ <li>The file extension for a proofsheet is <code>.xps</code>.</li>
+
+ <li>xml:Proof is fully namespace aware, both in functionality and in application to an XML Document.
+ Since namespace prefixes serve as mere proxies to actual namespaces, any arbitrary prefix can be used,
+ but the namespace itself, i.e. the uri, must be unique and persistent.</li>
+
+ <li>The <i>xml:Proof namespace</i> shall be <code>http://www.transami.net/namespace/xmlproof</code>.</li>
+
+ <li>Within a proofsheet, the namespace of all of xml:Proof's special elements and attributes must
+ belong to the xml:Proof namespace.</li>
+
+ <li>Within a proofsheet, all general xml:Proof elements and attributes must partake of the
+ same namespace as their counterparts within the target document.</li>
+
+ </ol>
+
+</li>
+
+
+<li><b>Schema Declarations</b>
+
+ <ol>
+
+ <li>A proof-processor will recognize <i>schema declarations</i> made via XML processing instructions
+ within the target document.</li>
+
+ <li>(Syntax) <code><?xml:schema uri="<i>uri</i>" url="<i>url</i>" segment="<i>segment</i>"?></code></li>
+
+ <li>The <code>uri</code> attribute, or its synonym <code>space</code>, defines the kind of schema that is being utilized.
+ This is the specific namespace uri as defined by the schema's designers. In the case of xml:Proof, it
+ is "http://www.transami.net/namespace/xmlproof". It would be another string for, say, RELAX-NG or Schematron.</li>
+
+    <li>The <code>url</code> attribute, or its synonym <code>source</code>, is a path to
+    the .xps file. The url can be a local path. The url is necessary since proofsheets cannot be embedded in the
+    target document like DTDs can.</li>
+
+    <li>The <code>segment</code> attribute, or its synonym <code>fragment</code>, is an optional attribute
+    specifying an XPath which selects only a portion of the .xps file to use as the proofsheet.</li>
+
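+    <li>(non-normative example) <code><?xml:schema uri="http://www.transami.net/namespace/xmlproof" url="example1.xps"?></code></li>
+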
+    <li>Interestingly, more than one schema can be declared within a given target document.
+    In so doing, schema declarations appearing earlier within the document have precedence
+    over those appearing later. This allows for a means of cast overriding.</li>
+
+    <li>Note that this schema declaration notation violates one minor W3C recommendation:
+    the reserved use of an instruction name matching <code>/^xml/i</code>.</li>
+
+ </ol>
+
+</li>
+
+
+<li><b>Namespace Declarations</b>
+
+ <ol>
+
+ <li>This xml:Proof specification offers a variant notation for namespace declarations, differing
+ from the W3C recommendation. The W3C's recommendation is here considered somewhat nebulous and clumsy,
+ and further, clutters and obscures the information of relevance within an XML document.</li>
+
+ <li>A proof-processor will recognize <i>namespace declarations</i> made via XML processing instructions
+ within the target document.</li>
+
+    <li>(Syntax) <code><?xml:ns prefix="<i>prefix</i>" uri="<i>uri</i>"?></code></li>
+
+ <li>The <code>prefix</code> and <code>uri</code> attribute tags can also be labeled
+ <code>name</code> and <code>space</code>, respectively.</li>
+
+    <li>This specification recommends the use of namespace declarations via document-level processing instructions,
+    instead of within general element tags as recommended by the W3C.</li>
+
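+    <li>(non-normative example) <code><?xml:ns prefix="example" uri="http://www.transami.net/namespace/testing"?></code>
+    binds the prefix <code>example</code> to the namespace <code>http://www.transami.net/namespace/testing</code>.</li>
+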
+    <li>This notation can coexist with the standard notation because, in effect, all the namespace processing
+    instruction specifies is the insertion of a document-level ATTLIST for the namespaces thus defined.</li>
+
+ <pre>
+    <!DOCTYPE <i>docname</i> [
+      <!ATTLIST <i>docname</i> xmlns:<i>prefix</i> CDATA #FIXED '<i>uri</i>'>
+ ]>
+ </pre>
+
+ <li>Obviously, many XML processors do not support this processing instruction. It is hoped that they will
+ adopt this improved notation over time as it is a very simple and useful addition.</li>
+
+ <li>A proof-processor will provide the means to convert between this notation and the standard notation.</li>
+
+ </ol>
+
+</li>
+
+<li><b>Functionality</b>
+
+ <ol>
+
+    <li>A proof-processor validates a target document by matching namespaces and XPaths between the proofsheet
+    and the target document, such that all target document elements and attributes are validated
+    against their corresponding proofsheet dies.</li>
+
+    <li>Any possible absolute XPath within a proofsheet should only be accounted for once.
+    If this is not adhered to, it is not likely to cause an error; the proof-processor should only match against
+    the first occurrence of an absolute die within the proofsheet.</li>
+
+    <li>The special <arbit> element overlaps in application with the general elements and attributes.
+    In other words, a target document's element or attribute must conform to both an arbitrary die and a general die
+    should both be applicable.</li>
+
+    <li>The special <arbit> element overlaps in application with other arbitrary assignments.
+    In other words, a target document's element or attribute must conform to all applicable arbitrary dies.</li>
+
+ </ol>
+
+</li>
+
+
+<li><b>Appendix</b>
+
+ <ol>
+
+ <li><b>References</b>
+
+ <ol>
+
+ <li><a href="http://www.w3.org/TR/REC-xml">W3C XML Recommendation</a></li>
+
+ <li><a href="http://www.w3.org/TR/REC-xml-names/">W3C Namespacee Recommendation</a></li>
+
+ <li><a href="http://www.w3.org/TR/xpath">W3C XPath Recommendation</a></li>
+
+ <li><a href="http://www.opengroup.org/onlinepubs/007908799/xbd/re.html">Regular Expressions Specification </a></li>
+
+ </ol>
+
+ </li>
+
+ </ol>
+
+</li>
+
+</ol>
+
+<br/>
+<br/>
+<br/>
+
+</body>
+
+</html>
+
diff --git a/doc/xmlproof.html b/doc/xmlproof.html
new file mode 100644
index 0000000..33e5fe9
--- /dev/null
+++ b/doc/xmlproof.html
@@ -0,0 +1,501 @@
+<html>
+
+<head>
+
+ <title>XML:Proof Documentation</title>
+
+ <style>
+
+ p { font-family: arial }
+
+ h1 { font-family: arial }
+
+ h2 { font-family: arial }
+
+ h3 { font-family: arial }
+
+ h4 { font-family: arial }
+
+ </style>
+
+</head>
+
+<body>
+
+<table width="100%" cellspacing="0" cellpadding="0">
+<tr>
+ <td>
+ <font size="10">xml:Proof</font><br/>
+ <font size="4"><a schema for the rest of us/></font>
+ </td>
+ <td align="right">
+ <font size="2">v.02.06.10 Beta</font><br/>
+ <font size="2"> Thomas Sawyer (c)2002</font>
+ </td>
+</tr>
+</table>
+
+<br/>
+
+<h1>Introduction</h1>
+<p>A standard, extensible, and portable data language is extremely important to the IT community; thus
+the importance of XML technology, and its oft mention, as it has become the de facto standard in this regard.
+Yet it is widely held that XML is a bulky, less than optimal implementation of such a standard. Fortunately
+there are ways in which the community itself can go about improving XML. xml:Proof is, in part, such an improvement.</p>
+
+<p>XML, in and of itself, is simply a general data/metadata format --a way to organize data such that both the content and
+description of that content are bound together. But in itself it does not dictate the validity of that data.
+To patch this "hole" in XML, DTD or the Document Type Definition was made part of the XML specification.
+DTD has advantages. It is actually broader in applicability as its syntax is not XML, but a superset, SGML.
+Yet this is also its disadvantage. The optimal solution would use XML itself as the base syntax,
+so that the same tools can be utilized for both the data/metadata markup and the validity markup.
+This is where schemas come into play. Schemas are XML document validity definitions, just as DTDs are,
+but they keep to the boundaries of XML itself, i.e. schemas are marked-up with XML.</p>
+
+<p>There are a number of schemas already available for XML, like TREX, RELAX, RELAX-NG, and Schematron.
+Officially the W3C has offered up their own XML-Schema. Should you place examples of all of these schemas side-by-side,
+along with an example of xml:Proof, xml:Proof will immediately distinguish itself from the rest.
+This is due to the fact that xml:Proof, unlike the others, actually utilizes the very tag names it intends to formalize,
+rather than invent a whole new set of its own. In fact xml:Proof has only two specially defined elements, the root tag
+and the arbit tag. As you can imagine, this makes xml:Proof mark-up rather trivial to read and write.
+Additionally, xml:Proof manages to do so with so few specialty tags and attributes because it utilizes an existing standard
+technology to do much of its dirty work, that is, Regular Expressions. Regular Expressions are well battle-tested
+in the field, and there is little good reason to reinvent the wheel. Regular Expressions are a schema, using a
+broader sense of the word, in their own right, applicable to strings of text. As there are plenty of strings of
+text in XML documents, it isn't too hard to see how this might be useful. xml:Proof intends usage of
+Regular Expressions insofar as is applicable in the context of XML. Utilizing this well-known pre-existing technology,
+among its other features, xml:Proof is able to offer a unique and powerful schema to the XML community.</p>
+
+
+<br/>
+<h1>Overview</h1>
+
+<h2>File Extension and Namespace</h2>
+<p>Personally I hate file extensions. Why file systems do not include a place for this description as they do for
+the file name and last modified date is beyond me. I tend to blame MS-DOS. Oh well.
+The extension for xml:Proof proofsheets, as they are called, is <code>.xps</code>.</p>
+
+<p>xml:Proof is fully namespace aware, both in functionality and in application to an XML Document.
+This requires further explanation. Namespace prefixes serve as mere proxies to actual namespaces.
+So while any arbitrary prefix can be used, a namespace itself, i.e. the uri, must be unique and persistent.
+The namespace uri for xml:Proof is <code>http://www.transami.net/namespace/xmlproof</code>.
+This namespace must be used on all of xml:Proof's special tags in order for any xml:Proof processor to function.
+Further, when creating xml:Proof proofsheets, the namespaces of the elements and attributes being described must also be
+taken into consideration with regards to the target XML document's. The elements and attributes of the XML document,
+in other words, must partake of the same namespaces as their counterparts within the proofsheet.
+This will become clearer as you read the rest of this document.
+</p>
+
+
+<br/>
+<h2>Root and Arbit Tags</h2>
+<p>There are only two special tags in xml:Proof.</p>
+
+<p>The first is the <code><proofsheet></code> tag. It is the root element of any xml:Proof schema document,
+i.e. the proofsheet. The special root tag can take the alternate form of <code><schema></code>.
+Both serve the same purpose.</p>
+
+<p>The second special tag is the <code><arbit></code> tag. This tag is used to indicate an arbitrary
+location in the XML document. It has a single valid attribute, <code>xpath</code>, which specifies the
+the matching XML document nodes to which its <i>die</i> corresponds (see below).
+</p>
+
+<p>Both of these special element tags and the special attribute should always be prefixed with
+reference to the xml:Proof namespace. While any arbitrary, but valid, prefix will do,
+it is recommended that you use <code>xp:</code> for consistency and clarity.</p>
+
+
+<br/>
+<h2>The Die is Cast</h2>
+<p>With the exception of the special tags, all other tag and attribute names of an xml:Proof proofsheet
+are the same as those of the target XML documents it intends to model. The hierarchy of those elements
+is also the same. Thus the proofsheet is nearly as readable as any applicable target document.
+The text, or content, of elements and attributes is, in xml:Proof nomenclature, called a <i>die</i>.
+It may also be referred to as a <i>cast</i>, and the act of writing or applying them, <i>casting</i>.
+A die consists of the following optional <i>markers</i> separated by spaces:
+
+<ul>
+
+ <li><code><b>=<i>name</i>=</b></code>
+    <p>The <code><i>name</i></code> is an identifier which names the die
+    such that it can be reused later in the proofsheet. This
+    provides a convenient means of die reuse. An element or attribute having
+    only this marker and no other will gain its die characteristics from any other
+    identically named die which has other markers within its die.</p>
+ </li>
+
+ <li><code><b>/<i>regular expression</i>/</b></code>
+    <p>The <code><i>regular expression</i></code> marker dictates that the content
+    of an element or attribute must match against it to be considered valid.
+    The regular expression of a die effectively defaults to <code>.*</code> if excluded.</p>
+    </li>
+
+ <li><code><b>:<i>datatype</i>:</b></code>
+    <p>The <code><i>datatype</i></code> name is actually arbitrary, and can be anything desired.
+    xml:Proof itself doesn't care, but the utilization of an xml:Proof processor will!
+    Any given xml:Proof processor will generally "understand" the majority of common datatypes
+    and thus is able to typecast XML content into its underlying language of implementation.
+    Such is the main intent of datatype names, in addition to validating content in similar fashion
+    to regular expressions. A good xml:Proof processor should also provide a means to add and alter
+    its internally recognized datatypes. Any datatype it does not recognize will be treated as
+    <code>CDATA</code>, otherwise known as <code>string</code> or <code>text</code>.</p>
+ </li>
+
+ <li><code><b>@<i>order</i>@</b></code>
+    <p>The value of order may be <code>tag</code>, <code>content-a..z</code>, <code>content-z..a</code>, or <code>none</code>.
+    If <code>tag</code> then all child elements of the casted element must appear in sequence as given within the proofsheet.
+    If <code>content-a..z</code> or <code>content-z..a</code>, then the content of all child elements of the casted element
+    must appear in alphanumerical order, ascending or descending, respectively. The value <code>none</code> specifies that
+    no specific sort order is required and is the default if the marker is not given within the die.
+    Keep in mind this marker does not specify that each of the child elements must occur or that
+    one and only one of said children may appear. Rather, it only specifies that, should they appear,
+    they do so in the given order. An element thus cast must have child elements.
+    This marker is not applicable to attributes and will be ignored if used thus.</p>
+ </li>
+
+ <li><code><b>+<i>closure</i>+</b></code>
+ <p>The value of closure can be <code>inclusive</code>, <code>exclusive</code>, or <code>none</code>.
+ Inclusivity means that all child elements of the cast element must appear as given in the proofsheet,
+    but other elements may appear as their siblings. Exclusivity means that all child elements of the cast element
+    must appear as given in the proofsheet and that no other elements may appear as their siblings.
+    If this marker is not present within the die, the default value of <code>none</code> is assumed, which
+    relinquishes any necessary closure on an element's child elements. An element thus cast as
+    <code>inclusive</code> or <code>exclusive</code> must have child elements.
+    This marker is not applicable to attributes and will be ignored if used thus.</p>
+ </li>
+
+ <li><code><b>#<i>range</i>#</b></code>
+ <p>Specifies a <code><i>range</i></code> of how many of a given element or attribute may appear.
+ For elements, a valid <code><i>range</i></code> can be <code>m..n</code> or <code>m...n</code>,
+ inclusive and exclusive of <code>n</code>, respectively,
+ where <code>m</code> and <code>n</code> are unsigned integers
+    and <code>m</code> &lt;= <code>n</code>. This notation was borrowed from the Ruby programming language.
+    There is also the special case <code>m..*</code> (same as <code>m...*</code>), which of course means unbounded.
+    <code>0..*</code> is the default, meaning none or any number of the element may appear within the document.
+ For attributes, only <code>0..1</code> and <code>1..1</code> are valid, as an attribute may appear no more
+ than once in any given element, with 0..1 being the default.</p>
+ </li>
+
+ <li><code><b>?<i>option1,option2,...</i>?</b></code>
+ <p>Where <code><i>optionN</i></code> is set to an arbitrary group name.
+    This option name defines an option group to which the element belongs.
+ This specifies that one and only one of the elements sharing the same option group name
+ may appear within the target document. This can provide interesting relationships in that
+ elements and/or attributes having the same group names do not need to belong to the same parent!
+ But be warned: this can create a contridiction should an ancestor and one of its
+ children partake of the same group. Do not do this as it will render your documents
+ invalid by definition.</p>
+ </li>
+
+ <li><code><b>!<i>collection</i>!</b></code>
+ <p>Where <code><i>collection</i></code> is set to an arbitrary collection name.
+ This collection name defines a collective group to which the element belongs,
+    and specifies that all such elements and/or attributes must appear together
+ within the document. Any given attribute or element can only belong to a single collection.</p>
+ </li>
+
+ <li><b><code>*<i>track</i>*</code></b>
+ <p>This is a special marker which does not dictate structure or content.
+ It has a special purpose for XML datastores, like that implemented in DBXML.
+ It specifies that this element should be specifically indexed.
+ Tracking of particular XML elements in a datastore
+    allows for fast search and retrieval, and more importantly
+ fast aggregate functions to be applied to their values.</p>
+ </li>
+
+</ul>
+
+<br/>
+<p>Here's an example of a die:
+<pre>
+ <Nword> =nword= #1..2# /^N/ :varchar: </Nword>
+</pre>
+</p>
+
+<p>This die defines an XML tag named "Nword" to be any varchar beginning with the letter N and occurring
+only once or twice.</p>
+
+
+<br/>
+<h1>Namespaces and Schema Declarations</h1>
+<p>We have mentioned above xml:Proof's use of namespaces. In fact they are so fundamental, xml:Proof offers
+a variant notation for namespace declarations differing from the one recommended by the W3C.
+The W3C's recommendation is rather nebulous and clumsy, and further, clutters and obscures the information
+of relevance in an XML document. Therefore namespace declarations can be defined by document level
+processing instructions instead of within general element tags. Because XML processing instructions can
+be freely defined we have not violated any of the W3C standard by doing this,
+yet we have made our lives much improved!* This notation actually peacefully coexists with the
+standard notation because it in effect does nothing more than insert a document-level
+ATTLIST for the namespaces defined.</p>
+
+<p>Here is the top of an XML document using this alternate notation:
+
+<pre>
+ <?xml version="1.0" encoding="ISO-8859-1"?>
+ <?xml:ns prefix="example" uri="http://www.transami.net/namespace/testing"?>
+</pre>
+
+This is effectively translated by the XML Processor into:
+
+<pre>
+ <?xml version="1.0" encoding="ISO-8859-1"?>
+  <!DOCTYPE <i>docname</i> [
+  <!ATTLIST <i>docname</i> xmlns:example CDATA #FIXED 'http://www.transami.net/namespace/testing'>
+ ]>
+</pre>
+
+Thus the processing instruction <code>xml:ns</code> defines a namespace. The <code>prefix</code> and <code>uri</code>
+attributes can also be labeled <code>name</code> and <code>space</code>, respectively. Subsequently any tag
+or attribute prefixed with the <code>prefix</code> or <code>name</code> value will
+thus be associated to this declared namespace.
+
+<p>Obviously, to date, XML processors generally do not support
+this processing instruction, but it is hoped that this alternate notation will
+catch on in the XML community and be generally adopted as a new standard.
+In the meantime all xml:Proof processors should provide a means to convert between
+the two different notations.</p>
+
+<p>Schema declarations are similar to namespace declarations.
+They are declared via processing instructions as well. For example:
+
+<xmp>
+ <?xml version="1.0" encoding="ISO-8859-1"?>
+ <?xml:ns prefix="example" uri="http://www.transami.net/namespace/testing"?>
+  <?xml:schema uri="http://www.transami.net/namespace/xmlproof" url="example1.xps"?>
+  <?xml:schema uri="http://www.transami.net/namespace/xmlproof" url="example2.xps"?>
+</xmp>
+
+The <code>uri</code> attribute, or its synonym <code>space</code>, defines the kind of schema that is being utilized.
+This is the specific namespace uri as defined by the schema's designers. In the case of xml:Proof, it
+is "http://www.transami.net/namespace/xmlproof". It would be another string for, say, RELAX-NG or Schematron.</p>
+
+<p>The <code>url</code> attribute, or its synonym <code>source</code>, is a path name to
+the .xps file. In this example case, it is a local file in the same location as the XML document itself.
+This is necessary since proofsheets cannot be embedded in the document like DTDs can.</p>
+
+<p>Interestingly, more than one schema can be declared. In so doing, schema declarations appearing higher
+in the document have precedence over those appearing later. This allows for a means of cast overriding. In our example,
+for any given tag within the document, a matching die will first be searched for in <code>example1.xps</code>.
+Only if it is not found there will <code>example2.xps</code> be searched. This can be quite useful when using borrowed
+schemas. You can add new entries or override existing entries without actually changing the originals.</p>
+
+<p><font size="2">*Note: In fact one rule has been violated: the reserved use of an instruction name matching /^xml/i. well, :-p</font></p>
+
+
+<br/>
+<h1> Example </h1>
+<p>First let us look at a "traditional", "simple" XML-Schema example:</p>
+
+<xmp>
+
+ <?xml version="1.0" encoding="ISO-8859-1"?>
+
+ <shiporder orderid="889923"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:noNamespaceSchemaLocation="shiporder.xsd">
+ <orderperson>John Smith</orderperson>
+ <shipto>
+ <name>Ola Nordmann</name>
+ <address>Langgt 23</address>
+ <city>4000 Stavanger</city>
+ <country>Norway</country>
+ </shipto>
+ <item>
+ <title>Empire Burlesque</title>
+ <note>Special Edition</note>
+ <quantity>1</quantity>
+ <price>10.90</price>
+ </item>
+ <item>
+ <title>Hide your heart</title>
+ <quantity>1</quantity>
+ <price>9.90</price>
+ </item>
+ </shiporder>
+
+</xmp>
+
+<xmp>
+
+ <?xml version="1.0" encoding="ISO-8859-1" ?>
+ <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
+ <xs:element name="shiporder">
+ <xs:complexType>
+ <xs:sequence>
+ <xs:element name="orderperson" type="xs:string"/>
+ <xs:element name="shipto">
+ <xs:complexType>
+ <xs:sequence>
+ <xs:element name="name" type="xs:string"/>
+ <xs:element name="address" type="xs:string"/>
+ <xs:element name="city" type="xs:string"/>
+ <xs:element name="country" type="xs:string"/>
+ </xs:sequence>
+ </xs:complexType>
+ </xs:element>
+ <xs:element name="item" maxOccurs="unbounded">
+ <xs:complexType>
+ <xs:sequence>
+ <xs:element name="title" type="xs:string"/>
+ <xs:element name="note" type="xs:string" minOccurs="0"/>
+ <xs:element name="quantity" type="xs:positiveInteger"/>
+          <xs:element name="price" type="xs:decimal"/>
+ </xs:sequence>
+ </xs:complexType>
+ </xs:element>
+ </xs:sequence>
+ <xs:attribute name="orderid" type="xs:string" use="required"/>
+ </xs:complexType>
+ </xs:element>
+ </xs:schema>
+
+</xmp>
+
+<br/>
+<p>Now here's the near equivalent in xml:Proof, with a little extra added to show off:</p>
+
+<xmp>
+
+ <?xml version="1.0" encoding="ISO-8859-1"?>
+ <?xml:ns name="example" space="http://www.transami.net/namespace/testing"?>
+  <?xml:schema source="example1.xps" space="http://www.transami.net/namespace/xmlproof"?>
+
+ <example:shiporder orderid="889923">
+ <orderperson>John Smith</orderperson>
+ <shipto>
+ <name>Ola Nordmann</name>
+ <address>Langgt 23</address>
+ <city>4000 Stavanger</city>
+ <country>Norway</country>
+ </shipto>
+ <item>
+ <title>Empire Burlesque</title>
+ <note>Special Edition</note>
+ <quantity>1</quantity>
+ <price>10.90</price>
+ </item>
+ <item>
+ <title>Hide your heart</title>
+ <quantity>1</quantity>
+ <price>9.90</price>
+ </item>
+ </example:shiporder>
+
+</xmp>
+
+<xmp>
+
+ <?xml version="1.0" encoding="ISO-8859-1" ?>
+ <?xml:ns name="example" space="http://www.transami.net/namespace/testing" ?>
+ <?xml:ns name="xp" space="http://www.transami.net/namespace/xmlproof" ?>
+
+ <xp:proofsheet>
+ <example:shiporder orderid=":int:">
+ <orderperson> :text: ?bywho? </orderperson>
+ <orderclerk> :text: ?bywho? </orderclerk>
+    <shipto> #1..1# @tag@
+ <name> :text: </name>
+ <address> :text: </address>
+ <city> :text: </city>
+ <country> :text: </country>
+ </shipto>
+    <item> #1..*# @tag@
+ <title> :text: </title>
+ <note> :text: </note>
+ <quantity> =use_again= :unsigned: </quantity>
+ <overstock> =use_again= </overstock>
+ <price> :float: </price>
+ </item>
+ </example:shiporder>
+ </xp:proofsheet>
+
+</xmp>
+
+<br/>
+<p>Notice the difference in the way namespaces are used. XML-Schema has its own namespace for every tag,
+separate from the XML document's, which makes sense, since it uses its own set of tag names. Furthermore the document
+itself is forced to use an "instance" of the schema as the namespace of its elements and attributes.
+Thus the document is "confined" to the schema. xml:Proof on the other hand uses the same tag and attribute names
+as the document itself and thus the same freely defined namespace. Using the same namespace gives the two sets of
+data a greater association, without the limitations imposed by XML-Schema, and, last but certainly not least,
+is far easier to comprehend.</p>
+
+
+<br/>
+<h1>Functionality</h1>
+<p>So all this is well and fine, but how does xml:Proof actually work? Well, that is fairly simple really.
+xml:Proof simply matches XPaths between the proofsheet and the document sharing the same namespace,
+such that a particular die is applied to any corresponding document element or attribute. From the example given above,
+you'll notice that the <code>item</code> element appears twice within the XML document. These two elements
+match to the single proofsheet element of the same name. For instance the absolute XPath,
+<code>example:shiporder/item/quantity</code>, containing <code>1</code> in the document, matches the same
+absolute XPath, <code>example:shiporder/item/quantity</code>, containing <code>=use_again= :unsigned:</code> in the proofsheet.
+This points out an important restriction to proofsheets: any possible absolute XPath within a proofsheet should only
+be accounted for once.*</p>
+
+<p>Arbitrary dies, cast via the <code><arbit></code> tag, overlap in applicability with the general absolute dies.
+Thus if an element or attribute in a target XML document matches against an absolute XPath in the proofsheet and also
+matches against an arbitrary XPath, it must conform to both dies. Further arbitrary dies themselves may overlap in
+applicability.</p>
+
+<p><font size="2">*Note: If this is not adhered to it is not likely to cause a problem.
+The first occurrence of a die will be matched and that will be that.</font></p>
+
+
+<br/>
+<h1>XMLProof/Ruby API</h1>
+
+<p>The XMLProof/Ruby API is a Ruby library for using xml:Proof.
+You can find documentation for its use here: <a href="doc/index.html">xml:Proof/Ruby API Documentation</a>.</p>
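+
+<p>For instance, validating a target document against a proofsheet might look like the following.
+(This is a minimal, hypothetical sketch; <code>order.xps</code> and <code>order.xml</code> are made-up file names.)</p>
+
+<pre>
+  require 'xmlproof/proofreader'
+
+  # parse the proofsheet, then proofread the target document against it
+  proof = XMLProof::Proof.new('order.xps')
+  reader = XMLProof::Proofreader.new(proof)
+  if reader.proofread_document('order.xml')
+    puts 'valid'
+  else
+    reader.errors.each do |namespace, xpath, message|
+      puts "#{xpath}: #{message}"
+    end
+  end
+</pre>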
+
+
+<br/>
+<h1>Conclusion</h1>
+<p>xml:Proof, like all other schemas, is not a cure-all for schema definition.
+It has its strengths and weaknesses. But no other schema, of which we are aware,
+matches its capabilities or ease of use. In the end, we believe, and we hope others will agree,
+xml:Proof is by far and away a better way to schema XML. It solves the majority of the requirements of a
+schematic meta-language while minimizing the complexity associated with them.
+Best of all, it won't give you headaches.</p>
+
+
+<br/>
+<hr/>
+<h1>After Thoughts</h1>
+<p>Honestly, I wish prefixes and namespaces were inherited, such that a non-prefixed tag inherits the prefix of its
+closest prefixed ancestor. Thus in the example:
+
+<pre>
+ <p:a>
+ <b/>
+ </p:a>
+</pre>
+
+<code><b></code> inherits the prefix <code>p</code> from <code><p:a></code>.</p>
+
+<p>Further, the root tag of a document, without a given prefix, would inherit the prefix of the first appearing namespace.
+Thus, with this new notation, there is no such beast called the <i>default namespace</i>.
+All tags and attributes, in the same fashion, either have a prefix or inherit one. The only exception is when no
+namespaces are declared. In this case all tags and attributes, "erroneously" prefixed or not
+belong to the <i>null-namespace</i>, or <i>empty-namespace</i>. Effectively this means no namespace.
+The null-namespace can be referenced by a prefix by setting the namespace uri to an empty string.</p>
+
+<p>Doesn't this just make more sense? This seems so appealing to me that I almost made this
+a requirement of xml:Proof! Oh well, the W3C keeps us working hard.</p>
+
+<br/>
+<br/>
+<br/>
+
+</body>
+
+</html>
+
diff --git a/lib/xmlproof/about.rb b/lib/xmlproof/about.rb
new file mode 100644
index 0000000..df7b83e
--- /dev/null
+++ b/lib/xmlproof/about.rb
@@ -0,0 +1,30 @@
+# XMLProof/Ruby - About
+# <a schema for the rest of us/>
+# Copyright (c) Thomas Sawyer, Ruby License
+
+module XMLProof
+
+ TITLE = "XMLProof/Ruby"
+ RELEASE = "02.06.13"
+ STATUS = "RC1"
+ AUTHOR = "Thomas Sawyer"
+ EMAIL = "transami@transami.net"
+
+ Package = "#{TITLE}"
+ Version = "v#{RELEASE} #{STATUS}"
+ Copyright = "Copyright © 2002 #{AUTHOR}, #{EMAIL}"
+
+ def XMLProof.about
+ puts
+ puts XMLProof::Package
+ puts XMLProof::Version
+ puts XMLProof::Copyright
+ puts
+ end
+
+end
+
+# Write about info to standard out
+if $0 == __FILE__
+ XMLProof.about
+end
\ No newline at end of file
diff --git a/lib/xmlproof/datatypes.rb b/lib/xmlproof/datatypes.rb
new file mode 100644
index 0000000..5cb3e36
--- /dev/null
+++ b/lib/xmlproof/datatypes.rb
@@ -0,0 +1,109 @@
+# XMLProof/Ruby - Datatypes
+# <a schema for the rest of us/>
+# Copyright (c) Thomas Sawyer, Ruby License
+
+require 'parsedate'
+
+module XMLProof
+
+ #
+ class Datatypes
+
+ # REGULAR EXPRESSION CONSTANTS
+ SYMBOL = /^[a-zA-Z0-9_]+$/
+ BOOLEAN = /^(yes|true|1|on|no|false|0|off|-1|nil)$/
+ BOOLEAN_TRUE = /^(yes|true|1|on)$/
+ BOOLEAN_FALSE = /^(no|false|0|off|-1|nil)$/
+ STRING = /.*/
+ INTEGER = /^([+]|[-])?\d+$/
+ UNSIGNED_INTEGER = /^\d+$/
+ FLOAT = /^[-+]?\d*\.?\d+$/
+ TIME = /\d+:\d+/
+ TIMESTAMP = /20\d{2}(-|\/)((0[1-9])|(1[0-2]))(-|\/)((0[1-9])|([1-2][0-9])|(3[0-1]))(T|\s)(([0-1][0-9])|(2[0-3])):([0-5][0-9]):([0-5][0-9])/
+ EMAIL = /^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$/
+ UK_POSTCODE = /^[a-zA-Z]{1,2}[0-9][0-9A-Za-z]{0,1} {0,1}[0-9][A-Za-z]{2}$/
+ US_POSTCODE = /^\d{5}-\d{4}|\d{5}|[A-Z]\d[A-Z] \d[A-Z]\d$/
+ INDIAN_POSTCODE = /^\d{3}\s?\d{3}$/
+ US_PHONE = /^\D?(\d{3})\D?\D?(\d{3})\D?(\d{4})$/
+ DUTCH_PHONE = /(^\+[0-9]{2}|^\+[0-9]{2}\(0\)|^\(\+[0-9]{2}\)\(0\)|^00[0-9]{2}|^0)([0-9]{9}$|[0-9\-\s]{10}$)/
+ IP4 = /^(25[0-5]|2[0-4][0-9]|[0-1]{1}[0-9]{2}|[1-9]{1}[0-9]{1}|[1-9])\.(25[0-5]|2[0-4][0-9]|[0-1]{1}[0-9]{2}|[1-9]{1}[0-9]{1}|[1-9]|0)\.(25[0-5]|2[0-4][0-9]|[0-1]{1}[0-9]{2}|[1-9]{1}[0-9]{1}|[1-9]|0)\.(25[0-5]|2[0-4][0-9]|[0-1]{1}[0-9]{2}|[1-9]{1}[0-9]{1}|[0-9])$/
+ CREDITCARD = /^(\d{4}[- ]){3}\d{4}|\d{16}$/
+ CREDITCARD_MAJOR = /^((4\d{3})|(5[1-5]\d{2})|(6011))-?\d{4}-?\d{4}-?\d{4}|3[4,7]\d{13}$/
+ UK_INSURANCE = /^[A-Za-z]{2}[0-9]{6}[A-Za-z]{1}$/
+ ISBN = /^\d{9}[\d|X]$/
+ US_CURRENCY = /^\$?([0-9]{1,3},([0-9]{3},)*[0-9]{3}|[0-9]+)(.[0-9][0-9])?$/
+ SSN = /^\d{3}-\d{2}-\d{4}$/
+
+
+ attr_reader :datatypes
+
+ def initialize
+
+ @datatypes = {
+ 'symbol' => { 'valid' => proc {|x| SYMBOL.match(x)}, 'typecast' => proc {|x| x.to_s} },
+      'boolean' => { 'valid' => proc {|x| BOOLEAN.match(x)}, 'typecast' => proc {|x| BOOLEAN_TRUE === x} },
+      'bool' => { 'valid' => proc {|x| BOOLEAN.match(x)}, 'typecast' => proc {|x| BOOLEAN_TRUE === x} },
+      'yesno' => { 'valid' => proc {|x| BOOLEAN.match(x)}, 'typecast' => proc {|x| BOOLEAN_TRUE === x} },
+ 'string' => { 'valid' => proc {|x| STRING.match(x)}, 'typecast' => proc {|x| x.to_s} },
+ 'text' => { 'valid' => proc {|x| STRING.match(x)}, 'typecast' => proc {|x| x.to_s} },
+ 'varchar' => { 'valid' => proc {|x| STRING.match(x)}, 'typecast' => proc {|x| x.to_s} },
+ 'var' => { 'valid' => proc {|x| STRING.match(x)}, 'typecast' => proc {|x| x.to_s} },
+ 'char' => { 'valid' => proc {|x| STRING.match(x)}, 'typecast' => proc {|x| x.to_s} },
+ 'int' => { 'valid' => proc {|x| INTEGER.match(x)}, 'typecast' => proc {|x| x.to_i} },
+ 'integer' => { 'valid' => proc {|x| INTEGER.match(x)}, 'typecast' => proc {|x| x.to_i} },
+ 'serial' => { 'valid' => proc {|x| INTEGER.match(x)}, 'typecast' => proc {|x| x.to_i} },
+ 'unsigned' => { 'valid' => proc {|x| UNSIGNED_INTEGER.match(x)}, 'typecast' => proc {|x| x.to_i} },
+ 'float' => { 'valid' => proc {|x| FLOAT.match(x)}, 'typecast' => proc {|x| x.to_f} },
+ 'double' => { 'valid' => proc {|x| FLOAT.match(x)}, 'typecast' => proc {|x| x.to_f} },
+ 'decimal' => { 'valid' => proc {|x| FLOAT.match(x)}, 'typecast' => proc {|x| x.to_f} },
+ 'numeric' => { 'valid' => proc {|x| FLOAT.match(x)}, 'typecast' => proc {|x| x.to_f} },
+ 'time' => { 'valid' => proc {|x| TIME.match(x)}, 'typecast' => proc {|x| x.to_s} },
+ 'timestamp' => { 'valid' => proc {|x| TIMESTAMP.match(x)}, 'typecast' => proc {|x| x.to_s} },
+ 'date' => { 'valid' => proc {|x| complex_date_valid?(x)}, 'typecast' => proc {|x| x.to_s} },
+ 'ssn' => { 'valid' => proc {|x| SSN.match(x)}, 'typecast' => proc {|x| x.to_s} }
+ }
+
+ end
+
+    # Registers a new datatype, given a validation proc and a typecast proc.
+ def add_datatype(datatype_name, valid_proc, typecast_proc)
+ @datatypes[datatype_name] = { 'valid' => valid_proc, 'typecast' => typecast_proc }
+ end
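+
+    # e.g. (hypothetical datatype):
+    #   dts = Datatypes.new
+    #   dts.add_datatype('hex', proc {|x| /^[0-9a-fA-F]+$/.match(x)}, proc {|x| x.hex})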
+
+    # Removes a named datatype.
+    def remove_datatype(datatype_name)
+      @datatypes.delete(datatype_name)
+ end
+
+    # Returns whether a value is valid for the named datatype (unrecognized datatypes are treated as cdata and always pass).
+ def valid?(datatype_name, value)
+ validity = true # assume cdata, so true
+ if @datatypes.has_key?(datatype_name)
+ if not @datatypes[datatype_name]['valid'].call(value.to_s)
+ validity = false
+ end
+ end
+ return validity
+ end
+
+    # Typecasts a value via the named datatype's typecast proc.
+ def typecast(datatype_name, value)
+ if @datatypes.has_key?(datatype_name)
+ return @datatypes[datatype_name]['typecast'].call(value.to_s)
+ end
+ end
+
+    # Returns whether a date string can be parsed as a date.
+ def complex_date_valid?(date_string)
+ pd = ParseDate.parsedate(date_string)
+ if pd[0]
+ return true
+ else
+ return false
+ end
+ end
+
+ end # Datatypes
+
+end # XMLProof
+
diff --git a/lib/xmlproof/die.rb b/lib/xmlproof/die.rb
new file mode 100644
index 0000000..35b9f25
--- /dev/null
+++ b/lib/xmlproof/die.rb
@@ -0,0 +1,162 @@
+# XMLProof/Ruby - Die
+# <a schema for the rest of us/>
+# Copyright (c) 2002 Thomas Sawyer, Ruby License
+
+require 'xmlproof/datatypes'
+
+module XMLProof
+
+ # Represents a single parsed proof entry in a tag or attribute
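+  # e.g. the cast " =nword= #1..2# /^N/ :varchar: " (from the accompanying documentation)
+  # names the die 'nword' and requires content matching /^N/, of datatype varchar,
+  # occurring once or twice.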
+ class Die
+
+ attr_reader :arbitrary, :namespace, :xpath
+ attr_reader :name, :regexp, :datatype, :range
+ attr_reader :order, :closure, :option, :collection, :track
+
+ #
+ def initialize(cast, elora)
+
+ case elora
+ when String
+ @arbitrary = true
+ @namespace = nil
+ @xpath = elora
+ else
+ @arbitrary = false
+ @namespace = elora.inherited_namespace
+ @xpath = elora.absolute_xpath(true)
+ end
+
+ # primary
+ @name = nil
+ @regexp = nil
+ @datatype = nil
+ @range = nil
+
+ # secondary
+ @order = nil
+ @closure = nil
+ @option = nil
+ @collection = nil
+ @track = nil
+
+ # parse
+ parse_primary(cast)
+ parse_secondary(cast, elora) if not @arbitrary
+
+ end
+
+ # Parses the primary parts of a die
+ def parse_primary(cast)
+
+ entry = cast
+
+ re_name = Regexp.new(/\s+=(.*)=\s+/)
+ re_regexp = Regexp.new(/\s*\/(.*)\/\s+/)
+ re_datatype = Regexp.new(/\s+:(.*):\s+/)
+ re_range = Regexp.new(/\s+#(\d+)(\.\.\.?)([\d]+|[*]?)#\s*/)
+
+ md = re_name.match(entry)
+ @name = md[1].strip if md
+
+ md = re_regexp.match(entry)
+ @regexp = Regexp.new(md[1].strip) if md
+
+ md = re_datatype.match(entry)
+ @datatype = md[1].strip if md
+
+ md = re_range.match(entry)
+ if md
+ # continue parsing range
+ atleast = md[1].to_i
+ if md[3] == "*" # open ended
+ atmost = (1.0/0.0) # infinite
+ else
+ if md[2] == ".."
+ atmost = md[3].to_i
+ else #...
+ atmost = md[3].to_i - 1
+ end
+ end
+ if not atleast > atmost then # good range
+ @range = Range.new(atleast, atmost)
+ end
+ end
+
+ end
+
+ # Parses the secondary parts of a die
+ def parse_secondary(cast, elora)
+
+ entry = cast
+
+      re_order = Regexp.new('\s+@(tag|content-a\.\.z|content-z\.\.a)@\s+', Regexp::IGNORECASE)
+      re_closure = Regexp.new('\s+\+(inclusive|exclusive)\+\s+', Regexp::IGNORECASE)
+      re_option = Regexp.new(/\s+\?(.*)\?\s+/)
+      re_collection = Regexp.new(/\s+\!(.*)\!\s+/)
+      re_track = Regexp.new('\s+\*(yes|true)\*\s+', Regexp::IGNORECASE)
+
+ md = re_order.match(entry)
+ if md and elora.is_a?(REXML::Element)
+ contains = []
+ elora.elements.each do |el|
+ contains << el.name
+ end
+ @order = [ md[1].strip, contains ]
+ end
+
+ md = re_closure.match(entry)
+ if md and elora.is_a?(REXML::Element)
+ contains = []
+ elora.elements.each do |el|
+ contains << el.name
+ end
+ @closure = [ md[1].strip, contains ]
+ end
+
+ md = re_option.match(entry)
+ @option = md[1].strip.split(',') if md
+
+ md = re_collection.match(entry)
+ @collection = md[1].strip if md
+
+ md = re_track.match(entry)
+ @track = true if md
+
+ end
+
+ # Returns given cast to die's datatype
+ def typecast(x)
+      dts = Datatypes.new
+ return nil if x == nil # if nil
+ if @datatype
+ if dts.valid?(@datatype, x)
+ return dts.typecast(@datatype, x)
+ end
+ end
+ return x
+ end
+
+    # Returns whether this die has only a name and is therefore a link to another die
+    def link?
+      return (@name and not @regexp and not @datatype and not @range and not @order and not @closure and not @option and not @collection and not @track)
+ end
+
+    # Links this die to another, copying the other die's markers
+ def link(to_die)
+ @regexp = to_die.regexp
+ @datatype = to_die.datatype
+ @range = to_die.range
+ if not @arbitrary
+ @order = to_die.order
+ @closure = to_die.closure
+ @option = to_die.option
+ @collection = to_die.collection
+ @track = to_die.track
+ end
+ end
+
+ end # Die
+
+end # XMLProof
diff --git a/lib/xmlproof/proof.rb b/lib/xmlproof/proof.rb
new file mode 100644
index 0000000..42d1fa4
--- /dev/null
+++ b/lib/xmlproof/proof.rb
@@ -0,0 +1,148 @@
+# XMLProof/Ruby - Proof
+# <a schema for the rest of us/>
+# Copyright (c)2002 Thomas Sawyer, Ruby License
+
+require 'tomslib/rerexml'
+require 'xmlproof/die'
+require 'xmlproof/proofsheet'
+
+module XMLProof
+
+ # A whole Proof created by parsing a Proofsheets object
+ class Proof
+
+ attr_reader :proofsheets, :absolute_dies, :arbitrary_dies, :options, :collections
+
+ # Loads and parses proofsheet schema.
+ def initialize(proofsheets)
+
+ if proofsheets.is_a?(Proofsheets)
+ @proofsheets = proofsheets
+ else
+ @proofsheets = Proofsheets.new
+ @proofsheets.load_proofsheets(proofsheets)
+ end
+
+      @absolute_dies = {} # absolute_dies[[namespace, xpath]] = die
+      @arbitrary_dies = {} # arbitrary_dies[xpath] = die
+
+ @options = Hash.new([]) # options[option] = [ [namespace, xpath], ... ]
+ @collections = Hash.new([]) # collections[collection] = [ [ namespace, xpath ], ... ]
+
+ parse_proofsheet
+ end
+
+ # Returns the absolute die given the namespace and xpath.
+ def die(namespace, xpath)
+ return @absolute_dies[[namespace, xpath]]
+ end
+
+
+ private # --------------------------------------------
+
+    # Parses the proofsheets into absolute dies, arbitrary dies, options, and collections.
+ def parse_proofsheet
+
+ @proofsheets.each do |proofsheet|
+
+ xps_document = proofsheet.document
+
+ # load all casted general elements
+ REXML::XPath.each(xps_document.root,'descendant::*') do |element|
+ namespace = element.inherited_namespace
+ # absolute dies
+ if namespace != "http://www.transami.net/namespace/xmlproof" and namespace != "http://transami.net/namespace/xmlproof"
+ if element.has_text?
+ if not element.text.strip.empty?
+ xpath = element.absolute_xpath(true)
+ cast = " " + element.text.strip + " "
+ @absolute_dies[[namespace, xpath]] = Die.new(cast, element) if not @absolute_dies.has_key?([namespace, xpath])
+ end
+ end
+ end
+ end
+
+ # load all casted general attributes
+      REXML::XPath.each(xps_document.root,'descendant::*[@*]') do |element|
+ element.attributes.each_attribute do |attribute|
+ prefix = attribute.inherited_prefix
+ name = attribute.name
+ namespace = attribute.inherited_namespace
+ if namespace != "http://www.w3.org/2000/xmlns/" and prefix != 'xmlns' and name != 'xmlns'
+ if namespace != "http://www.transami.net/namespace/xmlproof" and namespace != "http://transami.net/namespace/xmlproof"
+ if not attribute.value.strip.empty?
+ xpath = attribute.absolute_xpath(true)
+ cast = " " + attribute.value.strip + " "
+ @absolute_dies[[namespace, xpath]] = Die.new(cast, attribute) if not @absolute_dies.has_key?([namespace, xpath])
+ end
+ end
+ end
+ end
+ end
+
+ # load all casted arbitrary dies
+      REXML::XPath.each(xps_document.root,'descendant::x:arbit', {'x' => 'http://www.transami.net/namespace/xmlproof'}) do |element|
+ if element.has_text?
+ if not element.text.strip.empty?
+ # get xpath
+ if element.attributes.has_key?('xpath')
+ xpath = element.attributes['xpath']
+ else
+ raise "no xpath attribute given for arbitrary"
+ end
+ cast = " " + element.text.strip + " "
+ @arbitrary_dies[xpath] = Die.new(cast, xpath) if not @arbitrary_dies.has_key?(xpath)
+ end
+ end
+ end
+
+ # link dies
+ defined_dies = @absolute_dies.select { |key, die| die.link? == false } | @arbitrary_dies.select { |key, die| die.link? == false }
+ defined_dies_lookup = {}
+ defined_dies.each do |key, die|
+ defined_dies_lookup[die.name] = die
+ end
+ undefined_absolute_dies = @absolute_dies.select { |key, die| die.link? == true }
+ undefined_absolute_dies.each do |key, die|
+ if defined_dies_lookup.has_key?(die.name)
+ @absolute_dies[key].link(defined_dies_lookup[die.name])
+ end
+ end
+ undefined_arbitrary_dies = @arbitrary_dies.select { |key, die| die.link? == true }
+ undefined_arbitrary_dies.each do |key, die|
+ if defined_dies_lookup.has_key?(die.name)
+ @arbitrary_dies[key].link(defined_dies_lookup[die.name])
+ end
+ end
+
+ # load options and collections
+ @absolute_dies.each do |key, die|
+ # options
+ if die.option
+ die.option.each do |opt|
+ @options[opt] = @options[opt] | [key]
+ end
+ end
+ # collections
+ if die.collection
+ @collections[die.collection] = @collections[die.collection] | [key]
+ end
+ end
+
+ end
+
+ end
+
+ # Returns an element's first text node or an attribute's value.
+ def content(element_or_attribute)
+ case element_or_attribute
+ when REXML::Attribute
+ return element_or_attribute.value
+ when REXML::Element
+ return element_or_attribute.text
+ end
+ end
+
+ end # Proof
+
+end # XMLProof
diff --git a/lib/xmlproof/proofreader.rb b/lib/xmlproof/proofreader.rb
new file mode 100644
index 0000000..df4572f
--- /dev/null
+++ b/lib/xmlproof/proofreader.rb
@@ -0,0 +1,357 @@
+# XMLProof/Ruby - Proofreader
+# <a schema for the rest of us/>
+# Copyright (c) 2002 Thomas Sawyer, Ruby License
+
+require 'xmlproof/proof'
+require 'tomslib/communication'
+
+include TomsLib::Communication
+
+module XMLProof
+
+ # Validator class
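+  # e.g. (hypothetical usage):
+  #   reader = Proofreader.new(Proof.new('order.xps'))
+  #   reader.proofread_document('order.xml')   #=> true if valid
+  #   reader.errors                            #=> [[namespace, xpath, message], ...]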
+ class Proofreader
+
+ include TomsLib::Communication
+
+ attr_reader :errors, :undefined, :xml_document, :proof #, :proofsheets
+
+ def initialize(proof=nil)
+ @errors = nil
+ @undefined = nil
+ @xml_document = nil
+ @proof = proof
+ #@proofsheets = nil
+ end
+
+ # Reinitializes the errors array
+ def reset
+ @errors = nil
+ @undefined = nil
+ end
+
+ # Returns whether the last document proofread was valid
+ def valid?
+ if @errors
+ return @errors.empty?
+ else
+ return nil # no validation has been run
+ end
+ end
+
+ # Set proof to use for validation. This also allows for adhoc validation.
+ def use_proof(proof)
+ @proof = proof
+ #@proofsheets = @proof.proofsheets
+ end
+
+ # Set target document to be validated. This also allows for adhoc validation.
+ # if use_internal_proof is false (default), xml can be a string, url, local path or REXML::Document
+ # if use_internal_proof is true, xml must be a url or local path
+ def use_document(xml, use_internal_proof=false)
+ if use_internal_proof
+ proofsheets = Proofsheets.new
+ @xml_document = proofsheets.load_document_proofsheets(xml)
+ @proof = Proof.new(proofsheets)
+ else
+ if xml.is_a?(REXML::Document)
+        @xml_document = xml
+ else
+ @xml_document = REXML::Document.new(fetch_xml(xml))
+ end
+ end
+ end
+
+
+ # Validates a given XML document against set proof.
+ def proofread_document(xml)
+ use_document(xml)
+ proofread # returns valid?
+ end
+
+ # Validates a given XML document against internal schema instructions.
+ def proofread_document_internal(xml_url)
+ use_document(xml_url, true)
+ proofread # returns valid?
+ end
+
+
+ # Returns validity of element or attribute against regexp.
+ def regexp?(elora, die=nil)
+      die = @proof.die(elora.inherited_namespace, elora.absolute_xpath) if not die
+ validity = true # default /.*/ so default valid is true
+ if die
+ if die.regexp
+ if not md = die.regexp.match(content(elora))
+ validity = false
+ error = "REGEXP '#{content(elora)}' /#{die.regexp.source}/"
+ @errors << [die.namespace, die.xpath, error]
+ end
+ end
+ end
+ return validity
+ end
+
+ # Returns validity of element or attribute against datatype.
+ def datatype?(elora, die=nil)
+      die = @proof.die(elora.inherited_namespace, elora.absolute_xpath) if not die
+ validity = true # default cdata, so valid
+ if die
+ if die.datatype
+ dts = Datatypes.new
+ if not dts.valid?(die.datatype, content(elora))
+ validity = false
+ error = "DATATYPE '#{content(elora)}' :#{die.datatype}:"
+ @errors << [die.namespace, die.xpath, error]
+ end
+ end
+ end
+ return validity
+ end
+
+ # Returns validity of element against order.
+ def order?(element, die=nil)
+      die = @proof.die(element.inherited_namespace, element.absolute_xpath) if not die
+ validity = true # default unordered, so valid
+ if die
+ if die.order and element.has_elements?
+ order_how = die.order[0]
+ order_has = die.order[1]
+          # load children element tag names into an array
+ children = []
+ element.elements.each do |child_element|
+ children << child_element.name
+ end
+ # how do you want it?
+ case order_how
+ when 'tag'
+ # remove adjacent duplicates
+ packed_children = []
+ children.each do |c|
+ packed_children << c if packed_children.last != c
+ end
+ # remove non-intersecting children items
+ same_children = packed_children
+ diff = children - order_has
+ diff.each do |d|
+ same_children.delete(d)
+ end
+ # remove non-intersecting ordered items
+ same_has = order_has.dup
+ diff = order_has - same_children
+ diff.each do |d|
+ same_has.delete(d)
+ end
+ # correct?
+ if same_children != same_has
+ validity = false
+ error = "ORDER tag"
+ @errors << [die.namespace, die.xpath, error]
+ end
+ when 'content-a..z'
+ sorted_children = children.sort
+            if not sorted_children == children
+ validity = false
+ error = "ORDER content-a..z"
+ @errors << [die.namespace, die.xpath, error]
+ end
+ when 'content-z..a'
+ sorted_children = children.sort.reverse
+            if not sorted_children == children
+ validity = false
+ error = "ORDER content-z..a"
+ @errors << [die.namespace, die.xpath, error]
+ end
+ end
+ end
+ end
+ return validity
+ end
+
+ # Returns validity of element against closure.
+ def closure?(element, die=nil)
+      die = @proof.die(element.inherited_namespace, element.absolute_xpath) if not die
+      validity = true # default none, so valid
+ if die
+ if die.closure and element.has_elements?
+ closure_how = die.closure[0]
+ closure_has = die.closure[1]
+        # load children element tag names into an array
+ children = []
+ element.elements.each do |child_element|
+ children << child_element.name
+ end
+ case closure_how
+ when 'inclusive'
+ diff = closure_has - children
+ if not diff.empty?
+ validity = false
+ error = "CLOSURE inclusive"
+ @errors << [die.namespace, die.xpath, error]
+ end
+ when 'exclusive'
+ diff = closure_has - children
+ if not diff.empty?
+ validity = false
+ error = "CLOSURE exclusive"
+ @errors << [die.namespace, die.xpath, error]
+ end
+ diff = children - closure_has
+ if not diff.empty?
+ validity = false
+ error = "CLOSURE exclusive"
+ @errors << [die.namespace, die.xpath, error]
+ end
+          end
+        end
+      end
+      return validity
+    end
+
+
+ # Returns validity of the applicable array of elements or attributes against range.
+    # This marker validates in quasi-relation to the whole document.
+ def range?(eloras, die)
+ validity = true # default range, 0..*
+ if die
+ if die.range
+ # divide and count eloras by parent groups
+ parent_groups = Hash.new(0)
+ eloras.each do |elora|
+ case elora
+ when REXML::Attribute
+ parent_groups[elora.element] += 1
+ when REXML::Element
+ parent_groups[elora.parent] += 1
+ end
+ end
+ parent_groups.each do |parent, tally|
+ if not die.range === tally
+ validity = false
+ error = "RANGE #{tally} ##{die.range}#"
+ @errors << [die.namespace, die.xpath, error]
+ end
+ end
+ end
+ end
+ return validity
+ end
+
+
+ # Returns validity against option.
+    # This marker validates in relation to the whole document.
+ def option?
+ if @xml_document
+ validity = true
+ count = {}
+ @proof.options.each do |option, locator_array|
+ count[option] = []
+ locator_array.each do |locator|
+ namespace = locator[0]
+ xpath = locator[1]
+ REXML::XPath.each(@xml_document, xpath) do |elora|
+ count[option] << locator if elora.inherited_namespace == namespace
+ end
+ end
+ end
+ count.each do |option, locator_array|
+ if locator_array.size > 1 then
+ validity = false
+ error = "OPTION #{option}"
+ @errors << [locator_array[0][0], locator_array[0][1], error]
+ end
+ end
+ return validity
+ else
+ raise "no XML document provided to validate options against"
+ end
+ end
+
+ # Returns validity against collection.
+    # This marker validates in relation to the whole document.
+ def collection?
+ if @xml_document
+ validity = true
+ missing = {}
+ @proof.collections.each do |collection, locator_array|
+ missing[collection] = locator_array.dup
+ locator_array.each do |locator|
+ namespace = locator[0]
+ xpath = locator[1]
+ REXML::XPath.each(@xml_document, xpath) do |element_or_attribute|
+ missing[collection].delete([namespace, xpath]) if element_or_attribute.inherited_namespace == namespace
+ end
+ end
+ end
+ missing.each do |collection, locator_array|
+ if not locator_array.empty?
+ validity = false
+ error = "COLLECTION #{collection}"
+ @errors << [locator_array[0][0], locator_array[0][1], error]
+ end
+ end
+ return validity
+ else
+ raise "no XML document provided to validate collections against"
+ end
+ end
+
+
+ # Validates entire document against the proofsheet.
+ def proofread
+
+ @errors = []
+
+ # absolute dies
+ @proof.absolute_dies.each do |key, die|
+ applicable_eloras = REXML::XPath.match(@xml_document, die.xpath).select do |elora|
+ die.namespace == elora.inherited_namespace
+ end
+ validate_eloras(applicable_eloras, die)
+ end
+
+ # arbitrary dies
+ @proof.arbitrary_dies.each do |key, die|
+ applicable_eloras = REXML::XPath.match(@xml_document, die.xpath)
+ validate_eloras(applicable_eloras, die)
+ end
+
+ option?
+ collection?
+
+ return @errors.empty? # valid?
+
+ end
+
+
+ private # --------------------------------------------------
+
+    # Applies the per-item marker checks to each applicable element or attribute, then checks range across the group.
+ def validate_eloras(applicable_eloras, die)
+ applicable_eloras.each do |elora|
+ if elora.is_a?(REXML::Attribute)
+ regexp?(elora, die)
+ datatype?(elora, die)
+ elsif elora.is_a?(REXML::Element)
+ regexp?(elora, die)
+ datatype?(elora, die)
+ order?(elora, die)
+ closure?(elora, die)
+ end
+ end
+ range?(applicable_eloras, die)
+ end
+
+
+ # Returns an element's first text node or an attribute's value.
+ def content(element_or_attribute)
+ case element_or_attribute
+ when REXML::Attribute
+ return element_or_attribute.value
+ when REXML::Element
+ return element_or_attribute.text
+ end
+ end
+
+ end # Proofreader
+
+end # XMLProof
diff --git a/lib/xmlproof/proofsheet.rb b/lib/xmlproof/proofsheet.rb
new file mode 100644
index 0000000..9b09dcc
--- /dev/null
+++ b/lib/xmlproof/proofsheet.rb
@@ -0,0 +1,59 @@
+# XMLProof/Ruby - Proofsheet
+# <a schema for the rest of us/>
+# Copyright (c) 2002 Thomas Sawyer, Ruby License
+
+require 'tomslib/rerexml'
+require 'tomslib/communication'
+
+include TomsLib::Communication
+
+module XMLProof
+
+  # A Proofsheet is a single proofsheet of a Proof
+ class Proofsheet
+
+ attr_reader :url, :document
+
+ def initialize(url)
+ @url = url
+ @document = REXML::Document.new(fetch_xml(@url))
+ end
+
+ end # Proofsheet
+
+
+ # Proofsheets is an array of Proofsheet objects (tied together they become a whole Proof)
+ class Proofsheets < Array
+
+ # Loads proofsheets.
+ def load_proofsheets(*xps_sources)
+ xps_sources.each do |url|
+ # we assume this is a valid xps
+ self << Proofsheet.new(url)
+ end
+ end
+
+ # Loads an XML document's proofsheets as given by its internal schema processing instructions.
+ # It subsequently returns the REXML::Document.
+ # This only takes a url (or local file path).
+ def load_document_proofsheets(xml_url)
+ xml_document = REXML::Document.new(fetch_xml(xml_url))
+ xml_document.schema_instructions.each do |si|
+ uri = si.attributes['uri'].downcase
+ url = si.attributes['url']
+      # be sure we only get relevant schema types (uri was already downcased above)
+      if uri == "http://www.transami.net/namespace/xmlproof"
+ if is_relative?(url)
+ relative_to_xml_url = File.dirname(xml_url) + '/' + url
+ else
+ relative_to_xml_url = url
+ end
+ self << Proofsheet.new(relative_to_xml_url)
+ end
+ end
+ return xml_document
+ end
+
+ end # Proofsheets
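+
+  # A minimal usage sketch (illustrative; "example.xml" is a hypothetical
+  # document declaring its proofsheets via xml:schema processing instructions):
+  #
+  #   proofsheets = Proofsheets.new
+  #   xml_document = proofsheets.load_document_proofsheets('example.xml')
+  #   proofsheets.each { |ps| puts ps.url }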
+
+end # XMLProof
diff --git a/work/xmlproof-spec.html b/work/xmlproof-spec.html
new file mode 100644
index 0000000..59b3e79
--- /dev/null
+++ b/work/xmlproof-spec.html
@@ -0,0 +1,474 @@
+<html>
+
+<head>
+
+ <title>XML:Proof Specifications</title>
+
+ <style>
+
+ span.n { font-size: 8pt; font-family: helvetica; font-weight: bold }
+
+ p { font-size: 10pt; font-family: arial }
+
+ h1 { font-family: arial }
+
+ h2 { font-family: arial }
+
+ h3 { font-family: arial }
+
+ h4 { font-family: arial }
+
+ </style>
+
+</head>
+
+<body>
+
+<table width="100%" cellspacing="0" cellpadding="0">
+<tr>
+ <td>
+ <font size="10">xml:Proof</font><br/>
+ <font size="4"><a schema for the rest of us/></font>
+ </td>
+ <td align="right">
+ <font size="2">v.02.06.10 Beta</font><br/>
+ <font size="2"> Thomas Sawyer (c)2002</font>
+ </td>
+</tr>
+</table>
+
+<br/>
+<br/>
+
+<center>
+<h1>Specification</h1>
+</center>
+
+<ol>
+
+<li><b>Prologue</b>
+
+ <ol>
+
+ <li>General comprehension of the W3C XML, Namespace, and XPath Recommendations,
+ and the Regular Expression Specification (see 10.1) is presumed by this document.</li>
+
+  <li><i>xml:Proof</i> is an XML schema. It was designed to be easy to use
+  and to cover a vast portion of the XML schematic problem set.</li>
+
+ <li>A <i>proofsheet</i> is a valid XML document conforming to the xml:Proof specification.</li>
+
+  <li>A <i>target document</i> is an XML document to which a proofsheet is intended to be applied.</li>
+
+ <li>A <i>proof</i> is a parsed ordered set of proofsheets used to validate a target document.</li>
+
+ <li>A <i>proof-processor</i> is a program able to parse proofsheets and validate XML documents against such proofsheets.
+ The term <i>processor</i>, when unqualified, shall refer to this special case, proof-processor, in contrast to
+ the more general case, XML processor, throughout this document.</li>
+
+ <li>A <i>symbol</i> or <i>symbolic name</i> is a string of characters, matching against the regular expression /\w*/.</li>
+
+  <li>For the purposes of this specification, a <i>tag</i> will be the symbolic name of an XML element or attribute.
+  Element tags will be notated as <code><<i>tagname</i>></code> and attribute tags will be notated as <code><i>tagname</i>=</code></li>
+
+ </ol>
+
+</li>
+
+
+<li><b>Special Tags</b>
+
+ <ol>
+
+ <li><i>Special tags</i> are proofsheet tags defined by the xml:Proof specification, in contrast to
+ <i>general tags</i> which instead derive from a target document.</li>
+
+ <li>The special root tag of a proofsheet is <code><proofsheet></code>.
+ The root tag can take the alternate form of <code><schema></code>.
+ Both forms of the root tag serve the exact same purpose.</li>
+
+  <li>The <code><arbit></code> tag is a special xml:Proof tag used to indicate arbitrary
+  location within the target document. It has a single valid attribute, <code>xpath=</code>,
+  which specifies the valid XPath to be matched against in the target document.</li>
+
+  <li>Both the root tag and the arbit tag, and its xpath attribute tag, must be prefixed in reference to the
+  xml:Proof namespace (3.5). While any arbitrary, but valid, prefix can be used to accomplish this,
+  it is recommended that you use <code>xp:</code> for consistency and clarity.</li>
+
+  <li>All the general tags in a proofsheet are the same as those of the target documents
+  it intends to model. The hierarchy of those elements is also the same.</li>
+
+ </ol>
+
+</li>
+
+
+<li><b>Die</b>
+
+ <ol>
+
+  <li>A <i>Die</i> is a syntactical construction which defines constraints on a target document.</li>
+
+  <li>The sole text node of any proofsheet element and the value of any proofsheet attribute,
+  with the exception of the special <code>xpath=</code> attribute, is a <i>die</i>.</li>
+
+  <li>A die may also be referred to as a <i>cast</i> and the act of writing or applying them, <i>casting</i>.</li>
+
+  <li>A die consists of an unordered list of <i>markers</i> separated by whitespace.</li>
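+
+  <li>(example) The die <code>=qty= /^\d+$/ :unsigned: #1..1#</code>, cast on a hypothetical
+  <code><quantity></code> element, names the die <code>qty</code>, constrains the element's
+  content to digits, types it as an unsigned integer, and requires exactly one occurrence.
+  The tag, name, and datatype here are illustrative only.</li>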
+
+ </ol>
+
+</li>
+
+<li><b>Markers</b>
+
+ <ol>
+
+ <li><b>Name Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>=<i>name</i>=</b></code></li>
+
+      <li>A <i>name marker</i> is a symbol, enclosed by equal signs, which identifies the die
+      such that it can be reused elsewhere in the proofsheet.</li>
+
+ <li>Name markers provide a convenient means of die reuse.</li>
+
+ </ol>
+
+ </li>
+
+ <li><b>Regular Expression Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>/<i>regular expression</i>/</b></code></li>
+
+      <li>A <i>regular expression marker</i> is a syntactical structure conforming to the Regular Expression specification
+      (see 10.1.4).</li>
+
+ <li>A <i>regular expression marker</i> dictates that the content of an element or attribute of the target document
+ must match against it.</li>
+
+ <li>If no regular expression marker is present in a die, the die's regular expression effectively
+ defaults to <code>/.*/</code></li>
+
+ </ol>
+
+ </li>
+
+ <li><b>Datatype Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>:<i>datatype</i>:</b></code></li>
+
+      <li>The <code><i>datatype marker</i></code> is an arbitrary symbol, enclosed by colons, naming the type of data to be contained
+      by an element or attribute of the target document.</li>
+
+      <li>The xml:Proof specification does not dictate the selection of datatypes; this task is instead relinquished to the processor.</li>
+
+ <li>A <i>datatype marker</i> dictates that the content of an element or attribute of the target document
+ must conform to it.</li>
+
+ <li>Datatype markers allow an xml:Proof processor to typecast XML content into its underlying language of implementation.</li>
+
+      <li>A sufficient xml:Proof processor should provide a means to add and alter its internal datatypes.</li>
+
+      <li>Any datatype not recognized by the xml:Proof processor shall be considered a <code>string</code>.</li>
+
+ </ol>
+
+ </li>
+
+ <li><b>Order Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>@<i>order</i>@</b></code></li>
+
+ <li>The <i>order marker</i> is a symbol enclosed in at-signs, which specifies the sort order of an element's child elements.</li>
+
+ <li>Valid values for <code><i>order</i></code> are <code>tag</code>, <code>content-a..z</code>,
+ <code>content-z..a</code> and <code>none</code>.</li>
+
+ <li>The <code>tag</code> value specifies that the child elements must be in the order as given within
+ the proofsheet.</li>
+
+      <li>The <code>content-a..z</code> and <code>content-z..a</code> values specify that the child elements
+      must appear in alphanumerical sequence, ascending and descending, respectively, by their first text node.</li>
+
+      <li>The <code>none</code> value specifies that the child elements need not appear in any particular order, and is the
+      default setting if no order marker is specified within a die.</li>
+
+      <li>The order marker does not specify that each of the child elements must occur,
+      or that one and only one of each of said children must appear. It only specifies that,
+      should they appear, they do so in the given order.</li>
+
+ <li>The order marker is only applicable to an element, not an attribute, and the element must have child elements.</li>
+
+ </ol>
+
+ </li>
+
+ <li><b>Set Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>+<i>set</i>+</b></code></li>
+
+      <li>The <i>set marker</i> is a symbol, enclosed in addition signs, which specifies the closure requirements
+      on an element's child elements.</li>
+
+ <li>Valid values for <code><i>set</i></code> are <code>inclusive</code>, <code>exclusive</code> and
+ <code>none</code>.</li>
+
+      <li>The <code>inclusive</code> value indicates that all the child elements must be present as given by the proofsheet,
+      but other elements may appear along with them.</li>
+
+      <li>The <code>exclusive</code> value indicates that all the child elements must be present as given by the proofsheet,
+      and that no other elements may appear along with them.</li>
+
+      <li>The value <code>none</code> indicates no requirements for the appearance of child elements, and is the default
+      if no set marker is specified in the die.</li>
+
+ </ol>
+
+ </li>
+
+ <li><b>Range Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>#<i>range</i>#</b></code></li>
+
+      <li>The <i>range marker</i> is a symbol, enclosed by pound signs, which specifies the minimum and maximum number of a given
+      element or attribute that may appear within the target document.</li>
+
+      <li>For elements, a valid <code><i>range</i></code> can be <code>m..n</code> or <code>m...n</code>,
+      inclusive and exclusive of <code>n</code>, respectively, where <code>m</code> and <code>n</code> are unsigned integers
+      and <code>m</code> < <code>n</code>, such that m is the minimum number and n is the maximum number.</li>
+
+      <li>An element may also take a range marker of the form <code>m..*</code>, equivalent to <code>m...*</code>,
+      specifying a minimum number (m) and an unbounded maximum number.</li>
+
+ <li>The default range marker for an element, if none is specified within the die, is <code>0..*</code>.</li>
+
+ <li>For attributes, a valid <code><i>range</i></code> can only be <code>0..1</code> or <code>1..1</code>.</li>
+
+ <li>The default range marker for an attribute, if none is specified within the die, is <code>0..1</code>.</li>
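+
+      <li>(example) <code>#1..3#</code> permits one, two, or three occurrences of an element,
+      while <code>#2...5#</code>, being exclusive of the upper bound, permits two, three, or four.</li>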
+
+ </ol>
+
+ </li>
+
+ <li><b>Option Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>?<i>option</i>?</b></code></li>
+
+      <li>The <i>option marker</i> is an arbitrary symbol, or unordered list of symbols separated by commas, enclosed by question marks,
+      which specifies the element or attribute belongs to a group of similarly marked elements and attributes,
+      such that one and only one of such elements or attributes may appear within the target document.</li>
+
+      <li>Elements and/or attributes partaking of an identical <code><i>option</i></code> do not need to belong to the same parent, although
+      this can create a contradiction should an ancestor and one of its children partake of the same option group,
+      rendering a document invalid by definition.</li>
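+
+      <li>(example) casting two hypothetical elements, <code><phone></code> and <code><email></code>,
+      both with <code>?contact?</code>, specifies that one and only one of the two may appear
+      within the target document.</li>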
+
+ </ol>
+
+ </li>
+
+ <li><b>Collection Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>!<i>collection</i>!</b></code></li>
+
+      <li>A <i>collection marker</i> is an arbitrary symbol, enclosed by exclamation marks, which specifies
+      the element or attribute belongs to a group of similarly marked elements and attributes,
+      such that all of the elements and/or attributes sharing the same collection marker
+      must appear together within the target document.</li>
+
+ <li>Any given element or attribute can only belong to a single collection group.</li>
+
+ </ol>
+
+ </li>
+
+ <li><b>Track Marker</b>
+
+ <ol>
+
+ <li>(syntax) <code><b>*<i>track</i>*</b></code></li>
+
+ <li>The <i>track marker</i>, which is a boolean symbol enclosed by asterisks, is a special marker
+ which does not dictate structure or content. Rather it has a special purpose for XML datastores,
+ specifying that the element or attribute should be specifically indexed.</li>
+
+ <li>Valid boolean symbols for <code><i>track</i></code> are <code>yes</code>, <code>no</code>, <code>true</code>,
+ or <code>false</code>, with the negative notations being the default.</li>
+
+      <li>The tracking of particular XML elements in a datastore allows for fast search and retrieval,
+      and fast aggregate functions to be applied to their values.</li>
+
+ </ol>
+
+ </li>
+
+ </ol>
+
+</li>
+
+
+<li><b>File Extension and Namespace</b>
+
+ <ol>
+
+ <li>The file extension for a proofsheet is <code>.xps</code>.</li>
+
+ <li>xml:Proof is fully namespace aware, both in functionality and in application to an XML Document.
+ Since namespace prefixes serve as mere proxies to actual namespaces, any arbitrary prefix can be used,
+ but the namespace itself, i.e. the uri, must be unique and persistent.</li>
+
+ <li>The <i>xml:Proof namespace</i> shall be <code>http://www.transami.net/namespace/xmlproof</code>.</li>
+
+ <li>Within a proofsheet, the namespace of all of xml:Proof's special elements and attributes must
+ belong to the xml:Proof namespace.</li>
+
+ <li>Within a proofsheet, all general xml:Proof elements and attributes must partake of the
+ same namespace as their counterparts within the target document.</li>
+
+ </ol>
+
+</li>
+
+
+<li><b>Schema Declarations</b>
+
+ <ol>
+
+ <li>A proof-processor will recognize <i>schema declarations</i> made via XML processing instructions
+ within the target document.</li>
+
+ <li>(Syntax) <code><?xml:schema uri="<i>uri</i>" url="<i>url</i>" segment="<i>segment</i>"?></code></li>
+
+ <li>The <code>uri</code> attribute, or its synonym <code>space</code>, defines the kind of schema that is being utilized.
+ This is the specific namespace uri as defined by the schema's designers. In the case of xml:Proof, it
+ is "http://www.transami.net/namespace/xmlproof". It would be another string for, say, RELAX-NG or Schematron.</li>
+
+  <li>The <code>url</code> attribute, or its synonym <code>source</code>, is a path to
+  the .xps file. The url can be a local path. The url is necessary since proofsheets cannot be embedded in the
+  target document like DTDs can.</li>
+
+  <li>The <code>segment</code> attribute, or its synonym <code>fragment</code>, is an optional attribute
+  specifying an XPath which selects only a portion of the .xps file to use as the proofsheet.</li>
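+
+  <li>(example) <code><?xml:schema uri="http://www.transami.net/namespace/xmlproof" url="example1.xps"?></code>
+  declares that the target document is to be validated against the proofsheet <code>example1.xps</code>,
+  here a hypothetical file local to the target document.</li>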
+
+  <li>Interestingly, more than one schema can be declared within a given target document.
+  In so doing, schema declarations appearing earlier within the document have precedence
+  over those appearing later. This allows for a means of cast overriding.</li>
+
+  <li>Note that this schema declaration notation violates one W3C recommendation in a minor way:
+  the reserved use of an instruction name matching /^xml/i.</li>
+
+ </ol>
+
+</li>
+
+
+<li><b>Namespace Declarations</b>
+
+ <ol>
+
+ <li>This xml:Proof specification offers a variant notation for namespace declarations, differing
+ from the W3C recommendation. The W3C's recommendation is here considered somewhat nebulous and clumsy,
+ and further, clutters and obscures the information of relevance within an XML document.</li>
+
+ <li>A proof-processor will recognize <i>namespace declarations</i> made via XML processing instructions
+ within the target document.</li>
+
+  <li>(Syntax) <code><?xml:ns prefix="<i>prefix</i>" uri="<i>uri</i>"?></code></li>
+
+ <li>The <code>prefix</code> and <code>uri</code> attribute tags can also be labeled
+ <code>name</code> and <code>space</code>, respectively.</li>
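+
+  <li>(example) <code><?xml:ns prefix="example" uri="http://www.transami.net/namespace/testing"?></code>
+  binds the prefix <code>example:</code> to the given namespace for the remainder of the document.</li>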
+
+  <li>This specification recommends the use of namespace declarations via document-level processing instructions,
+  instead of within general element tags as recommended by the W3C.</li>
+
+  <li>This notation can coexist with the standard notation because, in effect, all the namespace processing
+  instruction specifies is the insertion of a document-level ATTLIST for the namespaces thus defined.</li>
+
+ <pre>
+  <!DOCTYPE <i>docname</i> [
+    <!ATTLIST <i>docname</i> xmlns:<i>prefix</i> CDATA '<i>uri</i>'>
+  ]>
+ </pre>
+
+ <li>Obviously, many XML processors do not support this processing instruction. It is hoped that they will
+ adopt this improved notation over time as it is a very simple and useful addition.</li>
+
+ <li>A proof-processor will provide the means to convert between this notation and the standard notation.</li>
+
+ </ol>
+
+</li>
+
+<li><b>Functionality</b>
+
+ <ol>
+
+  <li>A proof-processor validates a target document by matching namespaces and XPaths between the proofsheet
+  and the target document, such that all target document elements and attributes are validated
+  against their corresponding proofsheet's dies.</li>
+
+  <li>Any possible absolute XPath within a proofsheet should only be accounted for once.
+  If this is not adhered to, it is not likely to cause an error. The proof-processor should only match against
+  the first occurrence of an absolute die within the proofsheet.</li>
+
+  <li>The special <arbit> element overlaps in application with the general elements and attributes.
+  In other words, a target document's element or attribute must conform to both an arbitrary die and a general die
+  should both be applicable.</li>
+
+  <li>The special <arbit> element overlaps in application with other arbitrary assignments.
+  In other words, a target document's element or attribute must conform to all applicable arbitrary dies.</li>
+
+ </ol>
+
+</li>
+
+
+<li><b>Appendix</b>
+
+ <ol>
+
+ <li><b>References</b>
+
+ <ol>
+
+ <li><a href="http://www.w3.org/TR/REC-xml">W3C XML Recommendation</a></li>
+
+      <li><a href="http://www.w3.org/TR/REC-xml-names/">W3C Namespaces Recommendation</a></li>
+
+ <li><a href="http://www.w3.org/TR/xpath">W3C XPath Recommendation</a></li>
+
+ <li><a href="http://www.opengroup.org/onlinepubs/007908799/xbd/re.html">Regular Expressions Specification </a></li>
+
+ </ol>
+
+ </li>
+
+ </ol>
+
+</li>
+
+</ol>
+
+<br/>
+<br/>
+<br/>
+
+</body>
+
+</html>
+
diff --git a/work/xmlproof.html b/work/xmlproof.html
new file mode 100644
index 0000000..33e5fe9
--- /dev/null
+++ b/work/xmlproof.html
@@ -0,0 +1,501 @@
+<html>
+
+<head>
+
+ <title>XML:Proof Documentation</title>
+
+ <style>
+
+ p { font-family: arial }
+
+ h1 { font-family: arial }
+
+ h2 { font-family: arial }
+
+ h3 { font-family: arial }
+
+ h4 { font-family: arial }
+
+ </style>
+
+</head>
+
+<body>
+
+<table width="100%" cellspacing="0" cellpadding="0">
+<tr>
+ <td>
+ <font size="10">xml:Proof</font><br/>
+ <font size="4"><a schema for the rest of us/></font>
+ </td>
+ <td align="right">
+ <font size="2">v.02.06.10 Beta</font><br/>
+ <font size="2"> Thomas Sawyer (c)2002</font>
+ </td>
+</tr>
+</table>
+
+<br/>
+
+<h1>Introduction</h1>
+<p>A standard, extensible, and portable data language is extremely important to the IT community, thus
+the importance of XML technology, and its oft mention, as it has become the de facto standard in this regard.
+Yet it is widely held that XML is a bulky, less than optimal, implementation of such a standard. Fortunately
+there are ways in which the community itself can go about improving XML. xml:Proof is, in part, such an improvement.</p>
+
+<p>XML, in and of itself, is simply a general data/metadata format --a way to organize data such that both the content and
+description of that content are bound together. But in itself it does not dictate the validity of that data.
+To patch this "hole" in XML, DTD, or the Document Type Definition, was made part of the XML specification.
+DTD has advantages. It is actually broader in applicability as its syntax is not XML, but a superset, SGML.
+Yet this is also its disadvantage. The optimal solution would use XML itself as the base syntax,
+so that the same tools can be utilized for both the data/metadata markup and the validity markup.
+This is where schemas come into play. Schemas are XML document validity definitions, just as DTDs are,
+but they keep to the boundaries of XML itself, i.e. schemas are marked-up with XML.</p>
+
+<p>There are a number of schemas already available for XML, like TREX, RELAX, RELAX-NG, and Schematron.
+Officially the W3C has offered up their own XML-Schema. Should you place examples of all of these schemas side-by-side,
+along with an example of xml:Proof, xml:Proof will immediately distinguish itself from the rest.
+This is due to the fact that xml:Proof, unlike the others, actually utilizes the very tag names it intends to formalize,
+rather than invent a whole new set of its own. In fact xml:Proof has only two specially defined elements, the root tag
+and the arbit tag. As you can imagine this makes xml:Proof mark-up rather trivial to read and write.
+Additionally, xml:Proof manages with so few specialty tags and attributes because it utilizes an existing standard
+technology to do much of its dirty work, that is, Regular Expressions. Regular Expressions are well battle-tested
+in the field, and there is little good reason to reinvent the wheel. Regular Expressions are a schema, using a
+broader sense of the word, in their own right, applicable to strings of text. As there are plenty of strings of
+text in XML documents, it isn't too hard to see how this might be useful. xml:Proof intends usage of
+Regular Expressions insofar as is applicable in the context of XML. Utilizing this well known pre-existing technology,
+among its other features, xml:Proof is able to offer a unique and powerful schema to the XML community.</p>
+
+
+<br/>
+<h1>Overview</h1>
+
+<h2>File Extension and Namespace</h2>
+<p>Personally I hate file extensions. Why file systems do not include a place for this description as they do for
+the file name and last modified date is beyond me. I tend to blame MS-DOS. Oh well.
+The extension for xml:Proof proofsheets, as they are called, is <code>.xps</code>.</p>
+
+<p>xml:Proof is fully namespace aware, both in functionality and in application to an XML Document.
+This requires further explanation. Namespace prefixes serve as mere proxies to actual namespaces.
+So while any arbitrary prefix can be used, a namespace itself, i.e. the uri, must be unique and persistent.
+The namespace uri for xml:Proof is <code>http://www.transami.net/namespace/xmlproof</code>.
+This namespace must be used on all of xml:Proof's special tags in order for any xml:Proof processor to function.
+Further, when creating xml:Proof proofsheets, the namespaces of the elements and attributes being described must also be
+taken into consideration with regards to the target XML document's. The elements and attributes of the XML document,
+in other words, must partake of the same namespaces as their counterparts within the proofsheet.
+This will become clearer as you read the rest of this document.
+</p>
+
+
+<br/>
+<h2>Root and Arbit Tags</h2>
+<p>There are only two special tags in xml:Proof.</p>
+
+<p>The first is the <code><proofsheet></code> tag. It is the root element of any xml:Proof schema document,
+i.e. the proofsheet. The special root tag can take the alternate form of <code><schema></code>.
+Both serve the same purpose.</p>
+
+<p>The second special tag is the <code><arbit></code> tag. This tag is used to indicate an arbitrary
+location in the XML document. It has a single valid attribute, <code>xpath</code>, which specifies
+the matching XML document nodes to which its <i>die</i> corresponds (see below).
+</p>
+
+<p>Both of these special element tags and the special attribute should always be prefixed with
+reference to the xml:Proof namespace. While any arbitrary, but valid, prefix will do,
+it is recommended that you use <code>xp:</code> for consistency and clarity.</p>
+
+
+<br/>
+<h2>The Die is Cast</h2>
+<p>With the exception of the special tags, all other tag and attribute names of an xml:Proof proofsheet
+are the same as those of the target XML documents it intends to model. The hierarchy of those elements
+is also the same. Thus the proofsheet is nearly as readable as any applicable target document.
+The text, or content, of elements and attributes is, in xml:Proof nomenclature, called a <i>die</i>.
+It may also be referred to as a <i>cast</i> and the act of writing or applying them, <i>casting</i>.
+A die consists of the following optional <i>markers</i> separated by spaces:
+
+<ul>
+
+ <li><code><b>=<i>name</i>=</b></code>
+  <p>The <code><i>name</i></code> is an identifier which names the die
+ such that it can be reused later in the proofsheet. This
+ provides a convenient means of die reuse. An element or attribute having
+ only this marker and no other will gain its die characteristics from any other die
+ identically named which has other markers within its die.</p>
+ </li>
+
+ <li><code><b>/<i>regular expression</i>/</b></code>
+  <p>The <code><i>regular expression</i></code> marker dictates that the content
+  of an element or attribute must match against it to be considered valid.
+  The regular expression of a die effectively defaults to <code>.*</code> if excluded.</p>
+ </li>
+
+ <li><code><b>:<i>datatype</i>:</b></code>
+ <p>The <code><i>datatype</i></code> name is actually arbitrary, and can be anything desired.
+  xml:Proof itself doesn't care, but the utilization of an xml:Proof processor will!
+  Any given xml:Proof processor will generally "understand" the majority of common datatypes
+  and thus is able to typecast XML content into its underlying language of implementation.
+  Such is the main intent of datatype names, in addition to validating content in similar fashion
+  to regular expressions. A good xml:Proof processor should also provide a means to add and alter
+  its internally recognized datatypes. Any datatype it does not recognize will be treated as
+ <code>CDATA</code>, otherwise known as <code>string</code> or <code>text</code>.</p>
+ </li>
+
+ <li><code><b>@<i>order</i>@</b></code>
+  <p>The value of order may be <code>tag</code>, <code>content-a..z</code>, <code>content-z..a</code>, or <code>none</code>.
+  If <code>tag</code>, then all child elements of the casted element must appear in sequence as given within the proofsheet.
+  If <code>content-a..z</code> or <code>content-z..a</code>, then the content of all child elements of the casted element
+  must appear in alphanumerical order, ascending or descending, respectively. The value <code>none</code> specifies that
+  no specific sort order is required and is the default if the marker is not given within the die.
+  Keep in mind this marker does not specify that each of the child elements must occur or that
+  one and only one of said children may appear. Rather, it only specifies that, should they appear,
+  they do so in the given order. An element thus cast must have child elements.
+  This marker is not applicable to attributes and will be ignored if used thus.</p>
+ </li>
+
+ <li><code><b>+<i>closure</i>+</b></code>
+  <p>The value of closure can be <code>inclusive</code>, <code>exclusive</code>, or <code>none</code>.
+  Inclusivity means that all child elements of the cast element must appear as given in the proofsheet,
+  but other elements may appear as their siblings. Exclusivity means that all child elements of the cast element
+  must appear as given in the proofsheet and that no other elements may appear as their siblings.
+  If this marker is not present within the die, the default value of <code>none</code> is assumed, which
+  relinquishes any necessary closure on an element's child elements. An element thus cast as
+  <code>inclusive</code> or <code>exclusive</code> must have child elements.
+ This marker is not applicable to attributes and will be ignored if used thus.</p>
+ </li>
+
+ <li><code><b>#<i>range</i>#</b></code>
+ <p>Specifies a <code><i>range</i></code> of how many of a given element or attribute may appear.
+ For elements, a valid <code><i>range</i></code> can be <code>m..n</code> or <code>m...n</code>,
+ inclusive and exclusive of <code>n</code>, respectively,
+ where <code>m</code> and <code>n</code> are unsigned integers
+ and <code>m</code> < <code>n</code>. This notation was borrowed from the Ruby programming language.
+  There is also the special case <code>m..*</code> (same as <code>m...*</code>) which of course means the maximum is unbounded.
+ <code>0..*</code> is the default, meaning none or any number of the element may appear within the document.
+ For attributes, only <code>0..1</code> and <code>1..1</code> are valid, as an attribute may appear no more
+ than once in any given element, with 0..1 being the default.</p>
+ </li>
+
+ <li><code><b>?<i>option1,option2,...</i>?</b></code>
+  <p>Where <code><i>optionN</i></code> is set to an arbitrary group name.
+  This option name defines an option group to which the element belongs.
+ This specifies that one and only one of the elements sharing the same option group name
+ may appear within the target document. This can provide interesting relationships in that
+ elements and/or attributes having the same group names do not need to belong to the same parent!
+ But be warned: this can create a contridiction should an ancestor and one of its
+  But be warned: this can create a contradiction should an ancestor and one of its
+ invalid by definition.</p>
+ </li>
+
+ <li><code><b>!<i>collection</i>!</b></code>
+ <p>Where <code><i>collection</i></code> is set to an arbitrary collection name.
+ This collection name defines a collective group to which the element belongs,
+  and specifies that all of such elements and/or attributes must appear together
+ within the document. Any given attribute or element can only belong to a single collection.</p>
+ </li>
+
+ <li><b><code>*<i>track</i>*</code></b>
+ <p>This is a special marker which does not dictate structure or content.
+ It has a special purpose for XML datastores, like that implemented in DBXML.
+ It specifies that this element should be specifically indexed.
+ Tracking of particular XML elements in a datastore
+  allows for fast search and retrieval, and more importantly
+ fast aggregate functions to be applied to their values.</p>
+ </li>
+
+</ul>
+
+<br/>
+<p>Here's an example of a die:
+<pre>
+ <Nword> =nword= #1..2# /^N/ :varchar: </Nword>
+</pre>
+</p>
+
+<p>This die defines an XML tag named "Nword" to be any varchar beginning with the letter N and occurring
+only once or twice.</p>
+
+
+<br/>
+<h1>Namespaces and Schema Declarations</h1>
+<p>We have mentioned above xml:Proof's use of namespaces. In fact they are so fundamental, xml:Proof offers
+a variant notation for namespace declarations differing from the one recommended by the W3C.
+The W3C's recommendation is rather nebulous and clumsy, and further, clutters and obscures the information
+of relevance in an XML document. Therefore namespace declarations can be defined by document level
+processing instructions instead of within general element tags. Because XML processing instructions can
+be freely defined we have not violated any of the W3C standard by doing this,
+yet we have made our lives much improved!* This notation actually peacefully coexists with the
+standard notation because it in effect does nothing more than insert a document level
+ATTLIST for the namespaces defined.</p>
+
+<p>Here is the top of an XML document using this alternate notation:
+
+<pre>
+ <?xml version="1.0" encoding="ISO-8859-1"?>
+ <?xml:ns prefix="example" uri="http://www.transami.net/namespace/testing"?>
+</pre>
+
+This is effectively translated by the XML Processor into:
+
+<pre>
+ <?xml version="1.0" encoding="ISO-8859-1"?>
+  <!DOCTYPE <i>docname</i> [
+    <!ATTLIST <i>docname</i> xmlns:example CDATA 'http://www.transami.net/namespace/testing'>
+ ]>
+</pre>
+
+Thus the processing instruction <code>xml:ns</code> defines a namespace. The <code>prefix</code> and <code>uri</code>
+attributes can also be labeled <code>name</code> and <code>space</code>, respectively. Subsequently any tag
+or attribute prefixed with the <code>prefix</code> or <code>name</code> value will
+thus be associated with this declared namespace.</p>
+
+<p>Obviously, to date, XML Processors generally do not support
+this processing instruction, but it is hoped that this alternate notation will
+catch on in the XML community and will be generally adopted as a new standard.
+In the meantime all xml:Proof processors should provide a means to convert between
+the two different notations.</p>
+
+<p>Schema declarations are similar to namespace declarations.
+They are declared via processing instructions as well. For example:
+
+<xmp>
+ <?xml version="1.0" encoding="ISO-8859-1"?>
+ <?xml:ns prefix="example" uri="http://www.transami.net/namespace/testing"?>
+  <?xml:schema uri="http://www.transami.net/namespace/xmlproof" url="example1.xps"?>
+  <?xml:schema uri="http://www.transami.net/namespace/xmlproof" url="example2.xps"?>
+</xmp>
+
+The <code>uri</code> attribute, or its synonym <code>space</code>, defines the kind of schema that is being utilized.
+This is the specific namespace uri as defined by the schema's designers. In the case of xml:Proof, it
+is "http://www.transami.net/namespace/xmlproof". It would be another string for, say, RELAX-NG or Schematron.</p>
+
+<p>The <code>url</code> attribute, or its synonym <code>source</code>, is a path name to
+the .xps file. In this example case, it is a local file in the same location as the XML document itself.
+This is necessary since proofsheets cannot be embedded in the document like DTDs can.</p>
+
+<p>Interestingly, more than one schema can be declared. In so doing, schema declarations appearing higher
+in the document have precedence over those appearing later. This allows for a means of cast overriding. In our example,
+for any given tag within the document, a matching die will first be searched for in <code>example1.xps</code>.
+Only if it is not found there will <code>example2.xps</code> be searched. This can be quite useful in using borrowed
+schemas. You can add new entries or override existing entries without actually changing the original.</p>
+
+<p><font size="2">*Note: In fact one rule has been violated: the reserved use of an instruction name matching /^xml/i. well, :-p</font></p>
+
+
+<br/>
+<h1> Example </h1>
+<p>First let us look at a "traditional", "simple" XML-Schema example:</p>
+
+<xmp>
+
+ <?xml version="1.0" encoding="ISO-8859-1"?>
+
+ <shiporder orderid="889923"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:noNamespaceSchemaLocation="shiporder.xsd">
+ <orderperson>John Smith</orderperson>
+ <shipto>
+ <name>Ola Nordmann</name>
+ <address>Langgt 23</address>
+ <city>4000 Stavanger</city>
+ <country>Norway</country>
+ </shipto>
+ <item>
+ <title>Empire Burlesque</title>
+ <note>Special Edition</note>
+ <quantity>1</quantity>
+ <price>10.90</price>
+ </item>
+ <item>
+ <title>Hide your heart</title>
+ <quantity>1</quantity>
+ <price>9.90</price>
+ </item>
+ </shiporder>
+
+</xmp>
+
+<xmp>
+
+ <?xml version="1.0" encoding="ISO-8859-1" ?>
+ <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
+ <xs:element name="shiporder">
+ <xs:complexType>
+ <xs:sequence>
+ <xs:element name="orderperson" type="xs:string"/>
+ <xs:element name="shipto">
+ <xs:complexType>
+ <xs:sequence>
+ <xs:element name="name" type="xs:string"/>
+ <xs:element name="address" type="xs:string"/>
+ <xs:element name="city" type="xs:string"/>
+ <xs:element name="country" type="xs:string"/>
+ </xs:sequence>
+ </xs:complexType>
+ </xs:element>
+ <xs:element name="item" maxOccurs="unbounded">
+ <xs:complexType>
+ <xs:sequence>
+ <xs:element name="title" type="xs:string"/>
+ <xs:element name="note" type="xs:string" minOccurs="0"/>
+ <xs:element name="quantity" type="xs:positiveInteger"/>
+          <xs:element name="price" type="xs:decimal"/>
+ </xs:sequence>
+ </xs:complexType>
+ </xs:element>
+ </xs:sequence>
+ <xs:attribute name="orderid" type="xs:string" use="required"/>
+ </xs:complexType>
+ </xs:element>
+ </xs:schema>
+
+</xmp>
+
+<br/>
+<p>Now here's the near equivalent in xml:Proof, with a little extra added to show-off:</p>
+
+<xmp>
+
+ <?xml version="1.0" encoding="ISO-8859-1"?>
+ <?xml:ns name="example" space="http://www.transami.net/namespace/testing"?>
+  <?xml:schema source="example1.xps" space="http://www.transami.net/namespace/xmlproof"?>
+
+ <example:shiporder orderid="889923">
+ <orderperson>John Smith</orderperson>
+ <shipto>
+ <name>Ola Nordmann</name>
+ <address>Langgt 23</address>
+ <city>4000 Stavanger</city>
+ <country>Norway</country>
+ </shipto>
+ <item>
+ <title>Empire Burlesque</title>
+ <note>Special Edition</note>
+ <quantity>1</quantity>
+ <price>10.90</price>
+ </item>
+ <item>
+ <title>Hide your heart</title>
+ <quantity>1</quantity>
+ <price>9.90</price>
+ </item>
+ </example:shiporder>
+
+</xmp>
+
+<xmp>
+
+ <?xml version="1.0" encoding="ISO-8859-1" ?>
+ <?xml:ns name="example" space="http://www.transami.net/namespace/testing" ?>
+ <?xml:ns name="xp" space="http://www.transami.net/namespace/xmlproof" ?>
+
+ <xp:proofsheet>
+ <example:shiporder orderid=":int:">
+ <orderperson> :text: ?bywho? </orderperson>
+ <orderclerk> :text: ?bywho? </orderclerk>
+      <shipto> #1..1# @tag@
+ <name> :text: </name>
+ <address> :text: </address>
+ <city> :text: </city>
+ <country> :text: </country>
+ </shipto>
+      <item> #1..*# @tag@
+ <title> :text: </title>
+ <note> :text: </note>
+ <quantity> =use_again= :unsigned: </quantity>
+ <overstock> =use_again= </overstock>
+ <price> :float: </price>
+ </item>
+ </example:shiporder>
+ </xp:proofsheet>
+
+</xmp>
+
+<br/>
+<p>Notice the difference in the way namespaces are used. XML-Schema has its own namespace for every tag,
+separate from the XML document's, which makes sense, since it uses its own set of tag names. Furthermore the document
+itself is forced to use an "instance" of the schema as the namespace of its elements and attributes.
+Thus the document is "confined" to the schema. xml:Proof on the other hand uses the same tag and attribute names
+as the document itself and thus the same freely defined namespace. Using the same namespace gives the two sets of
+data a greater association, without the limitations imposed by XML-Schema, and, last but certainly not least,
+is far easier to comprehend.</p>
+
+
+<br/>
+<h1>Functionality</h1>
+<p>So all this is well and fine, but how does xml:Proof actually work? Well, that is fairly simple really.
+xml:Proof simply matches XPaths between the proofsheet and the document sharing the same namespace,
+such that a particular die is applied to any corresponding document element or attribute. From the example given above,
+you'll notice that the <code>item</code> element appears twice within the XML document. These two elements
+match to the single proofsheet element of the same name. For instance the absolute XPath,
+<code>example:shiporder/item/quantity</code>, containing <code>1</code> in the document, matches the same
+absolute XPath, <code>example:shiporder/item/quantity</code>, containing <code>=use_again= :unsigned:</code> in the proofsheet.
+This points out an important restriction to proofsheets: any possible absolute XPath within a proofsheet should only
+be accounted for once.*</p>
+
+<p>Arbitrary dies, cast via the <code><arbit></code> tag, overlap in applicability with the general absolute dies.
+Thus if an element or attribute in a target XML document matches against an absolute XPath in the proofsheet and also
+matches against an arbitrary XPath, it must conform to both dies. Further arbitrary dies themselves may overlap in
+applicability.</p>
+
+<p><font size="2">*Note: If this is not adhered to it is not likely to cause a problem.
+The first occurrence of a die will be matched and that will be that.</font></p>
+
+
+<br/>
+<h1>XMLProof/Ruby API</h1>
+
+<p>The XMLProof/Ruby API is a Ruby library for using xml:Proof.
+You can find documentation for its use here: <a href="doc/index.html">xml:Proof/Ruby API Documentation</a>.</p>
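+
+<p>For orientation, a minimal usage sketch in Ruby (names are illustrative; consult the
+API documentation above for the authoritative interface):</p>
+
+<pre>
+  require 'xmlproof'
+
+  # Load the proofsheets declared by a target document's processing instructions.
+  proofsheets = XMLProof::Proofsheets.new
+  xml_document = proofsheets.load_document_proofsheets('target.xml')
+</pre>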
+
+
+<br/>
+<h1>Conclusion</h1>
+<p>xml:Proof, like all other schemas, is not a cure-all for schema definition.
+It has its strengths and weaknesses. But no other schema, of which we are aware,
+matches its capabilities or ease of use. In the end, we believe, and we hope others will agree,
+xml:Proof is by far and away a better way to schema XML. It solves the majority of the requirements of a
+schematic meta-language while minimizing the complexity associated with them.
+Best of all it won't give you headaches.</p>
+
+
+
+<br/>
+<hr/>
+<h1>After Thoughts</h1>
+<p>Honestly I wish prefixes and namespaces were inherited, such that a non-prefixed tag inherits the prefix of its
+closest prefixed ancestor. Thus in the example:
+
+<pre>
+ <p:a>
+ <b/>
+ </p:a>
+</pre>
+
+<code><b></code> inherits the prefix <code>p</code> from <code><p:a></code>.</p>
+
+<p>Further, the root tag of a document, without a given prefix, would inherit the prefix of the first appearing namespace.
+Thus, with this new notation, there is no such beast called the <i>default namespace</i>.
+All tags and attributes, in the same fashion, either have a prefix or inherit one. The only exception is when no
+namespaces are declared. In this case all tags and attributes, "erroneously" prefixed or not,
+belong to the <i>null-namespace</i>, or <i>empty-namespace</i>. Effectively this means no namespace.
+The null-namespace can be referenced by a prefix by setting the namespace uri to an empty string.</p>
+
+<p>Doesn't this just make more sense? This seems so appealing to me that I almost made this
+a requirement of xml:Proof! Oh well, the W3C keeps us working hard.</p>
+
+<br/>
+<br/>
+<br/>
+
+</body>
+
+</html>
+
diff --git a/work/xmlproof.tgz b/work/xmlproof.tgz
new file mode 100644
index 0000000..79f1c59
Binary files /dev/null and b/work/xmlproof.tgz differ
|
riemann42/Music-Tag-OGG
|
40f07f902e9fe968579c701481faed31152a7d1f
|
Version 0.4101
|
diff --git a/lib/Music/Tag/OGG.pm b/lib/Music/Tag/OGG.pm
index 51086b1..10b8f16 100644
--- a/lib/Music/Tag/OGG.pm
+++ b/lib/Music/Tag/OGG.pm
@@ -1,252 +1,252 @@
package Music::Tag::OGG;
use strict; use warnings; use utf8;
-our $VERSION = '.4101';
+our $VERSION = '0.4101';
# Copyright © 2007,2008,2010 Edward Allen III. Some rights reserved.
#
# You may distribute under the terms of either the GNU General Public
# License or the Artistic License, as specified in the README file.
use Ogg::Vorbis::Header::PurePerl;
use base qw(Music::Tag::Generic);
our %tagmap = (
TITLE => 'title',
TRACKNUMBER => 'track',
TRACKTOTAL => 'totaltracks',
ARTIST => 'artist',
ALBUM => 'album',
COMMENT => 'comment',
DATE => 'releasedate',
GENRE => 'genre',
DISC => 'disc',
LABEL => 'label',
ASIN => 'asin',
MUSICBRAINZ_ARTISTID => 'mb_artistid',
MUSICBRAINZ_ALBUMID => 'mb_albumid',
MUSICBRAINZ_TRACKID => 'mb_trackid',
MUSICBRAINZ_SORTNAME => 'sortname',
RELEASECOUNTRY => 'countrycode',
MUSICIP_PUID => 'mip_puid',
MUSICBRAINZ_ALBUMARTIST => 'albumartist'
);
sub default_options {
{ vorbiscomment => "vorbiscomment" }
}
sub set_values {
return ( values %tagmap, 'picture');
}
sub saved_values {
return ( values %tagmap, 'picture');
}
sub ogg {
my $self = shift;
unless ((exists $self->{_OGG}) && (ref $self->{_OGG})) {
if ($self->info->get_data('filename')) {
$self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->get_data('filename'));
#$self->{_OGG}->load();
}
else {
return undef;
}
}
return $self->{_OGG};
}
sub get_tag {
my $self = shift;
if ( $self->ogg ) {
foreach ($self->ogg->comment_tags) {
my $comment = uc($_);
if (exists $tagmap{$comment}) {
my $method = $tagmap{$comment};
$self->info->set_data($method, $self->ogg->comment($comment));
}
else {
$self->status("Unknown comment: $comment");
}
}
$self->info->set_data('secs',$self->ogg->info->{"length"});
$self->info->set_data('bitrate',$self->ogg->info->{"bitrate_nominal"});
$self->info->set_data('frequency',$self->ogg->info->{"rate"});
}
else {
print STDERR "No ogg object created\n";
}
return $self;
}
sub set_tag {
my $self = shift;
unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->get_data('filename') . "\"")) {
$self->status("Failed to open ", $self->options->{vorbiscomment}, ". Not writing tag.\n");
return undef;
}
while (my ($t, $m) = each %tagmap) {
if (defined $self->info->get_data($m)) {
print COMMENT $t, "=", $self->info->get_data($m), "\n";
}
}
close (COMMENT);
return $self;
}
sub close {
my $self = shift;
$self->{_OGG} = undef;
}
1;
__END__
=pod
=head1 NAME
Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
=head1 SYNOPSIS
use Music::Tag;
my $filename = "/var/lib/music/artist/album/track.ogg";
my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
$info->get_info();
print "Artist is ", $info->artist;
=head1 DESCRIPTION
Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
Music::Tag::OGG objects should be created by Music::Tag.
=head1 REQUIRED DATA VALUES
No values are required (except filename, which is usually provided on object creation).
=head1 SET DATA VALUES
=over 4
=item B<title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label>
Uses standard tags for these
=item B<asin>
Uses custom tag "ASIN" for this
=item B<mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist>
Uses MusicBrainz recommended tags for these.
=back
=head1 METHODS
=over 4
=item B<default_options()>
Returns the default options for the plugin.
=item B<set_tag()>
Save info from object back to ogg vorbis file using L<vorbiscomment>
=item B<get_tag()>
Get info for object from ogg vorbis header using Ogg::Vorbis::Header::PurePerl
=item B<set_values()>
A list of values that can be set by this module.
=item B<saved_values()>
A list of values that can be saved by this module.
=item B<close()>
Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
=item B<ogg()>
Returns the Ogg::Vorbis::Header::PurePerl object.
=back
=head1 OPTIONS
=over 4
=item B<vorbiscomment>
The full path to the vorbiscomment program. Defaults to just "vorbiscomment", which assumes that vorbiscomment is in your path.
=back
=head1 BUGS
No known additional bugs provided by this Module
Please use github for bug tracking: L<http://github.com/riemann42/Music-Tag-OGG/issues|http://github.com/riemann42/Music-Tag-OGG/issues>.
=head1 SEE ALSO
L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<http://www.xiph.org/>
=head1 SOURCE
Source is available at github: L<http://github.com/riemann42/Music-Tag-OGG|http://github.com/riemann42/Music-Tag-OGG>.
=head1 AUTHOR
Edward Allen III <ealleniii _at_ cpan _dot_ org>
=head1 COPYRIGHT
Copyright © 2007,2008,2010 Edward Allen III. Some rights reserved.
=head1 LICENSE
This program is free software; you can redistribute it and/or modify
it under the same terms as Perl itself, either:
a) the GNU General Public License as published by the Free
Software Foundation; either version 1, or (at your option) any
later version, or
b) the "Artistic License" which comes with Perl.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See either
the GNU General Public License or the Artistic License for more details.
You should have received a copy of the Artistic License with this
Kit, in the file named "Artistic". If not, I'll be glad to provide one.
You should also have received a copy of the GNU General Public License
along with this program in the file named "Copying". If not, write to the
Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
Boston, MA 02110-1301, USA or visit their web page on the Internet at
http://www.gnu.org/copyleft/gpl.html.
# vim: tabstop=4
|
riemann42/Music-Tag-OGG
|
b7f97e9f3b588251b7dcae769eb4770a3fb15b1f
|
Dated Changes
|
diff --git a/CHANGES b/CHANGES
index 865d4a5..f28044e 100644
--- a/CHANGES
+++ b/CHANGES
@@ -1,33 +1,33 @@
CHANGES
- Release Name: 0.4101
+ Release Name: 0.4101 2010-08-16
* Updated to use new method system
* POD Changes
Release Name: 0.4001 2010-07-27
* Started using git and github
* Normalized version across plugins.
* POD Changes
* Revised Testing
Release Name: 0.35 2010-05-14
* Updated to work with Ogg::Vorbis::Header::PurePerl to fix bug 43789
Release Name: 0.33 2008-02-23
* Removed write from test
Release Name: 0.32
* Added Music::Tag prereq (was incorrect!)
Release Name: 0.31
* pod improvements
Release Name: 0.30
* Kwalitee and pod improvements
Release Name: 0.29 2008-02-11
* Fixed typo in synopsis (OGG was ogg)
* Now requires Music::Tag .29
diff --git a/MANIFEST b/MANIFEST
index 79abd47..e9f4627 100644
--- a/MANIFEST
+++ b/MANIFEST
@@ -1,14 +1,13 @@
CHANGES
lib/Music/Tag/OGG.pm
Makefile.PL
MANIFEST This list of files
META.yml Module meta-data (added by MakeMaker)
README
t/1-ogg.t
t/97-pod.t
t/98-pod-coverage.t
t/elise.ogg
-t/MusicTagTest.pm
t/options.conf
Copying
Artistic
diff --git a/t/MusicTagTest.pm b/t/MusicTagTest.pm
deleted file mode 100644
index 2c48b6a..0000000
--- a/t/MusicTagTest.pm
+++ /dev/null
@@ -1,213 +0,0 @@
-package MusicTagTest;
-use base 'Exporter';
-use vars '@EXPORT';
-use strict;
-use Test::More;
-use Digest::SHA1;
-use File::Copy;
-use 5.006;
-
-@EXPORT = qw(create_tag read_tag random_write random_read random_write_num random_read_num random_write_date random_read_date filetest);
-
-my %values = ();
-
-sub create_tag {
- my $filetest = shift;
- my $tagoptions = shift;
- my $testoptions = shift;
- return 0 unless (-f $filetest);
- my $tag = Music::Tag->new($filetest, $tagoptions, $testoptions->{plugin} || 'Auto');
- ok($tag, 'Object created: ' . $filetest);
- die unless $tag;
- ok($tag->get_tag, 'get_tag called: ' . $filetest);
- ok($tag->isa('Music::Tag'), 'Correct Class: ' . $filetest);
- return $tag;
-}
-
-sub read_tag {
- my $tag = shift;
- my $testoptions = shift;
- return 0 if (! exists $testoptions->{values_in});
- my $c=0;
- foreach my $meth (keys %{$testoptions->{values_in}}) {
- SKIP: {
- skip "$meth test skipped", 1 if (! $testoptions->{values_in}->{$meth});
- $c++;
- cmp_ok($tag->$meth, 'eq', $testoptions->{values_in}->{$meth});
- }
- }
- return $c;
-}
-
-sub random_write {
- my $tag = shift;
- my $testoptions = shift;
- return 0 if (! exists $testoptions->{random_write});
- my $c = 0;
- foreach my $meth (@{$testoptions->{random_write}}) {
- my $val = "test" . $meth . int(rand(1000));
- $values{$meth} = $val;
- ok($tag->$meth($val), 'auto write to ' . $meth);
- $c++;
- }
- return $c;
-}
-
-sub random_write_num {
- my $tag = shift;
- my $testoptions = shift;
- return 0 if (! exists $testoptions->{random_write_num});
- my $c = 0;
- foreach my $meth (@{$testoptions->{random_write_num}}) {
- my $val = int(rand(10))+1;
- $values{$meth} = $val;
- ok($tag->$meth($val), 'auto write to ' . $meth);
- $c++;
- }
- return $c;
-}
-
-sub random_write_date {
- my $tag = shift;
- my $testoptions = shift;
- return 0 if (! exists $testoptions->{random_write_date});
- my $c = 0;
- foreach my $meth (@{$testoptions->{random_write_date}}) {
- my $val = int(rand(1_800_000_000));
- $values{$meth} = $val;
- ok($tag->$meth($val), 'auto write to '. $meth);
- $c++;
- }
- return $c;
-}
-
-sub random_read {
- my $tag = shift;
- my $testoptions = shift;
- return 0 if (! exists $testoptions->{random_write});
- my $c = 0;
- foreach my $meth (@{$testoptions->{random_write}}) {
- cmp_ok($tag->$meth, 'eq', $values{$meth}, 'auto read of ' . $meth);
- $c++;
- }
- return $c;
-}
-
-sub random_read_num {
- my $tag = shift;
- my $testoptions = shift;
- return 0 if (! exists $testoptions->{random_write_num});
- my $c = 0;
- foreach my $meth (@{$testoptions->{random_write_num}}) {
- cmp_ok($tag->$meth, '==', $values{$meth}, 'auto read of ' . $meth);
- $c++;
- }
- return $c;
-}
-
-sub random_read_date {
- my $tag = shift;
- my $testoptions = shift;
- return 0 if (! exists $testoptions->{random_write_date});
- my $c = 0;
- foreach my $meth (@{$testoptions->{random_write_date}}) {
- my $meth_t = $meth;
- $meth_t =~ s/epoch/time/;
- my $meth_d = $meth;
- $meth_d =~ s/epoch/date/;
- $meth_d =~ s/_date//;
- my @tm = gmtime($values{$meth});
- cmp_ok(substr($tag->$meth_t,0,16), 'eq', substr(sprintf('%04d-%02d-%02d %02d:%02d:%02d', $tm[5]+1900, $tm[4]+1, $tm[3], $tm[2], $tm[1], $tm[0]),0,16), 'auto read from '. $meth_t);
- cmp_ok($tag->$meth_d, 'eq', sprintf('%04d-%02d-%02d', $tm[5]+1900, $tm[4]+1, $tm[3]), 'auto read from '. $meth_d);
- $c+=2;
- }
- return $c;
-}
-
-sub read_picture {
- my $tag = shift;
- my $testoptions = shift;
- my $c = 0;
- return 0 if (! $testoptions->{picture_read});
- ok($tag->picture_exists, 'Picture Exists');
- $c+=2;
- if ($testoptions->{picture_sha1}) {
- my $sha1 = Digest::SHA1->new();
- $sha1->add($tag->picture->{_Data});
- cmp_ok($sha1->hexdigest, 'eq', $testoptions->{picture_sha1}, 'digest of picture matches during read');
- $c++;
- }
-}
-
-sub write_picture {
- my $tag = shift;
- my $testoptions = shift;
- my $c = 0;
- return 0 if (! $testoptions->{picture_file});
- ok($tag->picture_filename($testoptions->{picture_file}), 'add picture');
- ok($tag->picture_exists, 'Picture Exists after write');
- $c+=2;
- if ($testoptions->{picture_sha1}) {
- my $sha1 = Digest::SHA1->new();
- $sha1->add($tag->picture->{_Data});
- cmp_ok($sha1->hexdigest, 'eq', $testoptions->{picture_sha1}, 'digest of picture matches after write');
- $c++;
- }
- return $c;
-}
-
-sub filetest {
- my $file = shift;
- my $filetest = shift;
- my $tagoptions = shift;
- my $testoptions = shift;
- my $c = 0;
-
- SKIP: {
- skip ("File: $file does not exists", $testoptions->{count} || 1) if (! -f $file);
- return unless (-f $file);
- copy($file, $filetest);
-
- my $tag = create_tag($filetest,$tagoptions,$testoptions);
- $c+=3;
- die unless $tag;
-
-
- read_tag($tag,$testoptions);
- if ($testoptions->{picture_in}) {
- ok($tag->picture_exists, 'Picture should exists');
- }
- else {
- ok(! $tag->picture_exists, 'Picture should not exist');
- }
- $c++;
-
- if ($testoptions->{skip_write_tests}) {
- $tag->close();
- $tag = undef;
- }
- else {
- $c+= random_write($tag,$testoptions);
- $c+= random_write_num($tag,$testoptions);
- $c+= random_write_date($tag,$testoptions);
- $c+= write_picture($tag,$testoptions);
- ok($tag->set_tag, 'set_tag: ' . $filetest);
- $c++;
- $tag->close();
- $tag = undef;
- my $tag2 = create_tag($filetest,$tagoptions,$testoptions);
- $c+=3;
- $c+= random_read($tag2,$testoptions);
- $c+= random_read_num($tag2,$testoptions);
- $c+= random_read_date($tag2,$testoptions);
- $c+= read_picture($tag2,$testoptions);
- $tag2->close();
- }
- unlink($filetest);
- return $c;
- }
-}
-
-
-1;
-
|
riemann42/Music-Tag-OGG
|
5696e83d2700f02ea5992410eb8dcaa97c659520
|
Cleanup before merge
|
diff --git a/CHANGES b/CHANGES
index 833ed9a..865d4a5 100644
--- a/CHANGES
+++ b/CHANGES
@@ -1,28 +1,33 @@
CHANGES
- Release Name: 0.40_01
+ Release Name: 0.4101
+ * Updated to use new method system
+
+ * POD Changes
+
+ Release Name: 0.4001 2010-07-27
* Started using git and github
* Normalized version across plugins.
* POD Changes
* Revised Testing
- Release Name: 0.35
+ Release Name: 0.35 2010-05-14
* Updated to work with Ogg::Vorbis::Header::PurePerl to fix bug 43789
- Release Name: 0.33
+ Release Name: 0.33 2008-02-23
* Removed write from test
Release Name: 0.32
* Added Music::Tag prereq (was incorrect!)
Release Name: 0.31
* pod improvements
- Release Name: 0.30
+ Release Name: 0.30
* Kwalitee and pod improvements
- Release Name: 0.29
+ Release Name: 0.29 2008-02-11
* Fixed typo in synopsis (OGG was ogg)
* Now requires Music::Tag .29
diff --git a/Makefile.PL b/Makefile.PL
index 3c3377a..c9a5586 100644
--- a/Makefile.PL
+++ b/Makefile.PL
@@ -1,10 +1,10 @@
use ExtUtils::MakeMaker;
WriteMakefile( NAME => "Music::Tag::OGG",
VERSION_FROM => "lib/Music/Tag/OGG.pm",
ABSTRACT_FROM => "lib/Music/Tag/OGG.pm",
AUTHOR => 'Edward Allen (ealleniii _at_ cpan _dot_ org)',
LICENSE => 'perl',
- PREREQ_PM => { 'Music::Tag' => 0.40_01,
+ PREREQ_PM => { 'Music::Tag' => 0.4101,
'Ogg::Vorbis::Header::PurePerl' => 1,
},
);
diff --git a/lib/Music/Tag/OGG.pm b/lib/Music/Tag/OGG.pm
index cb0788b..51086b1 100644
--- a/lib/Music/Tag/OGG.pm
+++ b/lib/Music/Tag/OGG.pm
@@ -1,258 +1,252 @@
package Music::Tag::OGG;
-use strict;
-use warnings;
-our $VERSION = .40_02;
-
-# Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
+use strict; use warnings; use utf8;
+our $VERSION = '.4101';
+# Copyright © 2007,2008,2010 Edward Allen III. Some rights reserved.
#
# You may distribute under the terms of either the GNU General Public
# License or the Artistic License, as specified in the README file.
-#
use Ogg::Vorbis::Header::PurePerl;
use base qw(Music::Tag::Generic);
-
our %tagmap = (
TITLE => 'title',
TRACKNUMBER => 'track',
TRACKTOTAL => 'totaltracks',
ARTIST => 'artist',
ALBUM => 'album',
COMMENT => 'comment',
DATE => 'releasedate',
GENRE => 'genre',
DISC => 'disc',
LABEL => 'label',
ASIN => 'asin',
MUSICBRAINZ_ARTISTID => 'mb_artistid',
MUSICBRAINZ_ALBUMID => 'mb_albumid',
MUSICBRAINZ_TRACKID => 'mb_trackid',
MUSICBRAINZ_SORTNAME => 'sortname',
RELEASECOUNTRY => 'countrycode',
MUSICIP_PUID => 'mip_puid',
MUSICBRAINZ_ALBUMARTIST => 'albumartist'
);
sub default_options {
{ vorbiscomment => "vorbiscomment" }
}
sub set_values {
return ( values %tagmap, 'picture');
}
sub saved_values {
return ( values %tagmap, 'picture');
}
sub ogg {
my $self = shift;
unless ((exists $self->{_OGG}) && (ref $self->{_OGG})) {
- if ($self->info->filename) {
- $self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->filename);
+ if ($self->info->get_data('filename')) {
+ $self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->get_data('filename'));
#$self->{_OGG}->load();
}
else {
return undef;
}
}
return $self->{_OGG};
}
sub get_tag {
my $self = shift;
if ( $self->ogg ) {
foreach ($self->ogg->comment_tags) {
my $comment = uc($_);
if (exists $tagmap{$comment}) {
my $method = $tagmap{$comment};
- $self->info->$method($self->ogg->comment($comment));
+ $self->info->set_data($method, $self->ogg->comment($comment));
}
else {
$self->status("Unknown comment: $comment");
}
}
- $self->info->secs( $self->ogg->info->{"length"});
- $self->info->bitrate( $self->ogg->info->{"bitrate_nominal"});
- $self->info->frequency( $self->ogg->info->{"rate"});
+ $self->info->set_data('secs',$self->ogg->info->{"length"});
+ $self->info->set_data('bitrate',$self->ogg->info->{"bitrate_nominal"});
+ $self->info->set_data('frequency',$self->ogg->info->{"rate"});
}
else {
print STDERR "No ogg object created\n";
}
return $self;
}
sub set_tag {
my $self = shift;
- unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->filename . "\"")) {
+ unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->get_data('filename') . "\"")) {
$self->status("Failed to open ", $self->options->{vorbiscomment}, ". Not writing tag.\n");
return undef;
}
while (my ($t, $m) = each %tagmap) {
- if (defined $self->info->$m) {
- print COMMENT $t, "=", $self->info->$m, "\n";
+ if (defined $self->info->get_data($m)) {
+ print COMMENT $t, "=", $self->info->get_data($m), "\n";
}
}
close (COMMENT);
return $self;
}
sub close {
my $self = shift;
$self->{_OGG} = undef;
}
1;
__END__
=pod
=head1 NAME
Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
=head1 SYNOPSIS
    use Music::Tag;
my $filename = "/var/lib/music/artist/album/track.ogg";
my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
$info->get_info();
print "Artist is ", $info->artist;
=head1 DESCRIPTION
Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
Music::Tag::OGG objects should be created by Music::Tag.
=head1 REQUIRED DATA VALUES
No values are required (except filename, which is usually provided on object creation).
=head1 SET DATA VALUES
=over 4
=item B<title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label>
Uses standard tags for these
=item B<asin>
Uses custom tag "ASIN" for this
=item B<mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist>
Uses MusicBrainz recommended tags for these.
=back
=head1 METHODS
=over 4
=item B<default_options()>
Returns the default options for the plugin.
=item B<set_tag()>
Save info from object back to ogg vorbis file using L<vorbiscomment>
=item B<get_tag()>
Get info for object from ogg vorbis header using Ogg::Vorbis::Header::PurePerl
=item B<set_values()>
A list of values that can be set by this module.
=item B<saved_values()>
A list of values that can be saved by this module.
=item B<close()>
Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
=item B<ogg()>
Returns the Ogg::Vorbis::Header::PurePerl object.
=back
=head1 OPTIONS
=over 4
=item B<vorbiscomment>
The full path to the vorbiscomment program. Defaults to just "vorbiscomment", which assumes that vorbiscomment is in your path.
=back
=head1 BUGS
No known additional bugs provided by this Module
+Please use github for bug tracking: L<http://github.com/riemann42/Music-Tag-OGG/issues|http://github.com/riemann42/Music-Tag-OGG/issues>.
+
=head1 SEE ALSO
L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<http://www.xiph.org/>
=head1 SOURCE
Source is available at github: L<http://github.com/riemann42/Music-Tag-OGG|http://github.com/riemann42/Music-Tag-OGG>.
-=head1 BUG TRACKING
-
-Please use github for bug tracking: L<http://github.com/riemann42/Music-Tag-OGG/issues|http://github.com/riemann42/Music-Tag-OGG/issues>.
-
=head1 AUTHOR
Edward Allen III <ealleniii _at_ cpan _dot_ org>
=head1 COPYRIGHT
-Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
+Copyright © 2007,2008,2010 Edward Allen III. Some rights reserved.
=head1 LICENSE
This program is free software; you can redistribute it and/or modify
it under the same terms as Perl itself, either:
a) the GNU General Public License as published by the Free
Software Foundation; either version 1, or (at your option) any
later version, or
b) the "Artistic License" which comes with Perl.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See either
the GNU General Public License or the Artistic License for more details.
You should have received a copy of the Artistic License with this
Kit, in the file named "Artistic". If not, I'll be glad to provide one.
You should also have received a copy of the GNU General Public License
along with this program in the file named "Copying". If not, write to the
Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
Boston, MA 02110-1301, USA or visit their web page on the Internet at
http://www.gnu.org/copyleft/gpl.html.
# vim: tabstop=4
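The OGG.pm hunk above is the substance of the "new method system" entry in CHANGES: per-field accessors on the Music::Tag info object (e.g. $info->artist(...)) give way to the generic get_data/set_data pair keyed by field name. A rough before/after sketch, assuming a Music::Tag 0.4101-style object:

    use Music::Tag;

    my $info = Music::Tag->new('track.ogg', { quiet => 1 }, 'OGG');
    $info->get_info();

    # Old style (0.40 and earlier): one named accessor per data value.
    # my $artist = $info->artist;

    # New style, as in the patch above: generic accessors keyed by name.
    my $artist = $info->get_data('artist');
    $info->set_data('comment', 'round-trip test');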
|
riemann42/Music-Tag-OGG
|
128acd9e18c49dfb432b833d5e8144c45d98b44f
|
Adding MANIFEST.SKIP
|
diff --git a/MANIFEST.SKIP b/MANIFEST.SKIP
new file mode 100644
index 0000000..e0e49dd
--- /dev/null
+++ b/MANIFEST.SKIP
@@ -0,0 +1,9 @@
+^\.git\/
+Makefile$
+^blib
+^pm_to_blib
+^.*.bak
+^.*.old
+^cover_db
+^.*\.log
+^.*\.swp$
|
riemann42/Music-Tag-OGG
|
777d79c93a49e5165fe32291195da0865d365793
|
Moving to Music::Tag::Test for testing
|
diff --git a/MANIFEST b/MANIFEST
index 8922fb4..79abd47 100644
--- a/MANIFEST
+++ b/MANIFEST
@@ -1,15 +1,14 @@
CHANGES
-lib/Music/Tag/.OGG.pm.swp
lib/Music/Tag/OGG.pm
Makefile.PL
MANIFEST This list of files
META.yml Module meta-data (added by MakeMaker)
README
t/1-ogg.t
t/97-pod.t
t/98-pod-coverage.t
t/elise.ogg
t/MusicTagTest.pm
t/options.conf
Copying
Artistic
diff --git a/lib/Music/Tag/OGG.pm b/lib/Music/Tag/OGG.pm
index e1c4cbc..cb0788b 100644
--- a/lib/Music/Tag/OGG.pm
+++ b/lib/Music/Tag/OGG.pm
@@ -1,258 +1,258 @@
package Music::Tag::OGG;
use strict;
use warnings;
-our $VERSION = .40_01;
+our $VERSION = .40_02;
# Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
#
# You may distribute under the terms of either the GNU General Public
# License or the Artistic License, as specified in the README file.
#
use Ogg::Vorbis::Header::PurePerl;
use base qw(Music::Tag::Generic);
our %tagmap = (
TITLE => 'title',
TRACKNUMBER => 'track',
TRACKTOTAL => 'totaltracks',
ARTIST => 'artist',
ALBUM => 'album',
COMMENT => 'comment',
DATE => 'releasedate',
GENRE => 'genre',
DISC => 'disc',
LABEL => 'label',
ASIN => 'asin',
MUSICBRAINZ_ARTISTID => 'mb_artistid',
MUSICBRAINZ_ALBUMID => 'mb_albumid',
MUSICBRAINZ_TRACKID => 'mb_trackid',
MUSICBRAINZ_SORTNAME => 'sortname',
RELEASECOUNTRY => 'countrycode',
MUSICIP_PUID => 'mip_puid',
MUSICBRAINZ_ALBUMARTIST => 'albumartist'
);
sub default_options {
{ vorbiscomment => "vorbiscomment" }
}
sub set_values {
return ( values %tagmap, 'picture');
}
sub saved_values {
return ( values %tagmap, 'picture');
}
sub ogg {
my $self = shift;
unless ((exists $self->{_OGG}) && (ref $self->{_OGG})) {
if ($self->info->filename) {
$self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->filename);
#$self->{_OGG}->load();
}
else {
return undef;
}
}
return $self->{_OGG};
}
sub get_tag {
my $self = shift;
if ( $self->ogg ) {
foreach ($self->ogg->comment_tags) {
my $comment = uc($_);
if (exists $tagmap{$comment}) {
my $method = $tagmap{$comment};
$self->info->$method($self->ogg->comment($comment));
}
else {
$self->status("Unknown comment: $comment");
}
}
$self->info->secs( $self->ogg->info->{"length"});
$self->info->bitrate( $self->ogg->info->{"bitrate_nominal"});
$self->info->frequency( $self->ogg->info->{"rate"});
}
else {
print STDERR "No ogg object created\n";
}
return $self;
}
sub set_tag {
my $self = shift;
unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->filename . "\"")) {
$self->status("Failed to open ", $self->options->{vorbiscomment}, ". Not writing tag.\n");
return undef;
}
while (my ($t, $m) = each %tagmap) {
if (defined $self->info->$m) {
print COMMENT $t, "=", $self->info->$m, "\n";
}
}
close (COMMENT);
return $self;
}
sub close {
my $self = shift;
$self->{_OGG} = undef;
}
1;
__END__
=pod
=head1 NAME
Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
=head1 SYNOPSIS
    use Music::Tag;
my $filename = "/var/lib/music/artist/album/track.ogg";
my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
$info->get_info();
print "Artist is ", $info->artist;
=head1 DESCRIPTION
Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
Music::Tag::OGG objects should be created by Music::Tag.
=head1 REQUIRED DATA VALUES
No values are required (except filename, which is usually provided on object creation).
=head1 SET DATA VALUES
=over 4
=item B<title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label>
Uses standard tags for these
=item B<asin>
Uses custom tag "ASIN" for this
=item B<mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist>
Uses MusicBrainz recommended tags for these.
=back
=head1 METHODS
=over 4
=item B<default_options()>
Returns the default options for the plugin.
=item B<set_tag()>
Save info from object back to ogg vorbis file using L<vorbiscomment>
=item B<get_tag()>
Get info for object from ogg vorbis header using Ogg::Vorbis::Header::PurePerl
=item B<set_values()>
A list of values that can be set by this module.
=item B<saved_values()>
A list of values that can be saved by this module.
=item B<close()>
Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
=item B<ogg()>
Returns the Ogg::Vorbis::Header::PurePerl object.
=back
=head1 OPTIONS
=over 4
=item B<vorbiscomment>
The full path to the vorbiscomment program. Defaults to just "vorbiscomment", which assumes that vorbiscomment is in your path.
=back
=head1 BUGS
No known additional bugs provided by this Module
=head1 SEE ALSO
L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<http://www.xiph.org/>
=head1 SOURCE
Source is available at github: L<http://github.com/riemann42/Music-Tag-OGG|http://github.com/riemann42/Music-Tag-OGG>.
=head1 BUG TRACKING
Please use github for bug tracking: L<http://github.com/riemann42/Music-Tag-OGG/issues|http://github.com/riemann42/Music-Tag-OGG/issues>.
=head1 AUTHOR
Edward Allen III <ealleniii _at_ cpan _dot_ org>
=head1 COPYRIGHT
Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
=head1 LICENSE
This program is free software; you can redistribute it and/or modify
it under the same terms as Perl itself, either:
a) the GNU General Public License as published by the Free
Software Foundation; either version 1, or (at your option) any
later version, or
b) the "Artistic License" which comes with Perl.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See either
the GNU General Public License or the Artistic License for more details.
You should have received a copy of the Artistic License with this
Kit, in the file named "Artistic". If not, I'll be glad to provide one.
You should also have received a copy of the GNU General Public License
along with this program in the file named "Copying". If not, write to the
Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
Boston, MA 02110-1301, USA or visit their web page on the Internet at
http://www.gnu.org/copyleft/gpl.html.
# vim: tabstop=4
diff --git a/t/1-ogg.t b/t/1-ogg.t
index e6bd9d4..61c3a2b 100644
--- a/t/1-ogg.t
+++ b/t/1-ogg.t
@@ -1,25 +1,24 @@
#!/usr/bin/perl -w
use strict;
use Test::More tests => 9;
-use lib 't';
-use MusicTagTest;
+use Music::Tag::Test;
use 5.006;
BEGIN { use_ok('Music::Tag') }
ok(Music::Tag->LoadOptions("t/options.conf"), "Loading options file.");
my $c = filetest("t/elise.ogg", "t/elisetest.ogg", {},{
values_in => {
        artist => "Beethoven",
album => "GPL",
title => "Elise",
},
skip_write_tests => 1,
random_write => [
qw(title artist album genre comment mb_trackid asin
mb_artistid mb_albumid albumartist ) ],
random_write_num => [ qw(track disc) ],
count => 7,
plugin => 'OGG'
});
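The plan arithmetic here is worth spelling out: use_ok and the LoadOptions check supply two tests, and count => 7 tells filetest's SKIP block how many planned tests to mark skipped if the fixture is missing, which matches the seven assertions it otherwise runs (3 from create_tag, 3 value comparisons, 1 picture check); together that gives tests => 9. The same mechanism in isolation, with the numbers as illustrative assumptions:

    use Test::More tests => 9;     # 2 direct checks + 7 inside the harness

    SKIP: {
        # skip() must be told how many planned tests it stands in for;
        # that is the number count => 7 feeds through to the harness.
        skip 'fixture missing', 7 unless -f 't/elise.ogg';
        # ... the seven harness assertions run here ...
    }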
|
riemann42/Music-Tag-OGG
|
99a77a0f20a15df586e13ccc1b0524d307743ce3
|
Added license files
|
diff --git a/Artistic b/Artistic
new file mode 100644
index 0000000..5f22124
--- /dev/null
+++ b/Artistic
@@ -0,0 +1,131 @@
+
+
+
+
+ The "Artistic License"
+
+ Preamble
+
+The intent of this document is to state the conditions under which a
+Package may be copied, such that the Copyright Holder maintains some
+semblance of artistic control over the development of the package,
+while giving the users of the package the right to use and distribute
+the Package in a more-or-less customary fashion, plus the right to make
+reasonable modifications.
+
+Definitions:
+
+ "Package" refers to the collection of files distributed by the
+ Copyright Holder, and derivatives of that collection of files
+ created through textual modification.
+
+ "Standard Version" refers to such a Package if it has not been
+ modified, or has been modified in accordance with the wishes
+ of the Copyright Holder as specified below.
+
+ "Copyright Holder" is whoever is named in the copyright or
+ copyrights for the package.
+
+ "You" is you, if you're thinking about copying or distributing
+ this Package.
+
+ "Reasonable copying fee" is whatever you can justify on the
+ basis of media cost, duplication charges, time of people involved,
+ and so on. (You will not be required to justify it to the
+ Copyright Holder, but only to the computing community at large
+ as a market that must bear the fee.)
+
+ "Freely Available" means that no fee is charged for the item
+ itself, though there may be fees involved in handling the item.
+ It also means that recipients of the item may redistribute it
+ under the same conditions they received it.
+
+1. You may make and give away verbatim copies of the source form of the
+Standard Version of this Package without restriction, provided that you
+duplicate all of the original copyright notices and associated disclaimers.
+
+2. You may apply bug fixes, portability fixes and other modifications
+derived from the Public Domain or from the Copyright Holder. A Package
+modified in such a way shall still be considered the Standard Version.
+
+3. You may otherwise modify your copy of this Package in any way, provided
+that you insert a prominent notice in each changed file stating how and
+when you changed that file, and provided that you do at least ONE of the
+following:
+
+ a) place your modifications in the Public Domain or otherwise make them
+ Freely Available, such as by posting said modifications to Usenet or
+ an equivalent medium, or placing the modifications on a major archive
+ site such as uunet.uu.net, or by allowing the Copyright Holder to include
+ your modifications in the Standard Version of the Package.
+
+ b) use the modified Package only within your corporation or organization.
+
+ c) rename any non-standard executables so the names do not conflict
+ with standard executables, which must also be provided, and provide
+ a separate manual page for each non-standard executable that clearly
+ documents how it differs from the Standard Version.
+
+ d) make other distribution arrangements with the Copyright Holder.
+
+4. You may distribute the programs of this Package in object code or
+executable form, provided that you do at least ONE of the following:
+
+ a) distribute a Standard Version of the executables and library files,
+ together with instructions (in the manual page or equivalent) on where
+ to get the Standard Version.
+
+ b) accompany the distribution with the machine-readable source of
+ the Package with your modifications.
+
+ c) give non-standard executables non-standard names, and clearly
+ document the differences in manual pages (or equivalent), together
+ with instructions on where to get the Standard Version.
+
+ d) make other distribution arrangements with the Copyright Holder.
+
+5. You may charge a reasonable copying fee for any distribution of this
+Package. You may charge any fee you choose for support of this
+Package. You may not charge a fee for this Package itself. However,
+you may distribute this Package in aggregate with other (possibly
+commercial) programs as part of a larger (possibly commercial) software
+distribution provided that you do not advertise this Package as a
+product of your own. You may embed this Package's interpreter within
+an executable of yours (by linking); this shall be construed as a mere
+form of aggregation, provided that the complete Standard Version of the
+interpreter is so embedded.
+
+6. The scripts and library files supplied as input to or produced as
+output from the programs of this Package do not automatically fall
+under the copyright of this Package, but belong to whoever generated
+them, and may be sold commercially, and may be aggregated with this
+Package. If such scripts or library files are aggregated with this
+Package via the so-called "undump" or "unexec" methods of producing a
+binary executable image, then distribution of such an image shall
+neither be construed as a distribution of this Package nor shall it
+fall under the restrictions of Paragraphs 3 and 4, provided that you do
+not represent such an executable image as a Standard Version of this
+Package.
+
+7. C subroutines (or comparably compiled subroutines in other
+languages) supplied by you and linked into this Package in order to
+emulate subroutines and variables of the language defined by this
+Package shall not be considered part of this Package, but are the
+equivalent of input as in Paragraph 6, provided these subroutines do
+not change the language in any way that would cause it to fail the
+regression tests for the language.
+
+8. Aggregation of this Package with a commercial distribution is always
+permitted provided that the use of this Package is embedded; that is,
+when no overt attempt is made to make this Package's interfaces visible
+to the end user of the commercial distribution. Such use shall not be
+construed as a distribution of this Package.
+
+9. The name of the Copyright Holder may not be used to endorse or promote
+products derived from this software without specific prior written permission.
+
+10. THIS PACKAGE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR
+IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+
+ The End
diff --git a/Copying b/Copying
new file mode 100644
index 0000000..43cd72c
--- /dev/null
+++ b/Copying
@@ -0,0 +1,248 @@
+ GNU GENERAL PUBLIC LICENSE
+ Version 1, February 1989
+
+ Copyright (C) 1989 Free Software Foundation, Inc.
+ 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The license agreements of most software companies try to keep users
+at the mercy of those companies. By contrast, our General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users. The
+General Public License applies to the Free Software Foundation's
+software and to any other program whose authors commit to using it.
+You can use it for your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Specifically, the General Public License is designed to make
+sure that you have the freedom to give away or sell copies of free
+software, that you receive source code or can get it if you want it,
+that you can change the software or use pieces of it in new free
+programs; and that you know you can do these things.
+
+ To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+ For example, if you distribute copies of a such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have. You must make sure that they, too, receive or can get the
+source code. And you must tell them their rights.
+
+ We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+ Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software. If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ GNU GENERAL PUBLIC LICENSE
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 0. This License Agreement applies to any program or other work which
+contains a notice placed by the copyright holder saying it may be
+distributed under the terms of this General Public License. The
+"Program", below, refers to any such program or work, and a "work based
+on the Program" means either the Program or any work containing the
+Program or a portion of it, either verbatim or with modifications. Each
+licensee is addressed as "you".
+
+ 1. You may copy and distribute verbatim copies of the Program's source
+code as you receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice and
+disclaimer of warranty; keep intact all the notices that refer to this
+General Public License and to the absence of any warranty; and give any
+other recipients of the Program a copy of this General Public License
+along with the Program. You may charge a fee for the physical act of
+transferring a copy.
+
+ 2. You may modify your copy or copies of the Program or any portion of
+it, and copy and distribute such modifications under the terms of Paragraph
+1 above, provided that you also do the following:
+
+ a) cause the modified files to carry prominent notices stating that
+ you changed the files and the date of any change; and
+
+ b) cause the whole of any work that you distribute or publish, that
+ in whole or in part contains the Program or any part thereof, either
+ with or without modifications, to be licensed at no charge to all
+ third parties under the terms of this General Public License (except
+ that you may choose to grant warranty protection to some or all
+ third parties, at your option).
+
+ c) If the modified program normally reads commands interactively when
+ run, you must cause it, when started running for such interactive use
+ in the simplest and most usual way, to print or display an
+ announcement including an appropriate copyright notice and a notice
+ that there is no warranty (or else, saying that you provide a
+ warranty) and that users may redistribute the program under these
+ conditions, and telling the user how to view a copy of this General
+ Public License.
+
+ d) You may charge a fee for the physical act of transferring a
+ copy, and you may at your option offer warranty protection in
+ exchange for a fee.
+
+Mere aggregation of another independent work with the Program (or its
+derivative) on a volume of a storage or distribution medium does not bring
+the other work under the scope of these terms.
+
+ 3. You may copy and distribute the Program (or a portion or derivative of
+it, under Paragraph 2) in object code or executable form under the terms of
+Paragraphs 1 and 2 above provided that you also do one of the following:
+
+ a) accompany it with the complete corresponding machine-readable
+ source code, which must be distributed under the terms of
+ Paragraphs 1 and 2 above; or,
+
+ b) accompany it with a written offer, valid for at least three
+ years, to give any third party free (except for a nominal charge
+ for the cost of distribution) a complete machine-readable copy of the
+ corresponding source code, to be distributed under the terms of
+ Paragraphs 1 and 2 above; or,
+
+ c) accompany it with the information you received as to where the
+ corresponding source code may be obtained. (This alternative is
+ allowed only for noncommercial distribution and only if you
+ received the program in object code or executable form alone.)
+
+Source code for a work means the preferred form of the work for making
+modifications to it. For an executable file, complete source code means
+all the source code for all modules it contains; but, as a special
+exception, it need not include source code for modules which are standard
+libraries that accompany the operating system on which the executable
+file runs, or for standard header files or definitions files that
+accompany that operating system.
+
+ 4. You may not copy, modify, sublicense, distribute or transfer the
+Program except as expressly provided under this General Public License.
+Any attempt otherwise to copy, modify, sublicense, distribute or transfer
+the Program is void, and will automatically terminate your rights to use
+the Program under this License. However, parties who have received
+copies, or rights to use copies, from you under this General Public
+License will not have their licenses terminated so long as such parties
+remain in full compliance.
+
+ 5. By copying, distributing or modifying the Program (or any work based
+on the Program) you indicate your acceptance of this license to do so,
+and all its terms and conditions.
+
+ 6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the original
+licensor to copy, distribute or modify the Program subject to these
+terms and conditions. You may not impose any further restrictions on the
+recipients' exercise of the rights granted herein.
+
+ 7. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Program
+specifies a version number of the license which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation. If the Program does not specify a version number of
+the license, you may choose any version ever published by the Free Software
+Foundation.
+
+ 8. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission. For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this. Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+ NO WARRANTY
+
+ 9. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+ 10. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+
+ Appendix: How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to humanity, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these
+terms.
+
+ To do so, attach the following notices to the program. It is safest to
+attach them to the start of each source file to most effectively convey
+the exclusion of warranty; and each file should have at least the
+"copyright" line and a pointer to where the full notice is found.
+
+ <one line to give the program's name and a brief idea of what it does.>
+ Copyright (C) 19yy <name of author>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 1, or (at your option)
+ any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software Foundation,
+ Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+ Gnomovision version 69, Copyright (C) 19xx name of author
+ Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the
+appropriate parts of the General Public License. Of course, the
+commands you use may be called something other than `show w' and `show
+c'; they could even be mouse-clicks or menu items--whatever suits your
+program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary. Here a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright interest in the
+ program `Gnomovision' (a program to direct compilers to make passes
+ at assemblers) written by James Hacker.
+
+ <signature of Ty Coon>, 1 April 1989
+ Ty Coon, President of Vice
+
+That's all there is to it!
diff --git a/MANIFEST b/MANIFEST
index 4c2e85a..8922fb4 100644
--- a/MANIFEST
+++ b/MANIFEST
@@ -1,13 +1,15 @@
CHANGES
lib/Music/Tag/.OGG.pm.swp
lib/Music/Tag/OGG.pm
Makefile.PL
MANIFEST This list of files
META.yml Module meta-data (added by MakeMaker)
README
t/1-ogg.t
t/97-pod.t
t/98-pod-coverage.t
t/elise.ogg
t/MusicTagTest.pm
t/options.conf
+Copying
+Artistic
|
riemann42/Music-Tag-OGG
|
58afc94cf73490b7ddbbf39f68cc0cd633983dc5
|
Cleanup before merge
|
diff --git a/CHANGES b/CHANGES
new file mode 100644
index 0000000..833ed9a
--- /dev/null
+++ b/CHANGES
@@ -0,0 +1,28 @@
+CHANGES
+ Release Name: 0.40_01
+ * Started using git and github
+
+ * Normalized version across plugins.
+
+ * POD Changes
+
+ * Revised Testing
+
+ Release Name: 0.35
+ * Updated to work with Ogg::Vorbis::Header::PurePerl to fix bug 43789
+
+ Release Name: 0.33
+ * Removed write from test
+
+ Release Name: 0.32
+ * Added Music::Tag prereq (was incorrect!)
+
+ Release Name: 0.31
+ * pod improvements
+
+ Release Name: 0.30
+ * Kwalitee and pod improvements
+
+ Release Name: 0.29
+ * Fixed typo in synopsis (OGG was ogg)
+ * Now requires Music::Tag .29
|
riemann42/Music-Tag-OGG
|
104e76c38bbd206fa971ab6b0a67a4340167b62f
|
Documented changes
|
diff --git a/Changes b/Changes
deleted file mode 100644
index f0e7d91..0000000
--- a/Changes
+++ /dev/null
@@ -1,65 +0,0 @@
-Release Name: 0.35
-===========================
-* Updated to work with Ogg::Vorbis::Header::PurePerl to fix bug 43789
-
-Release Name: 0.33
-===========================
-* Removed write from test
-
-Release Name: 0.32
-===========================
-* Added Music::Tag prereq (was incorrect!)
-
-Release Name: 0.31
-===========================
-* pod improvements
-
-Release Name: 0.30
-===========================
-* Kwalitee and pod improvements
-
-Release Name: 0.29
-===========================
-* Fixed typo in synopsis (OGG was ogg)
-* Now requires Music::Tag .29
-
-Release Name: 0.28
-===========================
-* Split off from Music::Tag distribution
-
-Release Name: 0.27
-============================
-* More documentation and tested POD.
-* datamethods method now can be used to add new datamethods
-* Added test for MusicBrainz and Amazon plugins
-* Revised releasedate and recorddate internal storage to store as releasetime
- and recordtime -- with full timestamps.
-* Added releasetime, recordtime, releaseepoch, and recordepoch datamethods.
-* Support for TIME ID3v2 tag.
-* After much thought, replaced Ogg::Vorbis::Header with
- Ogg::Vorbis::Header::PurePerl and added vorbiscomment to write tags.
-* Revised OGG and FLAC plugins to clean up code (much slicker now).
-
-Release Name: 0.26
-============================
-* Removed several prerequisites that weren't used
-* Fixed error in README about prerequisite
-
-Release Name: 0.25
-============================
-* Support many more tags for flac, ogg, and m4a
-* Removed autotag safetag quicktag musictag musicsort musicinfo scripts.
- All is done by musictag now.
-* Added tests for some plugins. More to do!
-* Bug Fixes
-* Documentation improvements
-* Added preset option for musictag
-
-Release Name: 0.24
-============================
-* Bug Fixes
-* Revised MP3 Tags to read Picard tags
-
-Release Name: 0.23
-============================
-* Initial Public Release
diff --git a/MANIFEST b/MANIFEST
index 480bc7f..4c2e85a 100644
--- a/MANIFEST
+++ b/MANIFEST
@@ -1,15 +1,13 @@
-Changes
+CHANGES
lib/Music/Tag/.OGG.pm.swp
lib/Music/Tag/OGG.pm
Makefile.PL
MANIFEST This list of files
META.yml Module meta-data (added by MakeMaker)
README
t/1-ogg.t
-t/2-pod.t
-t/3-pod-coverage.t
t/97-pod.t
t/98-pod-coverage.t
t/elise.ogg
t/MusicTagTest.pm
t/options.conf
diff --git a/Makefile.PL b/Makefile.PL
index e6748f1..3c3377a 100644
--- a/Makefile.PL
+++ b/Makefile.PL
@@ -1,10 +1,10 @@
use ExtUtils::MakeMaker;
WriteMakefile( NAME => "Music::Tag::OGG",
VERSION_FROM => "lib/Music/Tag/OGG.pm",
ABSTRACT_FROM => "lib/Music/Tag/OGG.pm",
AUTHOR => 'Edward Allen (ealleniii _at_ cpan _dot_ org)',
LICENSE => 'perl',
- PREREQ_PM => { 'Music::Tag' => 0.29,
+ PREREQ_PM => { 'Music::Tag' => 0.40_01,
'Ogg::Vorbis::Header::PurePerl' => 1,
},
);
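The prerequisite bump to 0.40_01 leans on Perl's underscore convention: the underscore marks a developer/trial release to CPAN tooling, but in a bare numeric literal Perl simply ignores it, so 0.40_01 is 0.4001 at runtime. That is why the usual idiom (a sketch, not taken from this patch) quotes the version and then evals it:

    # String form is what PAUSE/CPAN indexers see; the eval'd numeric
    # form (0.4001) is what runtime version comparisons use.
    our $VERSION = '0.40_01';
    $VERSION = eval $VERSION;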
|
riemann42/Music-Tag-OGG
|
5666cb397c5de5b34cd5e3490b631f4dcefebdb2
|
Added capability methods
|
diff --git a/lib/Music/Tag/OGG.pm b/lib/Music/Tag/OGG.pm
index 89e69b9..e1c4cbc 100644
--- a/lib/Music/Tag/OGG.pm
+++ b/lib/Music/Tag/OGG.pm
@@ -1,243 +1,258 @@
package Music::Tag::OGG;
use strict;
use warnings;
our $VERSION = .40_01;
# Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
#
# You may distribute under the terms of either the GNU General Public
# License or the Artistic License, as specified in the README file.
#
use Ogg::Vorbis::Header::PurePerl;
use base qw(Music::Tag::Generic);
our %tagmap = (
TITLE => 'title',
TRACKNUMBER => 'track',
TRACKTOTAL => 'totaltracks',
ARTIST => 'artist',
ALBUM => 'album',
COMMENT => 'comment',
DATE => 'releasedate',
GENRE => 'genre',
DISC => 'disc',
LABEL => 'label',
ASIN => 'asin',
MUSICBRAINZ_ARTISTID => 'mb_artistid',
MUSICBRAINZ_ALBUMID => 'mb_albumid',
MUSICBRAINZ_TRACKID => 'mb_trackid',
MUSICBRAINZ_SORTNAME => 'sortname',
RELEASECOUNTRY => 'countrycode',
MUSICIP_PUID => 'mip_puid',
MUSICBRAINZ_ALBUMARTIST => 'albumartist'
);
sub default_options {
{ vorbiscomment => "vorbiscomment" }
}
+sub set_values {
+ return ( values %tagmap, 'picture');
+}
+sub saved_values {
+ return ( values %tagmap, 'picture');
+}
+
sub ogg {
my $self = shift;
unless ((exists $self->{_OGG}) && (ref $self->{_OGG})) {
if ($self->info->filename) {
$self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->filename);
#$self->{_OGG}->load();
}
else {
return undef;
}
}
return $self->{_OGG};
}
sub get_tag {
my $self = shift;
if ( $self->ogg ) {
foreach ($self->ogg->comment_tags) {
my $comment = uc($_);
if (exists $tagmap{$comment}) {
my $method = $tagmap{$comment};
$self->info->$method($self->ogg->comment($comment));
}
else {
$self->status("Unknown comment: $comment");
}
}
$self->info->secs( $self->ogg->info->{"length"});
$self->info->bitrate( $self->ogg->info->{"bitrate_nominal"});
$self->info->frequency( $self->ogg->info->{"rate"});
}
else {
print STDERR "No ogg object created\n";
}
return $self;
}
sub set_tag {
my $self = shift;
unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->filename . "\"")) {
$self->status("Failed to open ", $self->options->{vorbiscomment}, ". Not writing tag.\n");
return undef;
}
while (my ($t, $m) = each %tagmap) {
if (defined $self->info->$m) {
print COMMENT $t, "=", $self->info->$m, "\n";
}
}
close (COMMENT);
return $self;
}
sub close {
my $self = shift;
$self->{_OGG} = undef;
}
1;
__END__
=pod
=head1 NAME
Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
=head1 SYNOPSIS
    use Music::Tag;
my $filename = "/var/lib/music/artist/album/track.ogg";
my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
$info->get_info();
print "Artist is ", $info->artist;
=head1 DESCRIPTION
Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
Music::Tag::OGG objects should be created by Music::Tag.
=head1 REQUIRED DATA VALUES
No values are required (except filename, which is usually provided on object creation).
=head1 SET DATA VALUES
=over 4
=item B<title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label>
Uses standard tags for these
=item B<asin>
Uses custom tag "ASIN" for this
=item B<mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist>
Uses MusicBrainz recommended tags for these.
=back
=head1 METHODS
=over 4
-=item B<default_options>
+=item B<default_options()>
Returns the default options for the plugin.
-=item B<set_tag>
+=item B<set_tag()>
Save info from object back to ogg vorbis file using L<vorbiscomment>
-=item B<get_tag>
+=item B<get_tag()>
Get info for object from ogg vorbis header using Ogg::Vorbis::Header::PurePerl
-=item B<close>
+=item B<set_values()>
+
+A list of values that can be set by this module.
+
+=item B<saved_values()>
+
+A list of values that can be saved by this module.
+
+=item B<close()>
Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
-=item B<ogg>
+=item B<ogg()>
Returns the Ogg::Vorbis::Header::PurePerl object.
=back
=head1 OPTIONS
=over 4
=item B<vorbiscomment>
The full path to the vorbiscomment program. Defaults to just "vorbiscomment", which assumes that vorbiscomment is in your path.
=back
=head1 BUGS
No known additional bugs provided by this Module
=head1 SEE ALSO
L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<http://www.xiph.org/>
=head1 SOURCE
Source is available at github: L<http://github.com/riemann42/Music-Tag-OGG|http://github.com/riemann42/Music-Tag-OGG>.
=head1 BUG TRACKING
Please use github for bug tracking: L<http://github.com/riemann42/Music-Tag-OGG/issues|http://github.com/riemann42/Music-Tag-OGG/issues>.
=head1 AUTHOR
Edward Allen III <ealleniii _at_ cpan _dot_ org>
=head1 COPYRIGHT
Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
=head1 LICENSE
This program is free software; you can redistribute it and/or modify
it under the same terms as Perl itself, either:
a) the GNU General Public License as published by the Free
Software Foundation; either version 1, or (at your option) any
later version, or
b) the "Artistic License" which comes with Perl.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See either
the GNU General Public License or the Artistic License for more details.
You should have received a copy of the Artistic License with this
Kit, in the file named "Artistic". If not, I'll be glad to provide one.
You should also have received a copy of the GNU General Public License
along with this program in the file named "Copying". If not, write to the
Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
Boston, MA 02110-1301, USA or visit their web page on the Internet at
http://www.gnu.org/copyleft/gpl.html.
# vim: tabstop=4
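The capability methods this commit adds are plain list-returning subs: set_values and saved_values enumerate every value in %tagmap plus 'picture', so Music::Tag (or any caller) can ask the plugin what it handles. A small probe, as a sketch:

    use Music::Tag::OGG;

    my %can_set = map { $_ => 1 } Music::Tag::OGG::set_values();
    print "can set artist\n"  if $can_set{artist};
    print "can set picture\n" if $can_set{picture};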
|
riemann42/Music-Tag-OGG
|
5f75c16d40c943e1fc4db06f1e2a0e9349dcd1bd
|
Normalized version to 0.40_01
|
diff --git a/lib/Music/Tag/OGG.pm b/lib/Music/Tag/OGG.pm
index 9bd8372..89e69b9 100644
--- a/lib/Music/Tag/OGG.pm
+++ b/lib/Music/Tag/OGG.pm
@@ -1,243 +1,243 @@
package Music::Tag::OGG;
use strict;
use warnings;
-our $VERSION = 0.35;
+our $VERSION = .40_01;
# Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
#
# You may distribute under the terms of either the GNU General Public
# License or the Artistic License, as specified in the README file.
#
use Ogg::Vorbis::Header::PurePerl;
use base qw(Music::Tag::Generic);
our %tagmap = (
TITLE => 'title',
TRACKNUMBER => 'track',
TRACKTOTAL => 'totaltracks',
ARTIST => 'artist',
ALBUM => 'album',
COMMENT => 'comment',
DATE => 'releasedate',
GENRE => 'genre',
DISC => 'disc',
LABEL => 'label',
ASIN => 'asin',
MUSICBRAINZ_ARTISTID => 'mb_artistid',
MUSICBRAINZ_ALBUMID => 'mb_albumid',
MUSICBRAINZ_TRACKID => 'mb_trackid',
MUSICBRAINZ_SORTNAME => 'sortname',
RELEASECOUNTRY => 'countrycode',
MUSICIP_PUID => 'mip_puid',
MUSICBRAINZ_ALBUMARTIST => 'albumartist'
);
sub default_options {
{ vorbiscomment => "vorbiscomment" }
}
sub ogg {
my $self = shift;
unless ((exists $self->{_OGG}) && (ref $self->{_OGG})) {
if ($self->info->filename) {
$self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->filename);
#$self->{_OGG}->load();
}
else {
return undef;
}
}
return $self->{_OGG};
}
sub get_tag {
my $self = shift;
if ( $self->ogg ) {
foreach ($self->ogg->comment_tags) {
my $comment = uc($_);
if (exists $tagmap{$comment}) {
my $method = $tagmap{$comment};
$self->info->$method($self->ogg->comment($comment));
}
else {
$self->status("Unknown comment: $comment");
}
}
$self->info->secs( $self->ogg->info->{"length"});
$self->info->bitrate( $self->ogg->info->{"bitrate_nominal"});
$self->info->frequency( $self->ogg->info->{"rate"});
}
else {
print STDERR "No ogg object created\n";
}
return $self;
}
sub set_tag {
my $self = shift;
unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->filename . "\"")) {
$self->status("Failed to open ", $self->options->{vorbiscomment}, ". Not writing tag.\n");
return undef;
}
while (my ($t, $m) = each %tagmap) {
if (defined $self->info->$m) {
print COMMENT $t, "=", $self->info->$m, "\n";
}
}
close (COMMENT);
return $self;
}
sub close {
my $self = shift;
$self->{_OGG} = undef;
}
1;
__END__
=pod
=head1 NAME
Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
=head1 SYNOPSIS
    use Music::Tag;
my $filename = "/var/lib/music/artist/album/track.ogg";
my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
$info->get_info();
print "Artist is ", $info->artist;
=head1 DESCRIPTION
Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
Music::Tag::OGG objects should be created by Music::Tag.
=head1 REQUIRED DATA VALUES
No values are required (except filename, which is usually provided on object creation).
=head1 SET DATA VALUES
=over 4
=item B<title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label>
Uses standard tags for these
=item B<asin>
Uses custom tag "ASIN" for this
=item B<mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist>
Uses MusicBrainz recommended tags for these.
=back
=head1 METHODS
=over 4
=item B<default_options>
Returns the default options for the plugin.
=item B<set_tag>
Save info from object back to ogg vorbis file using L<vorbiscomment>
=item B<get_tag>
Get info for object from ogg vorbis header using Ogg::Vorbis::Header::PurePerl
=item B<close>
Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
=item B<ogg>
Returns the Ogg::Vorbis::Header::PurePerl object.
=back
=head1 OPTIONS
=over 4
=item B<vorbiscomment>
The full path to the vorbiscomment program. Defaults to just "vorbiscomment", which assumes that vorbiscomment is in your path.
=back
=head1 BUGS
No known additional bugs provided by this Module
=head1 SEE ALSO
L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<http://www.xiph.org/>
=head1 SOURCE
Source is available at github: L<http://github.com/riemann42/Music-Tag-OGG|http://github.com/riemann42/Music-Tag-OGG>.
=head1 BUG TRACKING
Please use github for bug tracking: L<http://github.com/riemann42/Music-Tag-OGG/issues|http://github.com/riemann42/Music-Tag-OGG/issues>.
=head1 AUTHOR
Edward Allen III <ealleniii _at_ cpan _dot_ org>
=head1 COPYRIGHT
Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
=head1 LICENSE
This program is free software; you can redistribute it and/or modify
it under the same terms as Perl itself, either:
a) the GNU General Public License as published by the Free
Software Foundation; either version 1, or (at your option) any
later version, or
b) the "Artistic License" which comes with Perl.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See either
the GNU General Public License or the Artistic License for more details.
You should have received a copy of the Artistic License with this
Kit, in the file named "Artistic". If not, I'll be glad to provide one.
You should also have received a copy of the GNU General Public License
along with this program in the file named "Copying". If not, write to the
Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
Boston, MA 02110-1301, USA or visit their web page on the Internet at
http://www.gnu.org/copyleft/gpl.html.
# vim: tabstop=4
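Worth noting in set_tag, unchanged by this commit: it writes the Vorbis comments by opening a pipe to the external vorbiscomment binary and printing one NAME=value line per mapped field. A sketch of that '|-' pipe-open idiom with a lexical handle; since the filename is interpolated into a single shell string, shell metacharacters in paths would need escaping (a limitation the module shares):

    use strict;
    use warnings;

    my $file = 'track.ogg';
    open(my $pipe, '|-', qq{vorbiscomment -w "$file"})
        or die "cannot run vorbiscomment: $!";
    print {$pipe} "ARTIST=Beethoven\n";   # one NAME=value pair per line
    print {$pipe} "TITLE=Elise\n";
    close($pipe) or warn "vorbiscomment reported failure\n";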
|
riemann42/Music-Tag-OGG
|
6baf6fc655dba19a7f9c9b52158bc8d70df40ba3
|
Maintenance changes
|
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..8d89b36
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,16 @@
+blib
+Makefile
+*.bak
+.*.swp
+*~
+pm_to_blib
+Build
+_build
+Build.bat
+Makefile.old
+*.tmp
+*.o
+*.tgz
+*.tar.gz
+cover_db
+Debian_CPANTS.txt
diff --git a/t/MusicTagTest.pm b/t/MusicTagTest.pm
new file mode 100644
index 0000000..2c48b6a
--- /dev/null
+++ b/t/MusicTagTest.pm
@@ -0,0 +1,213 @@
+package MusicTagTest;
+use base 'Exporter';
+use vars '@EXPORT';
+use strict;
+use Test::More;
+use Digest::SHA1;
+use File::Copy;
+use 5.006;
+
+@EXPORT = qw(create_tag read_tag random_write random_read random_write_num random_read_num random_write_date random_read_date filetest);
+
+my %values = ();
+
+sub create_tag {
+ my $filetest = shift;
+ my $tagoptions = shift;
+ my $testoptions = shift;
+ return 0 unless (-f $filetest);
+ my $tag = Music::Tag->new($filetest, $tagoptions, $testoptions->{plugin} || 'Auto');
+ ok($tag, 'Object created: ' . $filetest);
+ die unless $tag;
+ ok($tag->get_tag, 'get_tag called: ' . $filetest);
+ ok($tag->isa('Music::Tag'), 'Correct Class: ' . $filetest);
+ return $tag;
+}
+
+sub read_tag {
+ my $tag = shift;
+ my $testoptions = shift;
+ return 0 if (! exists $testoptions->{values_in});
+ my $c=0;
+ foreach my $meth (keys %{$testoptions->{values_in}}) {
+ SKIP: {
+ skip "$meth test skipped", 1 if (! $testoptions->{values_in}->{$meth});
+ $c++;
+ cmp_ok($tag->$meth, 'eq', $testoptions->{values_in}->{$meth});
+ }
+ }
+ return $c;
+}
+
+sub random_write {
+ my $tag = shift;
+ my $testoptions = shift;
+ return 0 if (! exists $testoptions->{random_write});
+ my $c = 0;
+ foreach my $meth (@{$testoptions->{random_write}}) {
+ my $val = "test" . $meth . int(rand(1000));
+ $values{$meth} = $val;
+ ok($tag->$meth($val), 'auto write to ' . $meth);
+ $c++;
+ }
+ return $c;
+}
+
+sub random_write_num {
+ my $tag = shift;
+ my $testoptions = shift;
+ return 0 if (! exists $testoptions->{random_write_num});
+ my $c = 0;
+ foreach my $meth (@{$testoptions->{random_write_num}}) {
+ my $val = int(rand(10))+1;
+ $values{$meth} = $val;
+ ok($tag->$meth($val), 'auto write to ' . $meth);
+ $c++;
+ }
+ return $c;
+}
+
+sub random_write_date {
+ my $tag = shift;
+ my $testoptions = shift;
+ return 0 if (! exists $testoptions->{random_write_date});
+ my $c = 0;
+ foreach my $meth (@{$testoptions->{random_write_date}}) {
+ my $val = int(rand(1_800_000_000));
+ $values{$meth} = $val;
+ ok($tag->$meth($val), 'auto write to '. $meth);
+ $c++;
+ }
+ return $c;
+}
+
+sub random_read {
+ my $tag = shift;
+ my $testoptions = shift;
+ return 0 if (! exists $testoptions->{random_write});
+ my $c = 0;
+ foreach my $meth (@{$testoptions->{random_write}}) {
+ cmp_ok($tag->$meth, 'eq', $values{$meth}, 'auto read of ' . $meth);
+ $c++;
+ }
+ return $c;
+}
+
+sub random_read_num {
+ my $tag = shift;
+ my $testoptions = shift;
+ return 0 if (! exists $testoptions->{random_write_num});
+ my $c = 0;
+ foreach my $meth (@{$testoptions->{random_write_num}}) {
+ cmp_ok($tag->$meth, '==', $values{$meth}, 'auto read of ' . $meth);
+ $c++;
+ }
+ return $c;
+}
+
+sub random_read_date {
+ my $tag = shift;
+ my $testoptions = shift;
+ return 0 if (! exists $testoptions->{random_write_date});
+ my $c = 0;
+ foreach my $meth (@{$testoptions->{random_write_date}}) {
+ my $meth_t = $meth;
+ $meth_t =~ s/epoch/time/;
+ my $meth_d = $meth;
+ $meth_d =~ s/epoch/date/;
+ $meth_d =~ s/_date//;
+ my @tm = gmtime($values{$meth});
+ cmp_ok(substr($tag->$meth_t,0,16), 'eq', substr(sprintf('%04d-%02d-%02d %02d:%02d:%02d', $tm[5]+1900, $tm[4]+1, $tm[3], $tm[2], $tm[1], $tm[0]),0,16), 'auto read from '. $meth_t);
+ cmp_ok($tag->$meth_d, 'eq', sprintf('%04d-%02d-%02d', $tm[5]+1900, $tm[4]+1, $tm[3]), 'auto read from '. $meth_d);
+ $c+=2;
+ }
+ return $c;
+}
+
+sub read_picture {
+ my $tag = shift;
+ my $testoptions = shift;
+ my $c = 0;
+ return 0 if (! $testoptions->{picture_read});
+    ok($tag->picture_exists, 'Picture Exists');
+    $c++;
+ if ($testoptions->{picture_sha1}) {
+ my $sha1 = Digest::SHA1->new();
+ $sha1->add($tag->picture->{_Data});
+ cmp_ok($sha1->hexdigest, 'eq', $testoptions->{picture_sha1}, 'digest of picture matches during read');
+ $c++;
+ }
+    return $c;   # report tests run so filetest() can tally them
+}
+
+sub write_picture {
+ my $tag = shift;
+ my $testoptions = shift;
+ my $c = 0;
+ return 0 if (! $testoptions->{picture_file});
+ ok($tag->picture_filename($testoptions->{picture_file}), 'add picture');
+ ok($tag->picture_exists, 'Picture Exists after write');
+ $c+=2;
+ if ($testoptions->{picture_sha1}) {
+ my $sha1 = Digest::SHA1->new();
+ $sha1->add($tag->picture->{_Data});
+ cmp_ok($sha1->hexdigest, 'eq', $testoptions->{picture_sha1}, 'digest of picture matches after write');
+ $c++;
+ }
+ return $c;
+}
+
+sub filetest {
+ my $file = shift;
+ my $filetest = shift;
+ my $tagoptions = shift;
+ my $testoptions = shift;
+ my $c = 0;
+
+ SKIP: {
+        skip ("File: $file does not exist", $testoptions->{count} || 1) if (! -f $file);
+ return unless (-f $file);
+ copy($file, $filetest);
+
+ my $tag = create_tag($filetest,$tagoptions,$testoptions);
+ $c+=3;
+ die unless $tag;
+
+
+ read_tag($tag,$testoptions);
+ if ($testoptions->{picture_in}) {
+            ok($tag->picture_exists, 'Picture should exist');
+ }
+ else {
+ ok(! $tag->picture_exists, 'Picture should not exist');
+ }
+ $c++;
+
+ if ($testoptions->{skip_write_tests}) {
+ $tag->close();
+ $tag = undef;
+ }
+ else {
+ $c+= random_write($tag,$testoptions);
+ $c+= random_write_num($tag,$testoptions);
+ $c+= random_write_date($tag,$testoptions);
+ $c+= write_picture($tag,$testoptions);
+ ok($tag->set_tag, 'set_tag: ' . $filetest);
+ $c++;
+ $tag->close();
+ $tag = undef;
+ my $tag2 = create_tag($filetest,$tagoptions,$testoptions);
+ $c+=3;
+ $c+= random_read($tag2,$testoptions);
+ $c+= random_read_num($tag2,$testoptions);
+ $c+= random_read_date($tag2,$testoptions);
+ $c+= read_picture($tag2,$testoptions);
+ $tag2->close();
+ }
+ unlink($filetest);
+ return $c;
+ }
+}
+
+
+1;
+
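Taken together, filetest copies a fixture, writes random values through the plugin's set_tag, reopens the file, and checks that every value survived; it returns how many assertions actually ran. A hypothetical caller, mirroring t/1-ogg.t, with the count worked out from the harness code above (3 + 1 + 3 + 2 + 1 + 3 + 3 + 2 = 18 for this option set):

    use strict;
    use Test::More;
    use lib 't';
    use MusicTagTest;    # the harness added above
    use Music::Tag;

    filetest('t/elise.ogg', 't/elisetest.ogg', {}, {
        random_write     => [qw(title artist album)],  # illustrative fields
        random_write_num => [qw(track disc)],
        count            => 18,  # assertions to mark skipped if fixture absent
        plugin           => 'OGG',
    });
    done_testing();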
|
riemann42/Music-Tag-OGG
|
18780eba525690133837856781f73613b1524563
|
Spelling changes in POD
|
diff --git a/lib/Music/Tag/OGG.pm b/lib/Music/Tag/OGG.pm
index 03ea838..9bd8372 100644
--- a/lib/Music/Tag/OGG.pm
+++ b/lib/Music/Tag/OGG.pm
@@ -1,239 +1,243 @@
package Music::Tag::OGG;
+use strict;
+use warnings;
our $VERSION = 0.35;
# Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
#
# You may distribute under the terms of either the GNU General Public
# License or the Artistic License, as specified in the README file.
#
+use Ogg::Vorbis::Header::PurePerl;
+use base qw(Music::Tag::Generic);
+
our %tagmap = (
TITLE => 'title',
TRACKNUMBER => 'track',
TRACKTOTAL => 'totaltracks',
ARTIST => 'artist',
ALBUM => 'album',
COMMENT => 'comment',
DATE => 'releasedate',
GENRE => 'genre',
DISC => 'disc',
LABEL => 'label',
ASIN => 'asin',
MUSICBRAINZ_ARTISTID => 'mb_artistid',
MUSICBRAINZ_ALBUMID => 'mb_albumid',
MUSICBRAINZ_TRACKID => 'mb_trackid',
MUSICBRAINZ_SORTNAME => 'sortname',
RELEASECOUNTRY => 'countrycode',
MUSICIP_PUID => 'mip_puid',
MUSICBRAINZ_ALBUMARTIST => 'albumartist'
);
sub default_options {
{ vorbiscomment => "vorbiscomment" }
}
-use base qw(Music::Tag::Generic);
sub ogg {
my $self = shift;
unless ((exists $self->{_OGG}) && (ref $self->{_OGG})) {
if ($self->info->filename) {
$self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->filename);
#$self->{_OGG}->load();
}
else {
return undef;
}
}
return $self->{_OGG};
}
sub get_tag {
my $self = shift;
if ( $self->ogg ) {
foreach ($self->ogg->comment_tags) {
my $comment = uc($_);
if (exists $tagmap{$comment}) {
my $method = $tagmap{$comment};
$self->info->$method($self->ogg->comment($comment));
}
else {
$self->status("Unknown comment: $comment");
}
}
$self->info->secs( $self->ogg->info->{"length"});
$self->info->bitrate( $self->ogg->info->{"bitrate_nominal"});
$self->info->frequency( $self->ogg->info->{"rate"});
}
else {
print STDERR "No ogg object created\n";
}
return $self;
}
sub set_tag {
my $self = shift;
unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->filename . "\"")) {
$self->status("Failed to open ", $self->options->{vorbiscomment}, ". Not writing tag.\n");
return undef;
}
while (my ($t, $m) = each %tagmap) {
if (defined $self->info->$m) {
print COMMENT $t, "=", $self->info->$m, "\n";
}
}
close (COMMENT);
return $self;
}
sub close {
my $self = shift;
$self->{_OGG} = undef;
}
1;
__END__
=pod
=head1 NAME
Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
=head1 SYNOPSIS
use Music::Tag;
my $filename = "/var/lib/music/artist/album/track.ogg";
my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
$info->get_info();
print "Artist is ", $info->artist;
=head1 DESCRIPTION
Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
Music::Tag::OGG objects should be created by Music::Tag.
=head1 REQUIRED DATA VALUES
No values are required (except filename, which is usually provided on object creation).
=head1 SET DATA VALUES
=over 4
=item B<title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label>
Uses standard tags for these
=item B<asin>
Uses custom tag "ASIN" for this
=item B<mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist>
Uses MusicBrainz recommended tags for these.
=back
=head1 METHODS
=over 4
=item B<default_options>
Returns the default options for the plugin.
=item B<set_tag>
Save info from object back to ogg vorbis file using L<vorbiscomment>
=item B<get_tag>
Get info for object from ogg vorbis header using Ogg::Vorbis::Header::PurePerl
=item B<close>
Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
=item B<ogg>
Returns the Ogg::Vorbis::Header::PurePerl object.
=back
=head1 OPTIONS
=over 4
=item B<vorbiscomment>
The full path to the vorbiscomment program. Defaults to just "vorbiscomment", which assumes that vorbiscomment is in your path.
=back
=head1 BUGS
No known additional bugs provided by this Module
=head1 SEE ALSO
L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<http://www.xiph.org/>
=head1 SOURCE
Source is available at github: L<http://github.com/riemann42/Music-Tag-OGG|http://github.com/riemann42/Music-Tag-OGG>.
-=head1 BUGTRACKING
+=head1 BUG TRACKING
Please use github for bug tracking: L<http://github.com/riemann42/Music-Tag-OGG/issues|http://github.com/riemann42/Music-Tag-OGG/issues>.
=head1 AUTHOR
Edward Allen III <ealleniii _at_ cpan _dot_ org>
=head1 COPYRIGHT
Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
=head1 LICENSE
This program is free software; you can redistribute it and/or modify
it under the same terms as Perl itself, either:
a) the GNU General Public License as published by the Free
Software Foundation; either version 1, or (at your option) any
later version, or
b) the "Artistic License" which comes with Perl.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See either
the GNU General Public License or the Artistic License for more details.
You should have received a copy of the Artistic License with this
Kit, in the file named "Artistic". If not, I'll be glad to provide one.
You should also have received a copy of the GNU General Public License
along with this program in the file named "Copying". If not, write to the
Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
Boston, MA 02110-1301, USA or visit their web page on the Internet at
http://www.gnu.org/copyleft/gpl.html.
# vim: tabstop=4
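The METHODS section above covers the plugin half of the API; end to end, a read-modify-write pass goes through Music::Tag itself. A minimal sketch (assuming vorbiscomment is on the PATH, as OPTIONS notes, and that the file already carries tags):

    use strict;
    use warnings;
    use Music::Tag;

    my $info = Music::Tag->new("track.ogg", { quiet => 1 }, "OGG");
    $info->get_tag;                      # read the Vorbis comments
    printf "%s - %s\n", $info->artist, $info->title;

    $info->title("Elise (remaster)");    # change a value in memory
    $info->set_tag;                      # rewrite comments via vorbiscomment -w
    $info->close;                        # release the header object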
|
riemann42/Music-Tag-OGG
|
b82493cadd6c8dacca6c34a86f012950dc27a1b4
|
pod move to end of file
|
diff --git a/MANIFEST b/MANIFEST
index 1b899c6..480bc7f 100644
--- a/MANIFEST
+++ b/MANIFEST
@@ -1,11 +1,15 @@
Changes
+lib/Music/Tag/.OGG.pm.swp
lib/Music/Tag/OGG.pm
Makefile.PL
MANIFEST This list of files
+META.yml Module meta-data (added by MakeMaker)
README
-t/elise.ogg
t/1-ogg.t
-t/options.conf
t/2-pod.t
t/3-pod-coverage.t
-META.yml Module meta-data (added by MakeMaker)
+t/97-pod.t
+t/98-pod-coverage.t
+t/elise.ogg
+t/MusicTagTest.pm
+t/options.conf
diff --git a/lib/Music/Tag/OGG.pm b/lib/Music/Tag/OGG.pm
index 618a228..03ea838 100644
--- a/lib/Music/Tag/OGG.pm
+++ b/lib/Music/Tag/OGG.pm
@@ -1,239 +1,239 @@
package Music::Tag::OGG;
our $VERSION = 0.35;
# Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
#
# You may distribute under the terms of either the GNU General Public
# License or the Artistic License, as specified in the README file.
#
-=pod
-
-=head1 NAME
-
-Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
-
-=head1 SYNOPSIS
-
- use Music::Tag
-
- my $filename = "/var/lib/music/artist/album/track.ogg";
-
- my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
-
- $info->get_info();
-
- print "Artist is ", $info->artist;
-
-=head1 DESCRIPTION
-
-Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
-and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
-
-To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
-is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
-
-Music::Tag::Ogg objects must be created by Music::Tag.
-
-=head1 REQUIRED DATA VALUES
-
-No values are required (except filename, which is usually provided on object creation).
-
-=head1 SET DATA VALUES
-
-=over 4
-
-=item B<title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label>
-
-Uses standard tags for these
-
-=item B<asin>
-
-Uses custom tag "ASIN" for this
-
-=item B<mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist>
-
-Uses MusicBrainz recommended tags for these.
-
-
-=cut
-use strict;
-use warnings;
-use Ogg::Vorbis::Header::PurePerl;
-
our %tagmap = (
TITLE => 'title',
TRACKNUMBER => 'track',
TRACKTOTAL => 'totaltracks',
ARTIST => 'artist',
ALBUM => 'album',
COMMENT => 'comment',
DATE => 'releasedate',
GENRE => 'genre',
DISC => 'disc',
LABEL => 'label',
ASIN => 'asin',
MUSICBRAINZ_ARTISTID => 'mb_artistid',
MUSICBRAINZ_ALBUMID => 'mb_albumid',
MUSICBRAINZ_TRACKID => 'mb_trackid',
MUSICBRAINZ_SORTNAME => 'sortname',
RELEASECOUNTRY => 'countrycode',
MUSICIP_PUID => 'mip_puid',
MUSICBRAINZ_ALBUMARTIST => 'albumartist'
);
sub default_options {
{ vorbiscomment => "vorbiscomment" }
}
-our @ISA = qw(Music::Tag::Generic);
+use base qw(Music::Tag::Generic);
sub ogg {
my $self = shift;
unless ((exists $self->{_OGG}) && (ref $self->{_OGG})) {
if ($self->info->filename) {
$self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->filename);
#$self->{_OGG}->load();
-
}
else {
return undef;
}
}
return $self->{_OGG};
}
sub get_tag {
my $self = shift;
if ( $self->ogg ) {
foreach ($self->ogg->comment_tags) {
my $comment = uc($_);
if (exists $tagmap{$comment}) {
my $method = $tagmap{$comment};
$self->info->$method($self->ogg->comment($comment));
}
else {
$self->status("Unknown comment: $comment");
}
}
$self->info->secs( $self->ogg->info->{"length"});
$self->info->bitrate( $self->ogg->info->{"bitrate_nominal"});
$self->info->frequency( $self->ogg->info->{"rate"});
}
else {
print STDERR "No ogg object created\n";
}
return $self;
}
sub set_tag {
my $self = shift;
unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->filename . "\"")) {
$self->status("Failed to open ", $self->options->{vorbiscomment}, ". Not writing tag.\n");
return undef;
}
while (my ($t, $m) = each %tagmap) {
if (defined $self->info->$m) {
print COMMENT $t, "=", $self->info->$m, "\n";
}
}
close (COMMENT);
return $self;
}
sub close {
my $self = shift;
$self->{_OGG} = undef;
}
1;
+__END__
+=pod
+
+=head1 NAME
+
+Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
+
+=head1 SYNOPSIS
+
+ use Music::Tag;
+
+ my $filename = "/var/lib/music/artist/album/track.ogg";
+
+ my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
+
+ $info->get_info();
+
+ print "Artist is ", $info->artist;
+
+=head1 DESCRIPTION
+
+Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
+and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
+
+To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
+is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
+
+Music::Tag::OGG objects should be created by Music::Tag.
+
+=head1 REQUIRED DATA VALUES
+
+No values are required (except filename, which is usually provided on object creation).
+
+=head1 SET DATA VALUES
+
+=over 4
+
+=item B<title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label>
+
+Uses standard tags for these
+
+=item B<asin>
+
+Uses custom tag "ASIN" for this
+
+=item B<mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist>
+
+Uses MusicBrainz recommended tags for these.
+
+
=back
=head1 METHODS
=over 4
=item B<default_options>
Returns the default options for the plugin.
=item B<set_tag>
Save info from object back to ogg vorbis file using L<vorbiscomment>
=item B<get_tag>
Get info for object from ogg vorbis header using Ogg::Vorbis::Header::PurePerl
=item B<close>
Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
=item B<ogg>
Returns the Ogg::Vorbis::Header::PurePerl object.
=back
-
=head1 OPTIONS
=over 4
=item B<vorbiscomment>
The full path to the vorbiscomment program. Defaults to just "vorbiscomment", which assumes that vorbiscomment is in your path.
=back
=head1 BUGS
No known additional bugs provided by this Module
=head1 SEE ALSO
L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<http://www.xiph.org/>
+=head1 SOURCE
+
+Source is available at github: L<http://github.com/riemann42/Music-Tag-OGG|http://github.com/riemann42/Music-Tag-OGG>.
+
+=head1 BUGTRACKING
+
+Please use github for bug tracking: L<http://github.com/riemann42/Music-Tag-OGG/issues|http://github.com/riemann42/Music-Tag-OGG/issues>.
+
=head1 AUTHOR
Edward Allen III <ealleniii _at_ cpan _dot_ org>
=head1 COPYRIGHT
Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
-=cut
-
=head1 LICENSE
This program is free software; you can redistribute it and/or modify
it under the same terms as Perl itself, either:
a) the GNU General Public License as published by the Free
Software Foundation; either version 1, or (at your option) any
later version, or
b) the "Artistic License" which comes with Perl.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See either
the GNU General Public License or the Artistic License for more details.
You should have received a copy of the Artistic License with this
Kit, in the file named "Artistic". If not, I'll be glad to provide one.
You should also have received a copy of the GNU General Public License
along with this program in the file named "Copying". If not, write to the
Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
Boston, MA 02110-1301, USA or visit their web page on the Internet at
http://www.gnu.org/copyleft/gpl.html.
# vim: tabstop=4
diff --git a/t/1-ogg.t b/t/1-ogg.t
index 91ab9dd..e6bd9d4 100644
--- a/t/1-ogg.t
+++ b/t/1-ogg.t
@@ -1,30 +1,25 @@
#!/usr/bin/perl -w
use strict;
-use Test::More tests => 8;
+use Test::More tests => 9;
+use lib 't';
+use MusicTagTest;
use 5.006;
BEGIN { use_ok('Music::Tag') }
-our $options = {};
-
-sub filetest {
- my $file = shift;
- my $testoptions = shift;
- SKIP: {
- skip "File: $file does not exists", 7 unless ( -f $file );
- return unless ( -f $file );
- my $tag = Music::Tag->new( $file, $testoptions );
- ok( $tag, 'Object created: ' . $file );
- die unless $tag;
- ok( $tag->get_tag, 'get_tag called: ' . $file );
- ok( $tag->isa('Music::Tag'), 'Correct Class: ' . $file );
- is( $tag->artist, "Beethoven", 'Artist: ' . $file );
- is( $tag->album, "GPL", 'Album: ' . $file );
- is( $tag->title, "Elise", 'Title: ' . $file );
- }
-}
-
-ok( Music::Tag->LoadOptions("t/options.conf"), "Loading options file.\n" );
-filetest( "t/elise.ogg" );
-
+ok(Music::Tag->LoadOptions("t/options.conf"), "Loading options file.");
+my $c = filetest("t/elise.ogg", "t/elisetest.ogg", {},{
+ values_in => {
+    artist => "Beethoven",
+ album => "GPL",
+ title => "Elise",
+ },
+ skip_write_tests => 1,
+ random_write => [
+ qw(title artist album genre comment mb_trackid asin
+ mb_artistid mb_albumid albumartist ) ],
+ random_write_num => [ qw(track disc) ],
+ count => 7,
+ plugin => 'OGG'
+});
diff --git a/t/2-pod.t b/t/97-pod.t
similarity index 100%
rename from t/2-pod.t
rename to t/97-pod.t
diff --git a/t/3-pod-coverage.t b/t/98-pod-coverage.t
similarity index 100%
rename from t/3-pod-coverage.t
rename to t/98-pod-coverage.t
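The skip_write_tests flag above sidesteps the write path; when it is off, filetest() runs a write, flush, reopen, verify cycle. Schematically, that cycle reduces to something like the following (a simplified sketch of the harness logic, not a drop-in helper):

    use strict;
    use warnings;
    use Test::More;      # the calling test file declares the plan
    use Music::Tag;

    sub roundtrip_ok {
        my ($file, $field, $value) = @_;

        my $tag = Music::Tag->new($file, {}, 'OGG');
        $tag->get_tag;
        $tag->$field($value);                          # update in memory
        ok($tag->set_tag, "set_tag after changing $field");
        $tag->close;

        my $tag2 = Music::Tag->new($file, {}, 'OGG');  # reopen from disk
        $tag2->get_tag;
        is($tag2->$field, $value, "$field survived the round trip");
        $tag2->close;
    }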
|
riemann42/Music-Tag-OGG
|
66415652b2c51e3380bbd07cb571b8117db1943a
|
Version: 0.35
|
diff --git a/Changes b/Changes
index d41d238..f0e7d91 100644
--- a/Changes
+++ b/Changes
@@ -1,61 +1,65 @@
+Release Name: 0.35
+===========================
+* Updated to work with Ogg::Vorbis::Header::PurePerl to fix bug 43789
+
Release Name: 0.33
===========================
* Removed write from test
Release Name: 0.32
===========================
* Added Music::Tag prereq (was incorrect!)
Release Name: 0.31
===========================
* pod improvements
Release Name: 0.30
===========================
* Kwalitee and pod improvements
Release Name: 0.29
===========================
* Fixed typo in synopsis (OGG was ogg)
* Now requires Music::Tag .29
Release Name: 0.28
===========================
* Split off from Music::Tag distribution
Release Name: 0.27
============================
* More documentation and tested POD.
* datamethods method now can be used to add new datamethods
* Added test for MusicBrainz and Amazon plugins
* Revised releasedate and recorddate internal storage to store as releasetime
and recordtime -- with full timestamps.
* Added releasetime, recordtime, releaseepoch, and recordepoch datamethods.
* Support for TIME ID3v2 tag.
* After much thought, replaced Ogg::Vorbis::Header with
Ogg::Vorbis::Header::PurePerl and added vorbiscomment to write tags.
* Revised OGG and FLAC plugins to clean up code (much slicker now).
Release Name: 0.26
============================
* Removed several prerequisites that weren't used
* Fixed error in README about prerequisite
Release Name: 0.25
============================
* Support many more tags for flac, ogg, and m4a
* Removed autotag safetag quicktag musictag musicsort musicinfo scripts.
All is done by musictag now.
* Added tests for some plugins. More to do!
* Bug Fixes
* Documentation improvements
* Added preset option for musictag
Release Name: 0.24
============================
* Bug Fixes
* Revised MP3 Tags to read Picard tags
Release Name: 0.23
============================
* Initial Public Release
diff --git a/META.yml b/META.yml
index b5cbf31..a2c5076 100644
--- a/META.yml
+++ b/META.yml
@@ -1,15 +1,23 @@
--- #YAML:1.0
-name: Music-Tag-OGG
-version: 0.34
-abstract: Plugin module for Music::Tag to get information from ogg-vorbis headers.
-license: perl
-author:
+name: Music-Tag-OGG
+version: 0.35
+abstract: Plugin module for Music::Tag to get information from ogg-vorbis headers.
+author:
- Edward Allen (ealleniii _at_ cpan _dot_ org)
-generated_by: ExtUtils::MakeMaker version 6.42
-distribution_type: module
-requires:
- Music::Tag: 0.29
- Ogg::Vorbis::Header::PurePerl: 0.07
+license: perl
+distribution_type: module
+configure_requires:
+ ExtUtils::MakeMaker: 0
+build_requires:
+ ExtUtils::MakeMaker: 0
+requires:
+ Music::Tag: 0.29
+ Ogg::Vorbis::Header::PurePerl: 1
+no_index:
+ directory:
+ - t
+ - inc
+generated_by: ExtUtils::MakeMaker version 6.56
meta-spec:
- url: http://module-build.sourceforge.net/META-spec-v1.3.html
- version: 1.3
+ url: http://module-build.sourceforge.net/META-spec-v1.4.html
+ version: 1.4
diff --git a/Makefile.PL b/Makefile.PL
index 4131ef7..e6748f1 100644
--- a/Makefile.PL
+++ b/Makefile.PL
@@ -1,10 +1,10 @@
use ExtUtils::MakeMaker;
WriteMakefile( NAME => "Music::Tag::OGG",
VERSION_FROM => "lib/Music/Tag/OGG.pm",
ABSTRACT_FROM => "lib/Music/Tag/OGG.pm",
AUTHOR => 'Edward Allen (ealleniii _at_ cpan _dot_ org)',
LICENSE => 'perl',
PREREQ_PM => { 'Music::Tag' => 0.29,
- 'Ogg::Vorbis::Header::PurePerl' => 0.07,
+ 'Ogg::Vorbis::Header::PurePerl' => 1,
},
);
diff --git a/README b/README
index cfb8479..88b8a04 100644
--- a/README
+++ b/README
@@ -1,45 +1,102 @@
-Music::Tag::OGG
-===============
+NAME
+ Music::Tag::OGG - Plugin module for Music::Tag to get information from
+ ogg-vorbis headers.
-Music::Tag::OGG Gather info from OGG Header. Uses Ogg::Vorbis::Header::PurePerl
+SYNOPSIS
+ use Music::Tag;
-Note: As of version 0.28, Music-Tag is distributed as seperate packages.
+ my $filename = "/var/lib/music/artist/album/track.ogg";
-INSTALLATION
+ my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
-To install this module type the following:
+ $info->get_info();
+
+ print "Artist is ", $info->artist;
- perl Makefile.PL
- make
- make test
- make install
+DESCRIPTION
+ Music::Tag::OGG is used to read ogg-vorbis header information. It uses
+ Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using
+ this and Ogg::Vorbis::Header. Finally I have settled on
+ Ogg::Vorbis::Header::PurePerl, because the autoload for
+ Ogg::Vorbis::Header was a pain to work with.
-DEPENDENCIES
+ To write Ogg::Vorbis headers I use the program vorbiscomment. It looks
+ for this in the path, or in the option variable "vorbiscomment." This
+ tool is available from http://www.xiph.org/ as part of the vorbis-tools
+ distribution.
-This module requires these other modules and libraries:
+ Music::Tag::OGG objects must be created by Music::Tag.
- Music::Tag
- Ogg::Vorbis::Header::PurePerl
+REQUIRED DATA VALUES
+ No values are required (except filename, which is usually provided on
+ object creation).
-NOTE ON WRITE SUPPORT
+SET DATA VALUES
+ title, track, totaltracks, artist, album, comment, releasedate, genre,
+ disc, label
+ Uses standard tags for these
-I have had trouble with Ogg::Vorbis::Header. As such, I have stoped using it.
-I now use OGG::Vorbis::PurePerl. This module is also buggy, and doesn't have
-write support. To overcome this limitation, for now, I am using the
-vorbiscomment program that is part of the vorbis-tools package from xiph.org.
+ asin
+ Uses custom tag "ASIN" for this
-I am planning on adding write support to Ogg::Vorbis::PurePerl someday.
+ mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist
+ Uses MusicBrainz recommended tags for these.
-TEST FILES
+METHODS
+ default_options
+ Returns the default options for the plugin.
-Are based on the sample file for Audio::M4P. For testing only.
+ set_tag
+ Save info from object back to ogg vorbis file using vorbiscomment
-COPYRIGHT AND LICENCE
+ get_tag
+ Get info for object from ogg vorbis header using
+ Ogg::Vorbis::Header::PurePerl
-Copyright (C) 2007 Edward J. Allen III
-ealleniii _at_ cpan _dot_ org
+ close
+ Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
-This library is free software; you can redistribute it and/or modify
-it under the same terms as Perl itself, either Perl version 5.8.7 or,
-at your option, any later version of Perl 5 you may have available.
+ ogg Returns the Ogg::Vorbis::Header::PurePerl object.
+
+OPTIONS
+ vorbiscomment
+ The full path to the vorbiscomment program. Defaults to just
+ "vorbiscomment", which assumes that vorbiscomment is in your path.
+
+BUGS
+ No known additional bugs provided by this Module
+
+SEE ALSO
+ Ogg::Vorbis::Header::PurePerl, Music::Tag, http://www.xiph.org/
+
+AUTHOR
+ Edward Allen III <ealleniii _at_ cpan _dot_ org>
+
+COPYRIGHT
+ Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
+
+LICENSE
+ This program is free software; you can redistribute it and/or modify it
+ under the same terms as Perl itself, either:
+
+ a) the GNU General Public License as published by the Free Software
+ Foundation; either version 1, or (at your option) any later version, or
+
+ b) the "Artistic License" which comes with Perl.
+
+ This program is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See either the GNU
+ General Public License or the Artistic License for more details.
+
+ You should have received a copy of the Artistic License with this Kit,
+ in the file named "Artistic". If not, I'll be glad to provide one.
+
+ You should also have received a copy of the GNU General Public License
+ along with this program in the file named "Copying". If not, write to
+ the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
+ Boston, MA 02110-1301, USA or visit their web page on the Internet at
+ http://www.gnu.org/copyleft/gpl.html.
+
+ # vim: tabstop=4
diff --git a/lib/Music/Tag/OGG.pm b/lib/Music/Tag/OGG.pm
index 6a3c8e0..618a228 100644
--- a/lib/Music/Tag/OGG.pm
+++ b/lib/Music/Tag/OGG.pm
@@ -1,239 +1,239 @@
package Music::Tag::OGG;
-our $VERSION = 0.34;
+our $VERSION = 0.35;
# Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
#
# You may distribute under the terms of either the GNU General Public
# License or the Artistic License, as specified in the README file.
#
=pod
=head1 NAME
Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
=head1 SYNOPSIS
use Music::Tag;
my $filename = "/var/lib/music/artist/album/track.ogg";
my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
$info->get_info();
print "Artist is ", $info->artist;
=head1 DESCRIPTION
Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
Music::Tag::OGG objects must be created by Music::Tag.
=head1 REQUIRED DATA VALUES
No values are required (except filename, which is usually provided on object creation).
=head1 SET DATA VALUES
=over 4
=item B<title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label>
Uses standard tags for these
=item B<asin>
Uses custom tag "ASIN" for this
=item B<mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist>
Uses MusicBrainz recommended tags for these.
=cut
use strict;
use warnings;
use Ogg::Vorbis::Header::PurePerl;
our %tagmap = (
TITLE => 'title',
TRACKNUMBER => 'track',
TRACKTOTAL => 'totaltracks',
ARTIST => 'artist',
ALBUM => 'album',
COMMENT => 'comment',
DATE => 'releasedate',
GENRE => 'genre',
DISC => 'disc',
LABEL => 'label',
ASIN => 'asin',
MUSICBRAINZ_ARTISTID => 'mb_artistid',
MUSICBRAINZ_ALBUMID => 'mb_albumid',
MUSICBRAINZ_TRACKID => 'mb_trackid',
MUSICBRAINZ_SORTNAME => 'sortname',
RELEASECOUNTRY => 'countrycode',
MUSICIP_PUID => 'mip_puid',
MUSICBRAINZ_ALBUMARTIST => 'albumartist'
);
sub default_options {
{ vorbiscomment => "vorbiscomment" }
}
our @ISA = qw(Music::Tag::Generic);
sub ogg {
my $self = shift;
unless ((exists $self->{_OGG}) && (ref $self->{_OGG})) {
if ($self->info->filename) {
$self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->filename);
- $self->{_OGG}->load();
+ #$self->{_OGG}->load();
}
else {
return undef;
}
}
return $self->{_OGG};
}
sub get_tag {
my $self = shift;
if ( $self->ogg ) {
foreach ($self->ogg->comment_tags) {
my $comment = uc($_);
if (exists $tagmap{$comment}) {
my $method = $tagmap{$comment};
$self->info->$method($self->ogg->comment($comment));
}
else {
$self->status("Unknown comment: $comment");
}
}
$self->info->secs( $self->ogg->info->{"length"});
$self->info->bitrate( $self->ogg->info->{"bitrate_nominal"});
$self->info->frequency( $self->ogg->info->{"rate"});
}
else {
print STDERR "No ogg object created\n";
}
return $self;
}
sub set_tag {
my $self = shift;
unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->filename . "\"")) {
$self->status("Failed to open ", $self->options->{vorbiscomment}, ". Not writing tag.\n");
return undef;
}
while (my ($t, $m) = each %tagmap) {
if (defined $self->info->$m) {
print COMMENT $t, "=", $self->info->$m, "\n";
}
}
close (COMMENT);
return $self;
}
sub close {
my $self = shift;
$self->{_OGG} = undef;
}
1;
=back
=head1 METHODS
=over 4
=item B<default_options>
Returns the default options for the plugin.
=item B<set_tag>
Save info from object back to ogg vorbis file using L<vorbiscomment>
=item B<get_tag>
Get info for object from ogg vorbis header using Ogg::Vorbis::Header::PurePerl
=item B<close>
Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
=item B<ogg>
Returns the Ogg::Vorbis::Header::PurePerl object.
=back
=head1 OPTIONS
=over 4
=item B<vorbiscomment>
The full path to the vorbiscomment program. Defaults to just "vorbiscomment", which assumes that vorbiscomment is in your path.
=back
=head1 BUGS
No known additional bugs provided by this Module
=head1 SEE ALSO
L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<http://www.xiph.org/>
=head1 AUTHOR
Edward Allen III <ealleniii _at_ cpan _dot_ org>
=head1 COPYRIGHT
Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
=cut
=head1 LICENSE
This program is free software; you can redistribute it and/or modify
it under the same terms as Perl itself, either:
a) the GNU General Public License as published by the Free
Software Foundation; either version 1, or (at your option) any
later version, or
b) the "Artistic License" which comes with Perl.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See either
the GNU General Public License or the Artistic License for more details.
You should have received a copy of the Artistic License with this
Kit, in the file named "Artistic". If not, I'll be glad to provide one.
You should also have received a copy of the GNU General Public License
along with this program in the file named "Copying". If not, write to the
Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
Boston, MA 02110-1301, USA or visit their web page on the Internet at
http://www.gnu.org/copyleft/gpl.html.
# vim: tabstop=4
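A side note on set_tag above: it builds a shell command string and hand-quotes the filename, which breaks on names containing double quotes. A more defensive variant (a sketch, not what the module ships) uses the list form of open so the shell never parses the filename:

    use strict;
    use warnings;

    # $vc and $file stand in for $self->options->{vorbiscomment} and
    # $self->info->filename; %comments for the KEY=value pairs from %tagmap.
    sub write_comments {
        my ($vc, $file, %comments) = @_;
        open(my $fh, '|-', $vc, '-w', $file)
            or return;                    # mirror the module: give up quietly
        while (my ($key, $value) = each %comments) {
            print {$fh} "$key=$value\n" if defined $value;
        }
        return close($fh);                # false if vorbiscomment exited non-zero
    }

    # write_comments('vorbiscomment', 'my "track".ogg', TITLE => 'Elise');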
|
riemann42/Music-Tag-OGG
|
16e5415886bd0c8575b2f3d534cd034e822b8526
|
Version: 0.34
|
diff --git a/META.yml b/META.yml
index 090f134..b5cbf31 100644
--- a/META.yml
+++ b/META.yml
@@ -1,15 +1,15 @@
--- #YAML:1.0
name: Music-Tag-OGG
-version: 0.33
+version: 0.34
abstract: Plugin module for Music::Tag to get information from ogg-vorbis headers.
license: perl
author:
- Edward Allen (ealleniii _at_ cpan _dot_ org)
generated_by: ExtUtils::MakeMaker version 6.42
distribution_type: module
requires:
Music::Tag: 0.29
Ogg::Vorbis::Header::PurePerl: 0.07
meta-spec:
url: http://module-build.sourceforge.net/META-spec-v1.3.html
version: 1.3
diff --git a/lib/Music/Tag/OGG.pm b/lib/Music/Tag/OGG.pm
index 546de33..6a3c8e0 100644
--- a/lib/Music/Tag/OGG.pm
+++ b/lib/Music/Tag/OGG.pm
@@ -1,216 +1,239 @@
package Music::Tag::OGG;
-our $VERSION = 0.33;
+our $VERSION = 0.34;
# Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
+
#
-## This program is free software; you can redistribute it and/or
-## modify it under the terms of the Artistic License, distributed
-## with Perl.
+# You may distribute under the terms of either the GNU General Public
+# License or the Artistic License, as specified in the README file.
#
+
=pod
=head1 NAME
Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
=head1 SYNOPSIS
use Music::Tag
my $filename = "/var/lib/music/artist/album/track.ogg";
my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
$info->get_info();
print "Artist is ", $info->artist;
=head1 DESCRIPTION
-Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
+Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
-=head1 REQUIRED VALUES
+Music::Tag::OGG objects must be created by Music::Tag.
+
+=head1 REQUIRED DATA VALUES
No values are required (except filename, which is usually provided on object creation).
-=head1 SET VALUES
+=head1 SET DATA VALUES
=over 4
=item B<title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label>
Uses standard tags for these
=item B<asin>
Uses custom tag "ASIN" for this
=item B<mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist>
Uses MusicBrainz recommended tags for these.
=cut
use strict;
use warnings;
use Ogg::Vorbis::Header::PurePerl;
our %tagmap = (
TITLE => 'title',
TRACKNUMBER => 'track',
TRACKTOTAL => 'totaltracks',
ARTIST => 'artist',
ALBUM => 'album',
COMMENT => 'comment',
DATE => 'releasedate',
GENRE => 'genre',
DISC => 'disc',
LABEL => 'label',
ASIN => 'asin',
MUSICBRAINZ_ARTISTID => 'mb_artistid',
MUSICBRAINZ_ALBUMID => 'mb_albumid',
MUSICBRAINZ_TRACKID => 'mb_trackid',
MUSICBRAINZ_SORTNAME => 'sortname',
RELEASECOUNTRY => 'countrycode',
MUSICIP_PUID => 'mip_puid',
MUSICBRAINZ_ALBUMARTIST => 'albumartist'
);
sub default_options {
{ vorbiscomment => "vorbiscomment" }
}
our @ISA = qw(Music::Tag::Generic);
sub ogg {
my $self = shift;
unless ((exists $self->{_OGG}) && (ref $self->{_OGG})) {
if ($self->info->filename) {
$self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->filename);
$self->{_OGG}->load();
}
else {
return undef;
}
}
return $self->{_OGG};
}
sub get_tag {
my $self = shift;
if ( $self->ogg ) {
foreach ($self->ogg->comment_tags) {
my $comment = uc($_);
if (exists $tagmap{$comment}) {
my $method = $tagmap{$comment};
$self->info->$method($self->ogg->comment($comment));
}
else {
$self->status("Unknown comment: $comment");
}
}
$self->info->secs( $self->ogg->info->{"length"});
$self->info->bitrate( $self->ogg->info->{"bitrate_nominal"});
$self->info->frequency( $self->ogg->info->{"rate"});
}
else {
print STDERR "No ogg object created\n";
}
return $self;
}
sub set_tag {
my $self = shift;
unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->filename . "\"")) {
$self->status("Failed to open ", $self->options->{vorbiscomment}, ". Not writing tag.\n");
return undef;
}
while (my ($t, $m) = each %tagmap) {
if (defined $self->info->$m) {
print COMMENT $t, "=", $self->info->$m, "\n";
}
}
close (COMMENT);
return $self;
}
sub close {
my $self = shift;
$self->{_OGG} = undef;
}
1;
=back
=head1 METHODS
=over 4
=item B<default_options>
Returns the default options for the plugin.
=item B<set_tag>
Save info from object back to ogg vorbis file using L<vorbiscomment>
=item B<get_tag>
Get info for object from ogg vorbis header using Ogg::Vorbis::Header::PurePerl
=item B<close>
Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
=item B<ogg>
Returns the Ogg::Vorbis::Header::PurePerl object.
=back
=head1 OPTIONS
=over 4
=item B<vorbiscomment>
The full path to the vorbiscomment program. Defaults to just "vorbiscomment", which assumes that vorbiscomment is in your path.
=back
=head1 BUGS
No known additional bugs provided by this Module
=head1 SEE ALSO
-L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<Music::Tag::Amazon>, L<Music::Tag::File>, L<Music::Tag::FLAC>, L<Music::Tag::Lyrics>,
-L<Music::Tag::M4A>, L<Music::Tag::MP3>, L<Music::Tag::MusicBrainz>, L<Music::Tag::Option>
+L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<http://www.xiph.org/>
=head1 AUTHOR
Edward Allen III <ealleniii _at_ cpan _dot_ org>
-=head1 LICENSE
-
-This program is free software; you can redistribute it and/or
-modify it under the terms of the Artistic License, distributed
-with Perl.
-
=head1 COPYRIGHT
Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
=cut
+=head1 LICENSE
+
+This program is free software; you can redistribute it and/or modify
+it under the same terms as Perl itself, either:
+
+a) the GNU General Public License as published by the Free
+Software Foundation; either version 1, or (at your option) any
+later version, or
+
+b) the "Artistic License" which comes with Perl.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See either
+the GNU General Public License or the Artistic License for more details.
+
+You should have received a copy of the Artistic License with this
+Kit, in the file named "Artistic". If not, I'll be glad to provide one.
+
+You should also have received a copy of the GNU General Public License
+along with this program in the file named "Copying". If not, write to the
+Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
+Boston, MA 02110-1301, USA or visit their web page on the Internet at
+http://www.gnu.org/copyleft/gpl.html.
+
+
+
# vim: tabstop=4
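Because %tagmap is declared with our, a caller can map an extra Vorbis comment before reading. A sketch only: COMPOSER/composer is a hypothetical pair, and the composer data method must already exist on Music::Tag (the 0.27 changelog notes that datamethods() can register new ones; its calling convention is not shown here):

    use Music::Tag;
    use Music::Tag::OGG;

    $Music::Tag::OGG::tagmap{COMPOSER} = 'composer';   # hypothetical mapping

    my $info = Music::Tag->new('track.ogg', { quiet => 1 }, 'OGG');
    $info->get_tag;
    print $info->composer, "\n" if defined $info->composer;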
|
riemann42/Music-Tag-OGG
|
f645a1763f8935072c165247cfc5b740f22d7ca8
|
Version: 0.33
|
diff --git a/Changes b/Changes
index 69cc005..d41d238 100644
--- a/Changes
+++ b/Changes
@@ -1,57 +1,61 @@
+Release Name: 0.33
+===========================
+* Removed write from test
+
Release Name: 0.32
===========================
* Added Music::Tag prereq (was incorrect!)
Release Name: 0.31
===========================
* pod improvements
Release Name: 0.30
===========================
* Kwalitee and pod improvements
Release Name: 0.29
===========================
* Fixed typo in synopsis (OGG was ogg)
* Now requires Music::Tag .29
Release Name: 0.28
===========================
* Split off from Music::Tag distribution
Release Name: 0.27
============================
* More documentation and tested POD.
* datamethods method now can be used to add new datamethods
* Added test for MusicBrainz and Amazon plugins
* Revised releasedate and recorddate internal storage to store as releasetime
and recordtime -- with full timestamps.
* Added releasetime, recordtime, releaseepoch, and recordepoch datamethods.
* Support for TIME ID3v2 tag.
* After much thought, replaced Ogg::Vorbis::Header with
Ogg::Vorbis::Header::PurePerl and added vorbiscomment to write tags.
* Revised OGG and FLAC plugins to clean up code (much slicker now).
Release Name: 0.26
============================
* Removed several prerequisites that weren't used
* Fixed error in README about prerequisite
Release Name: 0.25
============================
* Support many more tags for flac, ogg, and m4a
* Removed autotag safetag quicktag musictag musicsort musicinfo scripts.
All is done by musictag now.
* Added tests for some plugins. More to do!
* Bug Fixes
* Documentation improvements
* Added preset option for musictag
Release Name: 0.24
============================
* Bug Fixes
* Revised MP3 Tags to read Picard tags
Release Name: 0.23
============================
* Initial Public Release
diff --git a/META.yml b/META.yml
index 06adbaa..090f134 100644
--- a/META.yml
+++ b/META.yml
@@ -1,15 +1,15 @@
--- #YAML:1.0
name: Music-Tag-OGG
-version: 0.32
+version: 0.33
abstract: Plugin module for Music::Tag to get information from ogg-vorbis headers.
license: perl
author:
- Edward Allen (ealleniii _at_ cpan _dot_ org)
generated_by: ExtUtils::MakeMaker version 6.42
distribution_type: module
requires:
Music::Tag: 0.29
Ogg::Vorbis::Header::PurePerl: 0.07
meta-spec:
url: http://module-build.sourceforge.net/META-spec-v1.3.html
version: 1.3
diff --git a/lib/Music/Tag/OGG.pm b/lib/Music/Tag/OGG.pm
index cf9ac18..546de33 100644
--- a/lib/Music/Tag/OGG.pm
+++ b/lib/Music/Tag/OGG.pm
@@ -1,215 +1,216 @@
package Music::Tag::OGG;
-our $VERSION = 0.32;
+our $VERSION = 0.33;
# Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
#
## This program is free software; you can redistribute it and/or
## modify it under the terms of the Artistic License, distributed
## with Perl.
#
=pod
=head1 NAME
Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
=head1 SYNOPSIS
use Music::Tag;
my $filename = "/var/lib/music/artist/album/track.ogg";
my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
$info->get_info();
print "Artist is ", $info->artist;
=head1 DESCRIPTION
Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
=head1 REQUIRED VALUES
No values are required (except filename, which is usually provided on object creation).
=head1 SET VALUES
=over 4
=item B<title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label>
Uses standard tags for these
=item B<asin>
Uses custom tag "ASIN" for this
=item B<mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist>
Uses MusicBrainz recommended tags for these.
=cut
use strict;
+use warnings;
use Ogg::Vorbis::Header::PurePerl;
our %tagmap = (
TITLE => 'title',
TRACKNUMBER => 'track',
TRACKTOTAL => 'totaltracks',
ARTIST => 'artist',
ALBUM => 'album',
COMMENT => 'comment',
DATE => 'releasedate',
GENRE => 'genre',
DISC => 'disc',
LABEL => 'label',
ASIN => 'asin',
MUSICBRAINZ_ARTISTID => 'mb_artistid',
MUSICBRAINZ_ALBUMID => 'mb_albumid',
MUSICBRAINZ_TRACKID => 'mb_trackid',
MUSICBRAINZ_SORTNAME => 'sortname',
RELEASECOUNTRY => 'countrycode',
MUSICIP_PUID => 'mip_puid',
MUSICBRAINZ_ALBUMARTIST => 'albumartist'
);
sub default_options {
{ vorbiscomment => "vorbiscomment" }
}
our @ISA = qw(Music::Tag::Generic);
sub ogg {
my $self = shift;
unless ((exists $self->{_OGG}) && (ref $self->{_OGG})) {
if ($self->info->filename) {
$self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->filename);
$self->{_OGG}->load();
}
else {
return undef;
}
}
return $self->{_OGG};
}
sub get_tag {
my $self = shift;
if ( $self->ogg ) {
foreach ($self->ogg->comment_tags) {
my $comment = uc($_);
if (exists $tagmap{$comment}) {
my $method = $tagmap{$comment};
$self->info->$method($self->ogg->comment($comment));
}
else {
$self->status("Unknown comment: $comment");
}
}
$self->info->secs( $self->ogg->info->{"length"});
$self->info->bitrate( $self->ogg->info->{"bitrate_nominal"});
$self->info->frequency( $self->ogg->info->{"rate"});
}
else {
print STDERR "No ogg object created\n";
}
return $self;
}
sub set_tag {
my $self = shift;
unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->filename . "\"")) {
$self->status("Failed to open ", $self->options->{vorbiscomment}, ". Not writing tag.\n");
return undef;
}
while (my ($t, $m) = each %tagmap) {
if (defined $self->info->$m) {
print COMMENT $t, "=", $self->info->$m, "\n";
}
}
close (COMMENT);
return $self;
}
sub close {
my $self = shift;
$self->{_OGG} = undef;
}
1;
=back
=head1 METHODS
=over 4
=item B<default_options>
Returns the default options for the plugin.
=item B<set_tag>
Save info from object back to ogg vorbis file using L<vorbiscomment>
=item B<get_tag>
Get info for object from ogg vorbis header using Ogg::Vorbis::Header::PurePerl
=item B<close>
Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
=item B<ogg>
Returns the Ogg::Vorbis::Header::PurePerl object.
=back
=head1 OPTIONS
=over 4
=item B<vorbiscomment>
The full path to the vorbiscomment program. Defaults to just "vorbiscomment", which assumes that vorbiscomment is in your path.
=back
=head1 BUGS
No known additional bugs provided by this Module
=head1 SEE ALSO
L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<Music::Tag::Amazon>, L<Music::Tag::File>, L<Music::Tag::FLAC>, L<Music::Tag::Lyrics>,
L<Music::Tag::M4A>, L<Music::Tag::MP3>, L<Music::Tag::MusicBrainz>, L<Music::Tag::Option>
=head1 AUTHOR
Edward Allen III <ealleniii _at_ cpan _dot_ org>
=head1 LICENSE
This program is free software; you can redistribute it and/or
modify it under the terms of the Artistic License, distributed
with Perl.
=head1 COPYRIGHT
Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
=cut
# vim: tabstop=4
diff --git a/t/1-ogg.t b/t/1-ogg.t
index 38b8805..91ab9dd 100644
--- a/t/1-ogg.t
+++ b/t/1-ogg.t
@@ -1,50 +1,30 @@
#!/usr/bin/perl -w
use strict;
-use Test::More tests => 15;
-use File::Copy;
+use Test::More tests => 8;
use 5.006;
BEGIN { use_ok('Music::Tag') }
our $options = {};
-# Add 13 test for each run of this
sub filetest {
my $file = shift;
- my $filetest = shift;
my $testoptions = shift;
SKIP: {
skip "File: $file does not exists", 7 unless ( -f $file );
return unless ( -f $file );
- copy( $file, $filetest );
- my $tag = Music::Tag->new( $filetest, $testoptions );
- ok( $tag, 'Object created: ' . $filetest );
+ my $tag = Music::Tag->new( $file, $testoptions );
+ ok( $tag, 'Object created: ' . $file );
die unless $tag;
- ok( $tag->get_tag, 'get_tag called: ' . $filetest );
- ok( $tag->isa('Music::Tag'), 'Correct Class: ' . $filetest );
- is( $tag->artist, "Beethoven", 'Artist: ' . $filetest );
- is( $tag->album, "GPL", 'Album: ' . $filetest );
- is( $tag->title, "Elise", 'Title: ' . $filetest );
- ok( $tag->title("Elise Test"), 'Set new title: ' . $filetest );
- ok( $tag->set_tag, 'set_tag: ' . $filetest );
- $tag->close();
- $tag = undef;
- my $tag2 = Music::Tag->new( $filetest, $testoptions);
- ok( $tag2, 'Object created again: ' . $filetest );
- die unless $tag2;
- ok( $tag2->get_tag, 'get_tag called: ' . $filetest );
- #TODO: {
- # local $TODO = "Write support is buggy for ogg";
- is( $tag2->title, "Elise Test", 'New Title: ' . $filetest );
- #}
- ok( $tag2->title("Elise"), 'Reset title: ' . $filetest );
- ok( $tag2->set_tag, 'set_tag again: ' . $filetest );
- $tag2->close();
- unlink($filetest);
+ ok( $tag->get_tag, 'get_tag called: ' . $file );
+ ok( $tag->isa('Music::Tag'), 'Correct Class: ' . $file );
+ is( $tag->artist, "Beethoven", 'Artist: ' . $file );
+ is( $tag->album, "GPL", 'Album: ' . $file );
+ is( $tag->title, "Elise", 'Title: ' . $file );
}
}
ok( Music::Tag->LoadOptions("t/options.conf"), "Loading options file.\n" );
-filetest( "t/elise.ogg", "t/elisetest.ogg" );
+filetest( "t/elise.ogg" );
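The deleted block above includes a commented-out TODO guard ("Write support is buggy for ogg"). Rather than deleting a flaky assertion outright, Test::More can mark it expected-to-fail; a sketch of what that would have looked like, reusing the old test's file and message:

    use strict;
    use warnings;
    use Test::More;
    use Music::Tag;

    my $tag2 = Music::Tag->new("t/elisetest.ogg", {}, 'OGG');
    $tag2->get_tag;
    TODO: {
        local $TODO = "Write support is buggy for ogg";  # message from the old test
        is( $tag2->title, "Elise Test", 'New Title' );   # reported, but does not fail the suite
    }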
|
riemann42/Music-Tag-OGG
|
f3de6329e89ffab53beb040b546aea8cb3079280
|
Version: 0.32
|
diff --git a/Changes b/Changes
index e32a94b..69cc005 100644
--- a/Changes
+++ b/Changes
@@ -1,53 +1,57 @@
+Release Name: 0.32
+===========================
+* Added Music::Tag prereq (was incorrect!)
+
Release Name: 0.31
===========================
* pod improvements
Release Name: 0.30
===========================
* Kwalitee and pod improvements
Release Name: 0.29
===========================
* Fixed typo in synopsis (OGG was ogg)
* Now requires Music::Tag .29
Release Name: 0.28
===========================
* Split off from Music::Tag distribution
Release Name: 0.27
============================
* More documentation and tested POD.
* datamethods method now can be used to add new datamethods
* Added test for MusicBrainz and Amazon plugins
* Revised releasedate and recorddate internal storage to store as releasetime
and recordtime -- with full timestamps.
* Added releasetime, recordtime, releaseepoch, and recordepoch datamethods.
* Support for TIME ID3v2 tag.
* After much thought, replaced Ogg::Vorbis::Header with
Ogg::Vorbis::Header::PurePerl and added vorbiscomment to write tags.
* Revised OGG and FLAC plugins to clean up code (much slicker now).
Release Name: 0.26
============================
* Removed several prerequisites that weren't used
* Fixed error in README about prerequisite
Release Name: 0.25
============================
* Support many more tags for flac, ogg, and m4a
* Removed autotag safetag quicktag musictag musicsort musicinfo scripts.
All is done by musictag now.
* Added tests for some plugins. More to do!
* Bug Fixes
* Documentation improvements
* Added preset option for musictag
Release Name: 0.24
============================
* Bug Fixes
* Revised MP3 Tags to read Picard tags
Release Name: 0.23
============================
* Initial Public Release
diff --git a/META.yml b/META.yml
index 107bc93..06adbaa 100644
--- a/META.yml
+++ b/META.yml
@@ -1,15 +1,15 @@
--- #YAML:1.0
name: Music-Tag-OGG
-version: 0.31
+version: 0.32
abstract: Plugin module for Music::Tag to get information from ogg-vorbis headers.
license: perl
author:
- Edward Allen (ealleniii _at_ cpan _dot_ org)
generated_by: ExtUtils::MakeMaker version 6.42
distribution_type: module
requires:
- MP3::Tag: 0.29
+ Music::Tag: 0.29
Ogg::Vorbis::Header::PurePerl: 0.07
meta-spec:
url: http://module-build.sourceforge.net/META-spec-v1.3.html
version: 1.3
diff --git a/Makefile.PL b/Makefile.PL
index 4ebe48f..4131ef7 100644
--- a/Makefile.PL
+++ b/Makefile.PL
@@ -1,10 +1,10 @@
use ExtUtils::MakeMaker;
WriteMakefile( NAME => "Music::Tag::OGG",
VERSION_FROM => "lib/Music/Tag/OGG.pm",
ABSTRACT_FROM => "lib/Music/Tag/OGG.pm",
AUTHOR => 'Edward Allen (ealleniii _at_ cpan _dot_ org)',
LICENSE => 'perl',
- PREREQ_PM => { 'MP3::Tag' => 0.29,
+ PREREQ_PM => { 'Music::Tag' => 0.29,
'Ogg::Vorbis::Header::PurePerl' => 0.07,
},
);
diff --git a/README b/README
index 9578dfa..cfb8479 100644
--- a/README
+++ b/README
@@ -1,45 +1,45 @@
-Music::Tag::OGG version 0.28
-==============================
+Music::Tag::OGG
+===============
Music::Tag::OGG Gather info from OGG Header. Uses Ogg::Vorbis::Header::PurePerl
Note: As of version 0.28, Music-Tag is distributed as separate packages.
INSTALLATION
To install this module type the following:
perl Makefile.PL
make
make test
make install
DEPENDENCIES
This module requires these other modules and libraries:
Music::Tag
Ogg::Vorbis::Header::PurePerl
NOTE ON WRITE SUPPORT
I have had trouble with Ogg::Vorbis::Header. As such, I have stopped using it.
I now use Ogg::Vorbis::Header::PurePerl. This module is also buggy, and doesn't have
write support. To overcome this limitation, for now, I am using the
vorbiscomment program that is part of the vorbis-tools package from xiph.org.
I am planning on adding write support to Ogg::Vorbis::Header::PurePerl someday.
TEST FILES
Are based on the sample file for Audio::M4P. For testing only.
COPYRIGHT AND LICENCE
Copyright (C) 2007 Edward J. Allen III
ealleniii _at_ cpan _dot_ org
This library is free software; you can redistribute it and/or modify
it under the same terms as Perl itself, either Perl version 5.8.7 or,
at your option, any later version of Perl 5 you may have available.
diff --git a/lib/Music/Tag/OGG.pm b/lib/Music/Tag/OGG.pm
index 1207c11..cf9ac18 100644
--- a/lib/Music/Tag/OGG.pm
+++ b/lib/Music/Tag/OGG.pm
@@ -1,215 +1,215 @@
package Music::Tag::OGG;
-our $VERSION = 0.31;
+our $VERSION = 0.32;
# Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
#
## This program is free software; you can redistribute it and/or
## modify it under the terms of the Artistic License, distributed
## with Perl.
#
=pod
=head1 NAME
Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
=head1 SYNOPSIS
use Music::Tag;
my $filename = "/var/lib/music/artist/album/track.ogg";
my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
$info->get_info();
print "Artist is ", $info->artist;
=head1 DESCRIPTION
Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
=head1 REQUIRED VALUES
No values are required (except filename, which is usually provided on object creation).
=head1 SET VALUES
=over 4
=item B<title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label>
Uses standard tags for these
=item B<asin>
Uses custom tag "ASIN" for this
=item B<mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist>
Uses MusicBrainz recommended tags for these.
=cut
use strict;
use Ogg::Vorbis::Header::PurePerl;
our %tagmap = (
TITLE => 'title',
TRACKNUMBER => 'track',
TRACKTOTAL => 'totaltracks',
ARTIST => 'artist',
ALBUM => 'album',
COMMENT => 'comment',
DATE => 'releasedate',
GENRE => 'genre',
DISC => 'disc',
LABEL => 'label',
ASIN => 'asin',
MUSICBRAINZ_ARTISTID => 'mb_artistid',
MUSICBRAINZ_ALBUMID => 'mb_albumid',
MUSICBRAINZ_TRACKID => 'mb_trackid',
MUSICBRAINZ_SORTNAME => 'sortname',
RELEASECOUNTRY => 'countrycode',
MUSICIP_PUID => 'mip_puid',
MUSICBRAINZ_ALBUMARTIST => 'albumartist'
);
sub default_options {
{ vorbiscomment => "vorbiscomment" }
}
our @ISA = qw(Music::Tag::Generic);
sub ogg {
my $self = shift;
unless ((exists $self->{_OGG}) && (ref $self->{_OGG})) {
if ($self->info->filename) {
$self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->filename);
$self->{_OGG}->load();
}
else {
return undef;
}
}
return $self->{_OGG};
}
sub get_tag {
my $self = shift;
if ( $self->ogg ) {
foreach ($self->ogg->comment_tags) {
my $comment = uc($_);
if (exists $tagmap{$comment}) {
my $method = $tagmap{$comment};
$self->info->$method($self->ogg->comment($comment));
}
else {
$self->status("Unknown comment: $comment");
}
}
$self->info->secs( $self->ogg->info->{"length"});
$self->info->bitrate( $self->ogg->info->{"bitrate_nominal"});
$self->info->frequency( $self->ogg->info->{"rate"});
}
else {
print STDERR "No ogg object created\n";
}
return $self;
}
sub set_tag {
my $self = shift;
unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->filename . "\"")) {
$self->status("Failed to open ", $self->options->{vorbiscomment}, ". Not writing tag.\n");
return undef;
}
while (my ($t, $m) = each %tagmap) {
if (defined $self->info->$m) {
print COMMENT $t, "=", $self->info->$m, "\n";
}
}
close (COMMENT);
return $self;
}
sub close {
my $self = shift;
$self->{_OGG} = undef;
}
1;
=back
=head1 METHODS
=over 4
=item B<default_options>
Returns the default options for the plugin.
=item B<set_tag>
Save info from object back to ogg vorbis file using L<vorbiscomment>
=item B<get_tag>
Get info for object from ogg vorbis header using Ogg::Vorbis::Header::PurePerl
=item B<close>
Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
=item B<ogg>
Returns the Ogg::Vorbis::Header::PurePerl object.
=back
=head1 OPTIONS
=over 4
=item B<vorbiscomment>
The full path to the vorbiscomment program. Defaults to just "vorbiscomment", which assumes that vorbiscomment is in your path.
=back
=head1 BUGS
No known additional bugs provided by this Module
=head1 SEE ALSO
L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<Music::Tag::Amazon>, L<Music::Tag::File>, L<Music::Tag::FLAC>, L<Music::Tag::Lyrics>,
L<Music::Tag::M4A>, L<Music::Tag::MP3>, L<Music::Tag::MusicBrainz>, L<Music::Tag::Option>
=head1 AUTHOR
Edward Allen III <ealleniii _at_ cpan _dot_ org>
=head1 LICENSE
This program is free software; you can redistribute it and/or
modify it under the terms of the Artistic License, distributed
with Perl.
=head1 COPYRIGHT
Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
=cut
# vim: tabstop=4
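This commit's point is the PREREQ_PM fix (MP3::Tag was listed where Music::Tag belonged). A quick standalone check that the corrected prerequisites are installed at the required versions (a sketch; the version numbers are the ones in Makefile.PL above):

    use strict;
    use warnings;

    for my $dep (['Music::Tag', 0.29], ['Ogg::Vorbis::Header::PurePerl', 0.07]) {
        my ($name, $want) = @$dep;
        (my $file = "$name.pm") =~ s{::}{/}g;            # Music::Tag -> Music/Tag.pm
        if (eval { require $file; $name->VERSION($want); 1 }) {
            print "ok     $name >= $want\n";
        } else {
            print "NOT ok $name >= $want: $@";
        }
    }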
|
riemann42/Music-Tag-OGG
|
f4e11b6dd229af8bb352e71ac8cd8b2cb6f5e615
|
Version: 0.31
|
diff --git a/Changes b/Changes
index 9d5756b..e32a94b 100644
--- a/Changes
+++ b/Changes
@@ -1,49 +1,53 @@
+Release Name: 0.31
+===========================
+* pod improvements
+
Release Name: 0.30
===========================
-* Kwalitee and pod improments
+* Kwalitee and pod improvements
Release Name: 0.29
===========================
* Fixed typo in synopsis (OGG was ogg)
* Now requires Music::Tag .29
Release Name: 0.28
===========================
* Split off from Music::Tag distribution
Release Name: 0.27
============================
* More documentation and tested POD.
* datamethods method now can be used to add new datamethods
* Added test for MusicBrainz and Amazon plugins
* Revised releasedate and recorddate internal storage to store as releasetime
and recordtime -- with full timestamps.
* Added releasetime, recordtime, releaseepoch, and recordepoch datamethods.
* Support for TIME ID3v2 tag.
* After much thought, replaced Ogg::Vorbis::Header with
Ogg::Vorbis::Header::PurePerl and added vorbiscomment to write tags.
* Revised OGG and FLAC plugins to clean up code (much slicker now).
Release Name: 0.26
============================
* Removed several prerequisites that weren't used
* Fixed error in README about prerequisite
Release Name: 0.25
============================
* Support many more tags for flac, ogg, and m4a
* Removed autotag safetag quicktag musictag musicsort musicinfo scripts.
All is done by musictag now.
* Added tests for some plugins. More to do!
* Bug Fixes
* Documentation improvements
* Added preset option for musictag
Release Name: 0.24
============================
* Bug Fixes
* Revised MP3 Tags to read Picard tags
Release Name: 0.23
============================
* Initial Public Release
diff --git a/META.yml b/META.yml
index 6c3b17f..107bc93 100644
--- a/META.yml
+++ b/META.yml
@@ -1,15 +1,15 @@
--- #YAML:1.0
name: Music-Tag-OGG
-version: 0.3
+version: 0.31
abstract: Plugin module for Music::Tag to get information from ogg-vorbis headers.
license: perl
author:
- Edward Allen (ealleniii _at_ cpan _dot_ org)
generated_by: ExtUtils::MakeMaker version 6.42
distribution_type: module
requires:
MP3::Tag: 0.29
Ogg::Vorbis::Header::PurePerl: 0.07
meta-spec:
url: http://module-build.sourceforge.net/META-spec-v1.3.html
version: 1.3
diff --git a/lib/Music/Tag/OGG.pm b/lib/Music/Tag/OGG.pm
index 149cd45..1207c11 100644
--- a/lib/Music/Tag/OGG.pm
+++ b/lib/Music/Tag/OGG.pm
@@ -1,215 +1,215 @@
package Music::Tag::OGG;
-our $VERSION = 0.30;
+our $VERSION = 0.31;
# Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
#
## This program is free software; you can redistribute it and/or
## modify it under the terms of the Artistic License, distributed
## with Perl.
#
=pod
=head1 NAME
Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
=head1 SYNOPSIS
use Music::Tag
my $filename = "/var/lib/music/artist/album/track.ogg";
my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
$info->get_info();
print "Artist is ", $info->artist;
=head1 DESCRIPTION
Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
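For example, a minimal round trip might look like this (a sketch, assuming vorbiscomment is installed and track.ogg is writable):
    use Music::Tag;
    my $info = Music::Tag->new("track.ogg", { quiet => 1 }, "OGG");
    $info->get_info();
    $info->title("New Title");    # change a value in memory
    $info->set_tag();             # write it back out via vorbiscomment
    $info->close();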
=head1 REQUIRED VALUES
No values are required (except filename, which is usually provided on object creation).
=head1 SET VALUES
=over 4
-=item title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label
+=item B<title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label>
Uses standard tags for these
-=item asin
+=item B<asin>
Uses custom tag "ASIN" for this
-=item mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist
+=item B<mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist>
Uses MusicBrainz recommended tags for these.
=cut
use strict;
use Ogg::Vorbis::Header::PurePerl;
our %tagmap = (
TITLE => 'title',
TRACKNUMBER => 'track',
TRACKTOTAL => 'totaltracks',
ARTIST => 'artist',
ALBUM => 'album',
COMMENT => 'comment',
DATE => 'releasedate',
GENRE => 'genre',
DISC => 'disc',
LABEL => 'label',
ASIN => 'asin',
MUSICBRAINZ_ARTISTID => 'mb_artistid',
MUSICBRAINZ_ALBUMID => 'mb_albumid',
MUSICBRAINZ_TRACKID => 'mb_trackid',
MUSICBRAINZ_SORTNAME => 'sortname',
RELEASECOUNTRY => 'countrycode',
MUSICIP_PUID => 'mip_puid',
MUSICBRAINZ_ALBUMARTIST => 'albumartist'
);
sub default_options {
{ vorbiscomment => "vorbiscomment" }
}
our @ISA = qw(Music::Tag::Generic);
sub ogg {
my $self = shift;
unless ((exists $self->{_OGG}) && (ref $self->{_OGG})) {
if ($self->info->filename) {
$self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->filename);
$self->{_OGG}->load();
}
else {
return undef;
}
}
return $self->{_OGG};
}
sub get_tag {
my $self = shift;
if ( $self->ogg ) {
foreach ($self->ogg->comment_tags) {
my $comment = uc($_);
if (exists $tagmap{$comment}) {
my $method = $tagmap{$comment};
$self->info->$method($self->ogg->comment($comment));
}
else {
$self->status("Unknown comment: $comment");
}
}
$self->info->secs( $self->ogg->info->{"length"});
$self->info->bitrate( $self->ogg->info->{"bitrate_nominal"});
$self->info->frequency( $self->ogg->info->{"rate"});
}
else {
print STDERR "No ogg object created\n";
}
return $self;
}
sub set_tag {
my $self = shift;
unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->filename . "\"")) {
$self->status("Failed to open ", $self->options->{vorbiscomment}, ". Not writing tag.\n");
return undef;
}
while (my ($t, $m) = each %tagmap) {
if (defined $self->info->$m) {
print COMMENT $t, "=", $self->info->$m, "\n";
}
}
close (COMMENT);
return $self;
}
sub close {
my $self = shift;
$self->{_OGG} = undef;
}
1;
=back
=head1 METHODS
=over 4
-=item default_options
+=item B<default_options>
Returns the default options for the plugin.
-=item set_tag
+=item B<set_tag>
Save info from object back to ogg vorbis file using L<vorbiscomment>
-=item get_tag
+=item B<get_tag>
Get info for object from ogg vorbis header using Ogg::Vorbis::Header::PurePerl
-=item close
+=item B<close>
Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
-=item ogg
+=item B<ogg>
Returns the Ogg::Vorbis::Header::PurePerl object.
=back
=head1 OPTIONS
=over 4
-=item vorbiscomment
+=item B<vorbiscomment>
The full path to the vorbiscomment program. Defaults to just "vorbiscomment", which assumes that vorbiscomment is in your path.
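For example, one plausible way to point the plugin at a specific binary (a sketch; it assumes the option hash passed to Music::Tag->new reaches the plugin, as the quiet and ANSIColor options in t/options.conf do):
    my $info = Music::Tag->new($filename,
        { vorbiscomment => "/usr/local/bin/vorbiscomment" }, "OGG");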
=back
=head1 BUGS
No known additional bugs provided by this Module
=head1 SEE ALSO
L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<Music::Tag::Amazon>, L<Music::Tag::File>, L<Music::Tag::FLAC>, L<Music::Tag::Lyrics>,
L<Music::Tag::M4A>, L<Music::Tag::MP3>, L<Music::Tag::MusicBrainz>, L<Music::Tag::Option>
=head1 AUTHOR
Edward Allen III <ealleniii _at_ cpan _dot_ org>
=head1 LICENSE
This program is free software; you can redistribute it and/or
modify it under the terms of the Artistic License, distributed
with Perl.
=head1 COPYRIGHT
Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
=cut
# vim: tabstop=4
|
riemann42/Music-Tag-OGG
|
e1bcc9378813143fdf28df204e9456219c6eb35b
|
Version: 0.3
|
diff --git a/Changes b/Changes
new file mode 100644
index 0000000..9d5756b
--- /dev/null
+++ b/Changes
@@ -0,0 +1,49 @@
+Release Name: 0.30
+===========================
+* Kwalitee and pod improvements
+
+Release Name: 0.29
+===========================
+* Fixed typo in synopsis (OGG was ogg)
+* Now requires Music::Tag .29
+
+Release Name: 0.28
+===========================
+* Split off from Music::Tag distribution
+
+Release Name: 0.27
+============================
+* More documentation and tested POD.
+* datamethods method now can be used to add new datamethods
+* Added test for MusicBrainz and Amazon plugins
+* Revised releasedate and recorddate internal storage to store as releasetime
+ and recordtime -- with full timestamps.
+* Added releasetime, recordtime, releaseepoch, and recordepoch datamethods.
+* Support for TIME ID3v2 tag.
+* After much thought, replaced Ogg::Vorbis::Header with
+ Ogg::Vorbis::Header::PurePerl and added vorbiscomment to write tags.
+* Revised OGG and FLAC plugins to clean up code (much slicker now).
+
+Release Name: 0.26
+============================
+* Removed several prerequisites that weren't used
+* Fixed error in README about prerequisite
+
+Release Name: 0.25
+============================
+* Support many more tags for flac, ogg, and m4a
+* Removed autotag safetag quicktag musictag musicsort musicinfo scripts.
+ All is done by musictag now.
+* Added tests for some plugins. More to do!
+* Bug Fixes
+* Documentation improvements
+* Added preset option for musictag
+
+Release Name: 0.24
+============================
+* Bug Fixes
+* Revised MP3 Tags to read Picard tags
+
+Release Name: 0.23
+============================
+* Initial Public Release
diff --git a/MANIFEST b/MANIFEST
new file mode 100644
index 0000000..1b899c6
--- /dev/null
+++ b/MANIFEST
@@ -0,0 +1,11 @@
+Changes
+lib/Music/Tag/OGG.pm
+Makefile.PL
+MANIFEST This list of files
+README
+t/elise.ogg
+t/1-ogg.t
+t/options.conf
+t/2-pod.t
+t/3-pod-coverage.t
+META.yml Module meta-data (added by MakeMaker)
diff --git a/META.yml b/META.yml
new file mode 100644
index 0000000..6c3b17f
--- /dev/null
+++ b/META.yml
@@ -0,0 +1,15 @@
+--- #YAML:1.0
+name: Music-Tag-OGG
+version: 0.3
+abstract: Plugin module for Music::Tag to get information from ogg-vorbis headers.
+license: perl
+author:
+ - Edward Allen (ealleniii _at_ cpan _dot_ org)
+generated_by: ExtUtils::MakeMaker version 6.42
+distribution_type: module
+requires:
+ MP3::Tag: 0.29
+ Ogg::Vorbis::Header::PurePerl: 0.07
+meta-spec:
+ url: http://module-build.sourceforge.net/META-spec-v1.3.html
+ version: 1.3
diff --git a/Makefile.PL b/Makefile.PL
new file mode 100644
index 0000000..4ebe48f
--- /dev/null
+++ b/Makefile.PL
@@ -0,0 +1,10 @@
+use ExtUtils::MakeMaker;
+WriteMakefile( NAME => "Music::Tag::OGG",
+ VERSION_FROM => "lib/Music/Tag/OGG.pm",
+ ABSTRACT_FROM => "lib/Music/Tag/OGG.pm",
+ AUTHOR => 'Edward Allen (ealleniii _at_ cpan _dot_ org)',
+ LICENSE => 'perl',
+ PREREQ_PM => { 'MP3::Tag' => 0.29,
+ 'Ogg::Vorbis::Header::PurePerl' => 0.07,
+ },
+ );
diff --git a/README b/README
new file mode 100644
index 0000000..9578dfa
--- /dev/null
+++ b/README
@@ -0,0 +1,45 @@
+Music::Tag::OGG version 0.28
+==============================
+
+Music::Tag::OGG gathers info from the OGG header. Uses Ogg::Vorbis::Header::PurePerl
+
+Note: As of version 0.28, Music-Tag is distributed as separate packages.
+
+INSTALLATION
+
+To install this module type the following:
+
+ perl Makefile.PL
+ make
+ make test
+ make install
+
+DEPENDENCIES
+
+This module requires these other modules and libraries:
+
+ Music::Tag
+ Ogg::Vorbis::Header::PurePerl
+
+NOTE ON WRITE SUPPORT
+
+I have had trouble with Ogg::Vorbis::Header. As such, I have stopped using it.
+I now use Ogg::Vorbis::Header::PurePerl. This module is also buggy, and doesn't
+have write support. To overcome this limitation, for now, I am using the
+vorbiscomment program that is part of the vorbis-tools package from xiph.org.
+
+I am planning on adding write support to Ogg::Vorbis::Header::PurePerl someday.
+
+TEST FILES
+
+Are based on the sample file for Audio::M4P. For testing only.
+
+COPYRIGHT AND LICENCE
+
+Copyright (C) 2007 Edward J. Allen III
+ealleniii _at_ cpan _dot_ org
+
+This library is free software; you can redistribute it and/or modify
+it under the same terms as Perl itself, either Perl version 5.8.7 or,
+at your option, any later version of Perl 5 you may have available.
+
diff --git a/lib/Music/Tag/OGG.pm b/lib/Music/Tag/OGG.pm
new file mode 100644
index 0000000..149cd45
--- /dev/null
+++ b/lib/Music/Tag/OGG.pm
@@ -0,0 +1,215 @@
+package Music::Tag::OGG;
+our $VERSION = 0.30;
+
+# Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
+#
+## This program is free software; you can redistribute it and/or
+## modify it under the terms of the Artistic License, distributed
+## with Perl.
+#
+
+=pod
+
+=head1 NAME
+
+Music::Tag::OGG - Plugin module for Music::Tag to get information from ogg-vorbis headers.
+
+=head1 SYNOPSIS
+
+ use Music::Tag
+
+ my $filename = "/var/lib/music/artist/album/track.ogg";
+
+ my $info = Music::Tag->new($filename, { quiet => 1 }, "OGG");
+
+ $info->get_info();
+
+ print "Artist is ", $info->artist;
+
+=head1 DESCRIPTION
+
+Music::Tag::OGG is used to read ogg-vorbis header information. It uses Ogg::Vorbis::Header::PurePerl. I have gone back and forth with using this
+and Ogg::Vorbis::Header. Finally I have settled on Ogg::Vorbis::Header::PurePerl, because the autoload for Ogg::Vorbis::Header was a pain to work with.
+
+To write Ogg::Vorbis headers I use the program vorbiscomment. It looks for this in the path, or in the option variable "vorbiscomment." This tool
+is available from L<http://www.xiph.org/> as part of the vorbis-tools distribution.
+
+=head1 REQUIRED VALUES
+
+No values are required (except filename, which is usually provided on object creation).
+
+=head1 SET VALUES
+
+=over 4
+
+=item title, track, totaltracks, artist, album, comment, releasedate, genre, disc, label
+
+Uses standard tags for these
+
+=item asin
+
+Uses custom tag "ASIN" for this
+
+=item mb_artistid, mb_albumid, mb_trackid, mip_puid, countrycode, albumartist
+
+Uses MusicBrainz recommended tags for these.
+
+
+=cut
+use strict;
+use Ogg::Vorbis::Header::PurePerl;
+
+our %tagmap = (
+ TITLE => 'title',
+ TRACKNUMBER => 'track',
+ TRACKTOTAL => 'totaltracks',
+ ARTIST => 'artist',
+ ALBUM => 'album',
+ COMMENT => 'comment',
+ DATE => 'releasedate',
+ GENRE => 'genre',
+ DISC => 'disc',
+ LABEL => 'label',
+ ASIN => 'asin',
+ MUSICBRAINZ_ARTISTID => 'mb_artistid',
+ MUSICBRAINZ_ALBUMID => 'mb_albumid',
+ MUSICBRAINZ_TRACKID => 'mb_trackid',
+ MUSICBRAINZ_SORTNAME => 'sortname',
+ RELEASECOUNTRY => 'countrycode',
+ MUSICIP_PUID => 'mip_puid',
+ MUSICBRAINZ_ALBUMARTIST => 'albumartist'
+);
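+# Example (a sketch): get_tag below uses this map to turn a raw Vorbis
+# comment name into the matching Music::Tag accessor method, e.g.:
+#
+#   my $method = $tagmap{'ARTIST'};     # yields 'artist'
+#   $self->info->$method('Beethoven');  # same as $self->info->artist('Beethoven')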
+
+sub default_options {
+ { vorbiscomment => "vorbiscomment" }
+}
+
+our @ISA = qw(Music::Tag::Generic);
+
+sub ogg {
+ my $self = shift;
+ unless ((exists $self->{_OGG}) && (ref $self->{_OGG})) {
+ if ($self->info->filename) {
+ $self->{_OGG} = Ogg::Vorbis::Header::PurePerl->new($self->info->filename);
+ $self->{_OGG}->load();
+
+ }
+ else {
+ return undef;
+ }
+ }
+ return $self->{_OGG};
+}
+
+sub get_tag {
+ my $self = shift;
+ if ( $self->ogg ) {
+ foreach ($self->ogg->comment_tags) {
+ my $comment = uc($_);
+ if (exists $tagmap{$comment}) {
+ my $method = $tagmap{$comment};
+ $self->info->$method($self->ogg->comment($comment));
+ }
+ else {
+ $self->status("Unknown comment: $comment");
+ }
+ }
+ $self->info->secs( $self->ogg->info->{"length"});
+ $self->info->bitrate( $self->ogg->info->{"bitrate_nominal"});
+ $self->info->frequency( $self->ogg->info->{"rate"});
+ }
+ else {
+ print STDERR "No ogg object created\n";
+ }
+ return $self;
+}
+
+
+sub set_tag {
+ my $self = shift;
+ unless (open(COMMENT, "|-", $self->options->{vorbiscomment} ." -w ". "\"". $self->info->filename . "\"")) {
+ $self->status("Failed to open ", $self->options->{vorbiscomment}, ". Not writing tag.\n");
+ return undef;
+ }
+ while (my ($t, $m) = each %tagmap) {
+ if (defined $self->info->$m) {
+ print COMMENT $t, "=", $self->info->$m, "\n";
+ }
+ }
+ close (COMMENT);
+ return $self;
+}
+
+sub close {
+ my $self = shift;
+ $self->{_OGG} = undef;
+}
+
+1;
+
+=back
+
+=head1 METHODS
+
+=over 4
+
+=item default_options
+
+Returns the default options for the plugin.
+
+=item set_tag
+
+Save info from object back to ogg vorbis file using L<vorbiscomment>
+
+=item get_tag
+
+Get info for object from ogg vorbis header using Ogg::Vorbis::Header::PurePerl
+
+=item close
+
+Close the file and destroy the Ogg::Vorbis::Header::PurePerl object.
+
+=item ogg
+
+Returns the Ogg::Vorbis::Header::PurePerl object.
+
+=back
+
+
+=head1 OPTIONS
+
+=over 4
+
+=item vorbiscomment
+
+The full path to the vorbiscomment program. Defaults to just "vorbiscomment", which assumes that vorbiscomment is in your path.
+
+=back
+
+=head1 BUGS
+
+No known additional bugs provided by this Module
+
+=head1 SEE ALSO
+
+L<Ogg::Vorbis::Header::PurePerl>, L<Music::Tag>, L<Music::Tag::Amazon>, L<Music::Tag::File>, L<Music::Tag::FLAC>, L<Music::Tag::Lyrics>,
+L<Music::Tag::M4A>, L<Music::Tag::MP3>, L<Music::Tag::MusicBrainz>, L<Music::Tag::Option>
+
+=head1 AUTHOR
+
+Edward Allen III <ealleniii _at_ cpan _dot_ org>
+
+=head1 LICENSE
+
+This program is free software; you can redistribute it and/or
+modify it under the terms of the Artistic License, distributed
+with Perl.
+
+=head1 COPYRIGHT
+
+Copyright (c) 2007,2008 Edward Allen III. Some rights reserved.
+
+=cut
+
+
+# vim: tabstop=4
diff --git a/t/1-ogg.t b/t/1-ogg.t
new file mode 100644
index 0000000..38b8805
--- /dev/null
+++ b/t/1-ogg.t
@@ -0,0 +1,50 @@
+#!/usr/bin/perl -w
+use strict;
+
+use Test::More tests => 15;
+use File::Copy;
+use 5.006;
+
+BEGIN { use_ok('Music::Tag') }
+
+our $options = {};
+
+# Adds 13 tests for each run of this
+sub filetest {
+ my $file = shift;
+ my $filetest = shift;
+ my $testoptions = shift;
+ SKIP: {
+    skip "File: $file does not exist", 13 unless ( -f $file );
+ return unless ( -f $file );
+ copy( $file, $filetest );
+ my $tag = Music::Tag->new( $filetest, $testoptions );
+ ok( $tag, 'Object created: ' . $filetest );
+ die unless $tag;
+ ok( $tag->get_tag, 'get_tag called: ' . $filetest );
+ ok( $tag->isa('Music::Tag'), 'Correct Class: ' . $filetest );
+ is( $tag->artist, "Beethoven", 'Artist: ' . $filetest );
+ is( $tag->album, "GPL", 'Album: ' . $filetest );
+ is( $tag->title, "Elise", 'Title: ' . $filetest );
+ ok( $tag->title("Elise Test"), 'Set new title: ' . $filetest );
+ ok( $tag->set_tag, 'set_tag: ' . $filetest );
+ $tag->close();
+ $tag = undef;
+ my $tag2 = Music::Tag->new( $filetest, $testoptions);
+ ok( $tag2, 'Object created again: ' . $filetest );
+ die unless $tag2;
+ ok( $tag2->get_tag, 'get_tag called: ' . $filetest );
+ #TODO: {
+ # local $TODO = "Write support is buggy for ogg";
+ is( $tag2->title, "Elise Test", 'New Title: ' . $filetest );
+ #}
+ ok( $tag2->title("Elise"), 'Reset title: ' . $filetest );
+ ok( $tag2->set_tag, 'set_tag again: ' . $filetest );
+ $tag2->close();
+ unlink($filetest);
+ }
+}
+
+ok( Music::Tag->LoadOptions("t/options.conf"), "Loading options file.\n" );
+filetest( "t/elise.ogg", "t/elisetest.ogg" );
+
diff --git a/t/2-pod.t b/t/2-pod.t
new file mode 100644
index 0000000..a2cf449
--- /dev/null
+++ b/t/2-pod.t
@@ -0,0 +1,6 @@
+#!/usr/bin/perl -w
+use strict;
+use Test::More;
+eval "use Test::Pod 1.00";
+plan skip_all => "Test::Pod 1.00 required for testing POD" if $@;
+all_pod_files_ok();
diff --git a/t/3-pod-coverage.t b/t/3-pod-coverage.t
new file mode 100644
index 0000000..868d3e5
--- /dev/null
+++ b/t/3-pod-coverage.t
@@ -0,0 +1,7 @@
+#!/usr/bin/perl
+
+use strict;
+use Test::More;
+eval "use Test::Pod::Coverage 1.00";
+plan skip_all => "Test::Pod::Coverage 1.00 required for testing POD coverage" if $@;
+all_pod_coverage_ok();
diff --git a/t/elise.ogg b/t/elise.ogg
new file mode 100644
index 0000000..bd4763e
Binary files /dev/null and b/t/elise.ogg differ
diff --git a/t/options.conf b/t/options.conf
new file mode 100644
index 0000000..86a1a83
--- /dev/null
+++ b/t/options.conf
@@ -0,0 +1 @@
+{ quiet => 1, ANSIColor => 0 }
|
iainmullan/greasemonkey
|
b02102d0bc0acafc1eacef98d3a903d8ece294d2
|
Adding Goodreads library search script
|
diff --git a/goodreads_library_search.user.js b/goodreads_library_search.user.js
new file mode 100644
index 0000000..664151e
--- /dev/null
+++ b/goodreads_library_search.user.js
@@ -0,0 +1,46 @@
+// Goodreads Local Library Search
+// Copyright (c) 2009, Iain Mullan
+//
+// --------------------------------------------------------------------
+//
+// This is a Greasemonkey user script.
+//
+// To install, you need Greasemonkey: http://greasemonkey.mozdev.org/
+// Then restart Firefox and revisit this script.
+// Under Tools, there will be a new menu item to "Install User Script".
+// Accept the default configuration and install.
+//
+// --------------------------------------------------------------------
+//
+// RELEASE NOTES
+// 0.1 Adds a link to the 'find at' section on a book page to search Boroondara library by Title
+//
+// ==UserScript==
+// @name Goodreads Local Library Search
+// @namespace http://ebotunes.com
+// @include http://www.goodreads.com/book/show/*
+// ==/UserScript==
+
+
+var base_url = 'http://boroondara.spydus.com/cgi-bin/spydus.exe/ENQ/OPAC/BIBENQ?ENTRY_NAME=TI&ENTRY=';
+
+var title = document.getElementById('bookPageTitle').innerHTML;
+
+var linksDiv = document.getElementById('affiliateLinks');
+
+var full = document.createElement('span');
+full.setAttribute('style', 'font-weight:bold');
+
+var txtNode = document.createTextNode("Your Library...");
+var link = document.createElement('a');
+link.setAttribute('href', base_url+title);
+link.setAttribute('target', '_blank');
+link.appendChild(txtNode);
+
+
+full.appendChild(document.createTextNode(' '));
+full.appendChild(link);
+full.appendChild(document.createTextNode(' | '));
+
+linksDiv.insertBefore(full,linksDiv.childNodes[3]);
+
|
iainmullan/greasemonkey
|
bc028fea3f10bee65576489b6a97b1a3f573c675
|
Adding flickr map monkey script
|
diff --git a/flickr_map_monkey.user.js b/flickr_map_monkey.user.js
new file mode 100644
index 0000000..9712e91
--- /dev/null
+++ b/flickr_map_monkey.user.js
@@ -0,0 +1,64 @@
+// Flickr Map Monkey
+// Copyright (c) 2008, Iain Mullan
+//
+// --------------------------------------------------------------------
+//
+// This is a Greasemonkey user script.
+//
+// To install, you need Greasemonkey: http://greasemonkey.mozdev.org/
+// Then restart Firefox and revisit this script.
+// Under Tools, there will be a new menu item to "Install User Script".
+// Accept the default configuration and install.
+//
+// --------------------------------------------------------------------
+//
+// RELEASE NOTES
+// VERSION 0.4 - Added 'sensor=false' parameter to the image URL, appears to be invalid without it.
+// VERSION 0.3 - Slight change needed due to Flickr HTML changes. Name of geo element info is now 'div_taken_in'
+// VERSION 0.2 - The image is now a clickable link to Google Maps at the given location. Flickr's original "Taken In ... " text is preserved, and the map image is displayed below it.
+// VERSION 0.1 - Displays a static image with marker, replacing the "Taken In ... " text.
+// --------------------------------------------------------------------
+// ==UserScript==
+// @name Flickr Map Monkey
+// @description Display a Google Map in the Additional Information section of a Flickr photo page, if location info is available.
+// @namespace http://ebotunes.com/
+// @include http://*flickr.com/photos/*
+// ==/UserScript==
+
+function getMeta(mn){
+ var m = document.getElementsByTagName('meta');
+ for(var i in m){
+ if(m[i].name == mn){
+ return m[i].content;
+ }
+ }
+}
+
+
+
+var locSection = document.getElementById('div_taken_in');
+
+var coord = getMeta('ICBM');
+
+//alert ('Hello ebo - live editing!'+coord);
+if (coord==undefined) {
+
+} else {
+
+
+var GMAP_API_KEY = 'ABQIAAAAv6RGMPEOgkA7IasZt4WVCxTbFI-KAjwZMobsSMrlqEZg0iKTIhSEhtWTRAVWuBBFRJzrgHNNzVByRA';
+
+
+var coords = coord.split(' ');
+var lat = coords[0];
+var long = coords[1];
+var ll = lat+long;
+
+var linkURL = 'http://maps.google.co.uk/maps?z=14&ll='+ll;
+
+var imgUrl = 'http://maps.google.com/staticmap?center='+ll+'&markers='+ll+'&zoom=14&size=150x150&key='+GMAP_API_KEY+'&sensor=false';
+
+locSection.innerHTML += '<a href="'+linkURL+'"><img src="'+imgUrl+'" /></a>';
+
+}
+
|
iainmullan/greasemonkey
|
79723b78369d3211c74099579b26dd19da60bf51
|
Adding lastfm flickr search script
|
diff --git a/lastfm_flickr_search.user.js b/lastfm_flickr_search.user.js
new file mode 100644
index 0000000..902f78a
--- /dev/null
+++ b/lastfm_flickr_search.user.js
@@ -0,0 +1,40 @@
+// ==UserScript==
+// @name Last.FM Events - Search My Flickr
+// @namespace http://www.ebotunes.com
+// @description On Last.FM event pages, adds a link to your Flickr archive for the day of the event.
+// @include http://www.last.fm/event/*
+// ==/UserScript==
+
+var input = document.getElementById('machineTag');
+
+var div = input.parentNode.parentNode;
+
+var dtstarts = document.getElementsByClassName('dtstart');
+
+var dtstart = dtstarts[0];
+
+var ts = dtstart.title;
+
+var y = ts.slice(0,4);
+var m = ts.slice(4,6);
+var d = ts.slice(6,8);
+
+var date = y+'/'+m+'/'+d;
+
+var linkText = document.createTextNode('Search my Flickr archive for that day...');
+
+var url = 'http://www.flickr.com/photos/me/archives/date-taken/'+date;
+
+var para = document.createElement('p');
+para.style.marginTop = '10px';
+
+var link = document.createElement('a');
+
+link.href = url;
+link.target = '_blank';
+link.appendChild(linkText);
+
+para.appendChild(link);
+
+div.appendChild(para);
+
|
lizconlan/textmate-settings
|
eb47bca77b8d3065a453a2ea8a37adc121057726
|
large print file drawer
|
diff --git a/English.lproj/Project.nib/designable.nib b/English.lproj/Project.nib/designable.nib
new file mode 100644
index 0000000..07f1c93
--- /dev/null
+++ b/English.lproj/Project.nib/designable.nib
@@ -0,0 +1,2389 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<archive type="com.apple.InterfaceBuilder3.Cocoa.XIB" version="7.10">
+ <data>
+ <int key="IBDocument.SystemTarget">1060</int>
+ <string key="IBDocument.SystemVersion">10D573</string>
+ <string key="IBDocument.InterfaceBuilderVersion">762</string>
+ <string key="IBDocument.AppKitVersion">1038.29</string>
+ <string key="IBDocument.HIToolboxVersion">460.00</string>
+ <object class="NSMutableDictionary" key="IBDocument.PluginVersions">
+ <string key="NS.key.0">com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string key="NS.object.0">762</string>
+ </object>
+ <object class="NSMutableArray" key="IBDocument.EditedObjectIDs">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <integer value="6"/>
+ <integer value="77"/>
+ <integer value="92"/>
+ <integer value="70"/>
+ </object>
+ <object class="NSArray" key="IBDocument.PluginDependencies">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ </object>
+ <object class="NSMutableDictionary" key="IBDocument.Metadata">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSArray" key="dict.sortedKeys" id="0">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ </object>
+ <object class="NSMutableArray" key="dict.values">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ </object>
+ </object>
+ <object class="NSMutableArray" key="IBDocument.RootObjects" id="681999512">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSCustomObject" id="89583259">
+ <string key="NSClassName">OakProjectController</string>
+ </object>
+ <object class="NSCustomObject" id="254447271">
+ <string key="NSClassName">FirstResponder</string>
+ </object>
+ <object class="NSCustomObject" id="396787127">
+ <string key="NSClassName">NSApplication</string>
+ </object>
+ <object class="NSWindowTemplate" id="655332567">
+ <int key="NSWindowStyleMask">15</int>
+ <int key="NSWindowBacking">2</int>
+ <string key="NSWindowRect">{{547, 252}, {338, 372}}</string>
+ <int key="NSWTFlags">1886912512</int>
+ <string key="NSWindowTitle">New Project</string>
+ <string key="NSWindowClass">OakWindow</string>
+ <object class="NSMutableString" key="NSViewClass">
+ <characters key="NS.bytes">View</characters>
+ </object>
+ <string key="NSWindowContentMaxSize">{3.40282e+38, 3.40282e+38}</string>
+ <string key="NSWindowContentMinSize">{213, 107}</string>
+ <object class="NSView" key="NSWindowView" id="966285757">
+ <reference key="NSNextResponder"/>
+ <int key="NSvFlags">256</int>
+ <object class="NSMutableArray" key="NSSubviews">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSScrollView" id="546062898">
+ <reference key="NSNextResponder" ref="966285757"/>
+ <int key="NSvFlags">274</int>
+ <object class="NSMutableArray" key="NSSubviews">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSClipView" id="413684113">
+ <reference key="NSNextResponder" ref="546062898"/>
+ <int key="NSvFlags">2304</int>
+ <object class="NSMutableArray" key="NSSubviews">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSCustomView" id="166343530">
+ <reference key="NSNextResponder" ref="413684113"/>
+ <int key="NSvFlags">274</int>
+ <string key="NSFrameSize">{323, 313}</string>
+ <reference key="NSSuperview" ref="413684113"/>
+ <string key="NSClassName">OakTextView</string>
+ <string key="NSExtension">NSView</string>
+ </object>
+ </object>
+ <string key="NSFrame">{{1, 1}, {323, 314}}</string>
+ <reference key="NSSuperview" ref="546062898"/>
+ <reference key="NSNextKeyView" ref="166343530"/>
+ <reference key="NSDocView" ref="166343530"/>
+ <object class="NSColor" key="NSBGColor" id="665133345">
+ <int key="NSColorSpace">6</int>
+ <string key="NSCatalogName">System</string>
+ <string key="NSColorName">controlColor</string>
+ <object class="NSColor" key="NSColor" id="964409186">
+ <int key="NSColorSpace">3</int>
+ <bytes key="NSWhite">MC42NjY2NjY2ODY1AA</bytes>
+ </object>
+ </object>
+ <int key="NScvFlags">4</int>
+ </object>
+ <object class="NSScroller" id="144064177">
+ <reference key="NSNextResponder" ref="546062898"/>
+ <int key="NSvFlags">256</int>
+ <string key="NSFrame">{{324, 1}, {15, 314}}</string>
+ <reference key="NSSuperview" ref="546062898"/>
+ <reference key="NSTarget" ref="546062898"/>
+ <string key="NSAction">_doScroller:</string>
+ <double key="NSCurValue">0.19760477542877197</double>
+ </object>
+ <object class="NSScroller" id="762080368">
+ <reference key="NSNextResponder" ref="546062898"/>
+ <int key="NSvFlags">256</int>
+ <string key="NSFrame">{{1, 315}, {323, 15}}</string>
+ <reference key="NSSuperview" ref="546062898"/>
+ <int key="NSsFlags">1</int>
+ <reference key="NSTarget" ref="546062898"/>
+ <string key="NSAction">_doScroller:</string>
+ <double key="NSCurValue">1</double>
+ </object>
+ </object>
+ <string key="NSFrame">{{-1, 0}, {340, 331}}</string>
+ <reference key="NSSuperview" ref="966285757"/>
+ <reference key="NSNextKeyView" ref="413684113"/>
+ <int key="NSsFlags">50</int>
+ <reference key="NSVScroller" ref="144064177"/>
+ <reference key="NSHScroller" ref="762080368"/>
+ <reference key="NSContentView" ref="413684113"/>
+ </object>
+ <object class="NSCustomView" id="18839554">
+ <reference key="NSNextResponder" ref="966285757"/>
+ <int key="NSvFlags">266</int>
+ <string key="NSFrame">{{4, 346}, {334, 26}}</string>
+ <reference key="NSSuperview" ref="966285757"/>
+ <string key="NSClassName">OakTabBarView</string>
+ <string key="NSExtension">NSView</string>
+ </object>
+ <object class="NSCustomView" id="457481926">
+ <reference key="NSNextResponder" ref="966285757"/>
+ <int key="NSvFlags">266</int>
+ <string key="NSFrame">{{0, 330}, {338, 16}}</string>
+ <reference key="NSSuperview" ref="966285757"/>
+ <string key="NSClassName">OakStatusBar</string>
+ <string key="NSExtension">NSView</string>
+ </object>
+ </object>
+ <string key="NSFrameSize">{338, 372}</string>
+ <reference key="NSSuperview"/>
+ </object>
+ <string key="NSScreenRect">{{0, 0}, {1600, 1002}}</string>
+ <string key="NSMinSize">{213, 129}</string>
+ <string key="NSMaxSize">{3.40282e+38, 3.40282e+38}</string>
+ </object>
+ <object class="NSWindowTemplate" id="1002098058">
+ <int key="NSWindowStyleMask">11</int>
+ <int key="NSWindowBacking">2</int>
+ <string key="NSWindowRect">{{235, 537}, {386, 187}}</string>
+ <int key="NSWTFlags">1886912512</int>
+ <string key="NSWindowTitle">New File Sheet</string>
+ <string key="NSWindowClass">NSPanel</string>
+ <object class="NSMutableString" key="NSViewClass">
+ <characters key="NS.bytes">View</characters>
+ </object>
+ <string key="NSWindowContentMaxSize">{3.40282e+38, 3.40282e+38}</string>
+ <string key="NSWindowContentMinSize">{360, 164}</string>
+ <object class="NSView" key="NSWindowView" id="829918786">
+ <reference key="NSNextResponder"/>
+ <int key="NSvFlags">256</int>
+ <object class="NSMutableArray" key="NSSubviews">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSButton" id="670848000">
+ <reference key="NSNextResponder" ref="829918786"/>
+ <int key="NSvFlags">257</int>
+ <string key="NSFrame">{{209, 12}, {82, 32}}</string>
+ <reference key="NSSuperview" ref="829918786"/>
+ <bool key="NSEnabled">YES</bool>
+ <object class="NSButtonCell" key="NSCell" id="218898697">
+ <int key="NSCellFlags">67239424</int>
+ <int key="NSCellFlags2">134217728</int>
+ <string key="NSContents">Cancel</string>
+ <object class="NSFont" key="NSSupport" id="1071851398">
+ <string key="NSName">LucidaGrande</string>
+ <double key="NSSize">13</double>
+ <int key="NSfFlags">1044</int>
+ </object>
+ <reference key="NSControlView" ref="670848000"/>
+ <int key="NSButtonFlags">-2038284033</int>
+ <int key="NSButtonFlags2">1</int>
+ <reference key="NSAlternateImage" ref="1071851398"/>
+ <string key="NSAlternateContents"/>
+ <string type="base64-UTF8" key="NSKeyEquivalent">Gw</string>
+ <int key="NSPeriodicDelay">200</int>
+ <int key="NSPeriodicInterval">25</int>
+ </object>
+ </object>
+ <object class="NSButton" id="929390742">
+ <reference key="NSNextResponder" ref="829918786"/>
+ <int key="NSvFlags">257</int>
+ <string key="NSFrame">{{291, 12}, {81, 32}}</string>
+ <reference key="NSSuperview" ref="829918786"/>
+ <int key="NSTag">1</int>
+ <bool key="NSEnabled">YES</bool>
+ <object class="NSButtonCell" key="NSCell" id="572664875">
+ <int key="NSCellFlags">67239424</int>
+ <int key="NSCellFlags2">134217728</int>
+ <string key="NSContents">Create</string>
+ <reference key="NSSupport" ref="1071851398"/>
+ <reference key="NSControlView" ref="929390742"/>
+ <int key="NSTag">1</int>
+ <int key="NSButtonFlags">-2038284033</int>
+ <int key="NSButtonFlags2">1</int>
+ <reference key="NSAlternateImage" ref="1071851398"/>
+ <string key="NSAlternateContents"/>
+ <string type="base64-UTF8" key="NSKeyEquivalent">DQ</string>
+ <int key="NSPeriodicDelay">200</int>
+ <int key="NSPeriodicInterval">25</int>
+ </object>
+ </object>
+ <object class="NSBox" id="980627405">
+ <reference key="NSNextResponder" ref="829918786"/>
+ <int key="NSvFlags">266</int>
+ <object class="NSMutableArray" key="NSSubviews">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSView" id="829833402">
+ <reference key="NSNextResponder" ref="980627405"/>
+ <int key="NSvFlags">256</int>
+ <object class="NSMutableArray" key="NSSubviews">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSTextField" id="665801832">
+ <reference key="NSNextResponder" ref="829833402"/>
+ <int key="NSvFlags">258</int>
+ <string key="NSFrame">{{71, 72}, {263, 22}}</string>
+ <reference key="NSSuperview" ref="829833402"/>
+ <bool key="NSEnabled">YES</bool>
+ <object class="NSTextFieldCell" key="NSCell" id="703549938">
+ <int key="NSCellFlags">-1804468671</int>
+ <int key="NSCellFlags2">4195328</int>
+ <string key="NSContents"/>
+ <reference key="NSSupport" ref="1071851398"/>
+ <reference key="NSControlView" ref="665801832"/>
+ <bool key="NSDrawsBackground">YES</bool>
+ <object class="NSColor" key="NSBackgroundColor" id="884289040">
+ <int key="NSColorSpace">6</int>
+ <string key="NSCatalogName">System</string>
+ <string key="NSColorName">textBackgroundColor</string>
+ <object class="NSColor" key="NSColor" id="293562186">
+ <int key="NSColorSpace">3</int>
+ <bytes key="NSWhite">MQA</bytes>
+ </object>
+ </object>
+ <object class="NSColor" key="NSTextColor" id="1002998119">
+ <int key="NSColorSpace">6</int>
+ <string key="NSCatalogName">System</string>
+ <string key="NSColorName">textColor</string>
+ <object class="NSColor" key="NSColor" id="594959115">
+ <int key="NSColorSpace">3</int>
+ <bytes key="NSWhite">MAA</bytes>
+ </object>
+ </object>
+ </object>
+ </object>
+ <object class="NSTextField" id="1004731108">
+ <reference key="NSNextResponder" ref="829833402"/>
+ <int key="NSvFlags">256</int>
+ <string key="NSFrame">{{11, 74}, {55, 13}}</string>
+ <reference key="NSSuperview" ref="829833402"/>
+ <bool key="NSEnabled">YES</bool>
+ <object class="NSTextFieldCell" key="NSCell" id="919680613">
+ <int key="NSCellFlags">67239424</int>
+ <int key="NSCellFlags2">4194304</int>
+ <string key="NSContents">File Name:</string>
+ <object class="NSFont" key="NSSupport" id="220929093">
+ <string key="NSName">LucidaGrande</string>
+ <double key="NSSize">10</double>
+ <int key="NSfFlags">2843</int>
+ </object>
+ <reference key="NSControlView" ref="1004731108"/>
+ <reference key="NSBackgroundColor" ref="665133345"/>
+ <object class="NSColor" key="NSTextColor" id="515319004">
+ <int key="NSColorSpace">6</int>
+ <string key="NSCatalogName">System</string>
+ <string key="NSColorName">controlTextColor</string>
+ <reference key="NSColor" ref="594959115"/>
+ </object>
+ </object>
+ </object>
+ <object class="NSTextField" id="980991664">
+ <reference key="NSNextResponder" ref="829833402"/>
+ <int key="NSvFlags">256</int>
+ <string key="NSFrame">{{17, 44}, {49, 13}}</string>
+ <reference key="NSSuperview" ref="829833402"/>
+ <bool key="NSEnabled">YES</bool>
+ <object class="NSTextFieldCell" key="NSCell" id="415625681">
+ <int key="NSCellFlags">67239424</int>
+ <int key="NSCellFlags2">4194304</int>
+ <string key="NSContents">Location:</string>
+ <reference key="NSSupport" ref="220929093"/>
+ <reference key="NSControlView" ref="980991664"/>
+ <reference key="NSBackgroundColor" ref="665133345"/>
+ <reference key="NSTextColor" ref="515319004"/>
+ </object>
+ </object>
+ <object class="NSTextField" id="705866245">
+ <reference key="NSNextResponder" ref="829833402"/>
+ <int key="NSvFlags">258</int>
+ <string key="NSFrame">{{71, 42}, {167, 22}}</string>
+ <reference key="NSSuperview" ref="829833402"/>
+ <bool key="NSEnabled">YES</bool>
+ <object class="NSTextFieldCell" key="NSCell" id="85028366">
+ <int key="NSCellFlags">-1804468671</int>
+ <int key="NSCellFlags2">4195328</int>
+ <string key="NSContents"/>
+ <reference key="NSSupport" ref="1071851398"/>
+ <reference key="NSControlView" ref="705866245"/>
+ <bool key="NSDrawsBackground">YES</bool>
+ <reference key="NSBackgroundColor" ref="884289040"/>
+ <reference key="NSTextColor" ref="1002998119"/>
+ </object>
+ </object>
+ <object class="NSButton" id="873633507">
+ <reference key="NSNextResponder" ref="829833402"/>
+ <int key="NSvFlags">257</int>
+ <string key="NSFrame">{{240, 36}, {100, 32}}</string>
+ <reference key="NSSuperview" ref="829833402"/>
+ <int key="NSTag">2</int>
+ <bool key="NSEnabled">YES</bool>
+ <object class="NSButtonCell" key="NSCell" id="591452043">
+ <int key="NSCellFlags">67239424</int>
+ <int key="NSCellFlags2">134217728</int>
+                        <string key="NSContents">Choose…</string>
+ <reference key="NSSupport" ref="1071851398"/>
+ <reference key="NSControlView" ref="873633507"/>
+ <int key="NSTag">2</int>
+ <int key="NSButtonFlags">-2038284033</int>
+ <int key="NSButtonFlags2">1</int>
+ <reference key="NSAlternateImage" ref="1071851398"/>
+ <string key="NSAlternateContents"/>
+ <object class="NSMutableString" key="NSKeyEquivalent">
+ <characters key="NS.bytes"/>
+ </object>
+ <int key="NSPeriodicDelay">200</int>
+ <int key="NSPeriodicInterval">25</int>
+ </object>
+ </object>
+ <object class="NSPopUpButton" id="817159019">
+ <reference key="NSNextResponder" ref="829833402"/>
+ <int key="NSvFlags">258</int>
+ <string key="NSFrame">{{68, 10}, {269, 26}}</string>
+ <reference key="NSSuperview" ref="829833402"/>
+ <bool key="NSEnabled">YES</bool>
+ <object class="NSPopUpButtonCell" key="NSCell" id="696153694">
+ <int key="NSCellFlags">-2076049856</int>
+ <int key="NSCellFlags2">1024</int>
+ <reference key="NSSupport" ref="1071851398"/>
+ <reference key="NSControlView" ref="817159019"/>
+ <int key="NSButtonFlags">109199615</int>
+ <int key="NSButtonFlags2">1</int>
+ <object class="NSFont" key="NSAlternateImage">
+ <string key="NSName">LucidaGrande</string>
+ <double key="NSSize">13</double>
+ <int key="NSfFlags">16</int>
+ </object>
+ <object class="NSMutableString" key="NSAlternateContents">
+ <characters key="NS.bytes"/>
+ </object>
+ <object class="NSMutableString" key="NSKeyEquivalent">
+ <characters key="NS.bytes"/>
+ </object>
+ <int key="NSPeriodicDelay">400</int>
+ <int key="NSPeriodicInterval">75</int>
+ <object class="NSMenuItem" key="NSMenuItem" id="970908715">
+ <reference key="NSMenu" ref="187615474"/>
+                            <string key="NSTitle">Select template…</string>
+ <string key="NSKeyEquiv"/>
+ <int key="NSKeyEquivModMask">1048576</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <int key="NSState">1</int>
+ <object class="NSCustomResource" key="NSOnImage" id="1023208147">
+ <string key="NSClassName">NSImage</string>
+ <string key="NSResourceName">NSMenuCheckmark</string>
+ </object>
+ <object class="NSCustomResource" key="NSMixedImage" id="117810018">
+ <string key="NSClassName">NSImage</string>
+ <string key="NSResourceName">NSMenuMixedState</string>
+ </object>
+ <string key="NSAction">_popUpItemAction:</string>
+ <reference key="NSTarget" ref="696153694"/>
+ </object>
+ <bool key="NSMenuItemRespectAlignment">YES</bool>
+ <object class="NSMenu" key="NSMenu" id="187615474">
+ <object class="NSMutableString" key="NSTitle">
+ <characters key="NS.bytes">OtherViews</characters>
+ </object>
+ <object class="NSMutableArray" key="NSMenuItems">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="970908715"/>
+ </object>
+ </object>
+ <int key="NSPreferredEdge">3</int>
+ <bool key="NSUsesItemFromMenu">YES</bool>
+ <bool key="NSAltersState">YES</bool>
+ <int key="NSArrowPosition">1</int>
+ </object>
+ </object>
+ <object class="NSTextField" id="389356410">
+ <reference key="NSNextResponder" ref="829833402"/>
+ <int key="NSvFlags">256</int>
+ <string key="NSFrame">{{13, 16}, {53, 13}}</string>
+ <reference key="NSSuperview" ref="829833402"/>
+ <bool key="NSEnabled">YES</bool>
+ <object class="NSTextFieldCell" key="NSCell" id="622076711">
+ <int key="NSCellFlags">67239424</int>
+ <int key="NSCellFlags2">4194304</int>
+ <string key="NSContents">Template:</string>
+ <reference key="NSSupport" ref="220929093"/>
+ <reference key="NSControlView" ref="389356410"/>
+ <reference key="NSBackgroundColor" ref="665133345"/>
+ <reference key="NSTextColor" ref="515319004"/>
+ </object>
+ </object>
+ </object>
+ <string key="NSFrame">{{2, 2}, {348, 109}}</string>
+ <reference key="NSSuperview" ref="980627405"/>
+ </object>
+ </object>
+ <string key="NSFrame">{{17, 56}, {352, 113}}</string>
+ <reference key="NSSuperview" ref="829918786"/>
+ <string key="NSOffsets">{0, 0}</string>
+ <object class="NSTextFieldCell" key="NSTitleCell">
+ <int key="NSCellFlags">67239424</int>
+ <int key="NSCellFlags2">0</int>
+ <string key="NSContents">Title</string>
+ <object class="NSFont" key="NSSupport" id="26">
+ <string key="NSName">LucidaGrande</string>
+ <double key="NSSize">11</double>
+ <int key="NSfFlags">3100</int>
+ </object>
+ <reference key="NSBackgroundColor" ref="884289040"/>
+ <object class="NSColor" key="NSTextColor">
+ <int key="NSColorSpace">3</int>
+ <bytes key="NSWhite">MCAwLjgwMDAwMDAxMTkAA</bytes>
+ </object>
+ </object>
+ <reference key="NSContentView" ref="829833402"/>
+ <int key="NSBorderType">3</int>
+ <int key="NSBoxType">0</int>
+ <int key="NSTitlePosition">0</int>
+ <bool key="NSTransparent">NO</bool>
+ </object>
+ </object>
+ <string key="NSFrameSize">{386, 187}</string>
+ <reference key="NSSuperview"/>
+ </object>
+ <string key="NSScreenRect">{{0, 0}, {1600, 1002}}</string>
+ <string key="NSMinSize">{360, 186}</string>
+ <string key="NSMaxSize">{3.40282e+38, 3.40282e+38}</string>
+ <string key="NSFrameAutosaveName">New File Sheet</string>
+ </object>
+ <object class="NSDrawer" id="455222251">
+ <nil key="NSNextResponder"/>
+ <string key="NSContentSize">{200, 350}</string>
+ <string key="NSMinContentSize">{114, 50}</string>
+ <string key="NSMaxContentSize">{600, 400}</string>
+ <int key="NSPreferredEdge">0</int>
+ <double key="NSLeadingOffset">0.0</double>
+ <double key="NSTrailingOffset">15</double>
+ <nil key="NSParentWindow"/>
+ <nil key="NSDelegate"/>
+ </object>
+ <object class="NSCustomView" id="935416310">
+ <reference key="NSNextResponder"/>
+ <int key="NSvFlags">256</int>
+ <object class="NSMutableArray" key="NSSubviews">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSScrollView" id="87090703">
+ <reference key="NSNextResponder" ref="935416310"/>
+ <int key="NSvFlags">274</int>
+ <object class="NSMutableArray" key="NSSubviews">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSClipView" id="1029793179">
+ <reference key="NSNextResponder" ref="87090703"/>
+ <int key="NSvFlags">2304</int>
+ <object class="NSMutableArray" key="NSSubviews">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSOutlineView" id="849170071">
+ <reference key="NSNextResponder" ref="1029793179"/>
+ <int key="NSvFlags">4352</int>
+ <string key="NSFrameSize">{163, 360}</string>
+ <reference key="NSSuperview" ref="1029793179"/>
+ <bool key="NSEnabled">YES</bool>
+ <object class="_NSCornerView" key="NSCornerView">
+ <nil key="NSNextResponder"/>
+ <int key="NSvFlags">256</int>
+ <string key="NSFrame">{{129, 0}, {16, 17}}</string>
+ </object>
+ <object class="NSMutableArray" key="NSTableColumns">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSTableColumn" id="43786524">
+ <string key="NSIdentifier">displayName</string>
+ <double key="NSWidth">160</double>
+ <double key="NSMinWidth">20</double>
+ <double key="NSMaxWidth">1000</double>
+ <object class="NSTableHeaderCell" key="NSHeaderCell">
+ <int key="NSCellFlags">75628096</int>
+ <int key="NSCellFlags2">2048</int>
+                                <string key="NSContents">Groups &amp; Files</string>
+ <reference key="NSSupport" ref="26"/>
+ <object class="NSColor" key="NSBackgroundColor">
+ <int key="NSColorSpace">3</int>
+ <bytes key="NSWhite">MC4zMzMzMzI5ODU2AA</bytes>
+ </object>
+ <object class="NSColor" key="NSTextColor">
+ <int key="NSColorSpace">6</int>
+ <string key="NSCatalogName">System</string>
+ <string key="NSColorName">headerTextColor</string>
+ <reference key="NSColor" ref="594959115"/>
+ </object>
+ </object>
+ <object class="NSTextFieldCell" key="NSDataCell" id="679889651">
+ <int key="NSCellFlags">338820672</int>
+ <int key="NSCellFlags2">1024</int>
+ <object class="NSFont" key="NSSupport">
+ <string key="NSName">LucidaGrande</string>
+ <double key="NSSize">12</double>
+ <int key="NSfFlags">16</int>
+ </object>
+ <reference key="NSControlView" ref="849170071"/>
+ <bool key="NSDrawsBackground">YES</bool>
+ <reference key="NSBackgroundColor" ref="293562186"/>
+ <reference key="NSTextColor" ref="515319004"/>
+ </object>
+ <int key="NSResizingMask">3</int>
+ <bool key="NSIsResizeable">YES</bool>
+ <reference key="NSTableView" ref="849170071"/>
+ </object>
+ </object>
+ <double key="NSIntercellSpacingWidth">3</double>
+ <double key="NSIntercellSpacingHeight">2</double>
+ <reference key="NSBackgroundColor" ref="293562186"/>
+ <object class="NSColor" key="NSGridColor">
+ <int key="NSColorSpace">6</int>
+ <string key="NSCatalogName">System</string>
+ <string key="NSColorName">gridColor</string>
+ <object class="NSColor" key="NSColor">
+ <int key="NSColorSpace">3</int>
+ <bytes key="NSWhite">MC41AA</bytes>
+ </object>
+ </object>
+ <double key="NSRowHeight">17</double>
+ <int key="NSTvFlags">-633307136</int>
+ <reference key="NSDelegate"/>
+ <reference key="NSDataSource"/>
+ <int key="NSColumnAutoresizingStyle">1</int>
+ <int key="NSDraggingSourceMaskForLocal">15</int>
+ <int key="NSDraggingSourceMaskForNonLocal">0</int>
+ <bool key="NSAllowsTypeSelect">YES</bool>
+ <int key="NSTableViewDraggingDestinationStyle">0</int>
+ </object>
+ </object>
+ <string key="NSFrame">{{1, 1}, {163, 360}}</string>
+ <reference key="NSSuperview" ref="87090703"/>
+ <reference key="NSNextKeyView" ref="849170071"/>
+ <reference key="NSDocView" ref="849170071"/>
+ <object class="NSColor" key="NSBGColor">
+ <int key="NSColorSpace">6</int>
+ <string key="NSCatalogName">System</string>
+ <string key="NSColorName">controlBackgroundColor</string>
+ <reference key="NSColor" ref="964409186"/>
+ </object>
+ <int key="NScvFlags">4</int>
+ </object>
+ <object class="NSScroller" id="863081078">
+ <reference key="NSNextResponder" ref="87090703"/>
+ <int key="NSvFlags">-2147483392</int>
+ <string key="NSFrame">{{-30, 1}, {15, 360}}</string>
+ <reference key="NSSuperview" ref="87090703"/>
+ <reference key="NSTarget" ref="87090703"/>
+ <string key="NSAction">_doScroller:</string>
+ <double key="NSPercent">0.97826087474822998</double>
+ </object>
+ <object class="NSScroller" id="460991953">
+ <reference key="NSNextResponder" ref="87090703"/>
+ <int key="NSvFlags">-2147483392</int>
+ <string key="NSFrame">{{-100, -100}, {128, 15}}</string>
+ <reference key="NSSuperview" ref="87090703"/>
+ <int key="NSsFlags">1</int>
+ <reference key="NSTarget" ref="87090703"/>
+ <string key="NSAction">_doScroller:</string>
+ <double key="NSPercent">0.8888888955116272</double>
+ </object>
+ </object>
+ <string key="NSFrame">{{0, 27}, {165, 362}}</string>
+ <reference key="NSSuperview" ref="935416310"/>
+ <reference key="NSNextKeyView" ref="1029793179"/>
+ <int key="NSsFlags">530</int>
+ <reference key="NSVScroller" ref="863081078"/>
+ <reference key="NSHScroller" ref="460991953"/>
+ <reference key="NSContentView" ref="1029793179"/>
+ <bytes key="NSScrollAmts">QSAAAEEgAABBmAAAQZgAAA</bytes>
+ </object>
+ <object class="NSButton" id="871294766">
+ <reference key="NSNextResponder" ref="935416310"/>
+ <int key="NSvFlags">292</int>
+ <string key="NSFrame">{{0, -1}, {23, 22}}</string>
+ <reference key="NSSuperview" ref="935416310"/>
+ <bool key="NSEnabled">YES</bool>
+ <object class="NSButtonCell" key="NSCell" id="370215819">
+ <int key="NSCellFlags">67239424</int>
+ <int key="NSCellFlags2">134217728</int>
+ <string key="NSContents"/>
+ <object class="NSFont" key="NSSupport" id="24">
+ <string key="NSName">LucidaGrande</string>
+ <double key="NSSize">10</double>
+ <int key="NSfFlags">16</int>
+ </object>
+ <reference key="NSControlView" ref="871294766"/>
+ <int key="NSButtonFlags">138674431</int>
+ <int key="NSButtonFlags2">2</int>
+ <object class="NSCustomResource" key="NSNormalImage">
+ <string key="NSClassName">NSImage</string>
+ <string key="NSResourceName">AddNewFile</string>
+ </object>
+ <object class="NSCustomResource" key="NSAlternateImage">
+ <string key="NSClassName">NSImage</string>
+ <string key="NSResourceName">AddNewFilePressed</string>
+ </object>
+ <string key="NSAlternateContents"/>
+ <string key="NSKeyEquivalent"/>
+ <int key="NSPeriodicDelay">400</int>
+ <int key="NSPeriodicInterval">75</int>
+ </object>
+ </object>
+ <object class="NSButton" id="619172655">
+ <reference key="NSNextResponder" ref="935416310"/>
+ <int key="NSvFlags">292</int>
+ <string key="NSFrame">{{52, -1}, {28, 22}}</string>
+ <reference key="NSSuperview" ref="935416310"/>
+ <bool key="NSEnabled">YES</bool>
+ <object class="NSButtonCell" key="NSCell" id="662956182">
+ <int key="NSCellFlags">67239424</int>
+ <int key="NSCellFlags2">134217728</int>
+ <string key="NSContents"/>
+ <reference key="NSSupport" ref="24"/>
+ <reference key="NSControlView" ref="619172655"/>
+ <int key="NSButtonFlags">138674431</int>
+ <int key="NSButtonFlags2">2</int>
+ <object class="NSCustomResource" key="NSNormalImage">
+ <string key="NSClassName">NSImage</string>
+ <string key="NSResourceName">Action</string>
+ </object>
+ <object class="NSCustomResource" key="NSAlternateImage">
+ <string key="NSClassName">NSImage</string>
+ <string key="NSResourceName">ActionPressed</string>
+ </object>
+ <string key="NSAlternateContents"/>
+ <string key="NSKeyEquivalent"/>
+ <int key="NSPeriodicDelay">400</int>
+ <int key="NSPeriodicInterval">75</int>
+ </object>
+ </object>
+ <object class="NSButton" id="53051961">
+ <reference key="NSNextResponder" ref="935416310"/>
+ <int key="NSvFlags">292</int>
+ <string key="NSFrame">{{26, -1}, {23, 22}}</string>
+ <reference key="NSSuperview" ref="935416310"/>
+ <bool key="NSEnabled">YES</bool>
+ <object class="NSButtonCell" key="NSCell" id="732940612">
+ <int key="NSCellFlags">67239424</int>
+ <int key="NSCellFlags2">134217728</int>
+ <string key="NSContents"/>
+ <reference key="NSSupport" ref="24"/>
+ <reference key="NSControlView" ref="53051961"/>
+ <int key="NSButtonFlags">138674431</int>
+ <int key="NSButtonFlags2">2</int>
+ <object class="NSCustomResource" key="NSNormalImage">
+ <string key="NSClassName">NSImage</string>
+ <string key="NSResourceName">AddGroup</string>
+ </object>
+ <object class="NSCustomResource" key="NSAlternateImage">
+ <string key="NSClassName">NSImage</string>
+ <string key="NSResourceName">AddGroupPressed</string>
+ </object>
+ <string key="NSAlternateContents"/>
+ <string key="NSKeyEquivalent"/>
+ <int key="NSPeriodicDelay">400</int>
+ <int key="NSPeriodicInterval">75</int>
+ </object>
+ </object>
+ <object class="NSButton" id="802282051">
+ <reference key="NSNextResponder" ref="935416310"/>
+ <int key="NSvFlags">289</int>
+ <string key="NSFrame">{{137, -1}, {22, 22}}</string>
+ <reference key="NSSuperview" ref="935416310"/>
+ <bool key="NSEnabled">YES</bool>
+ <object class="NSButtonCell" key="NSCell" id="98754047">
+ <int key="NSCellFlags">67239424</int>
+ <int key="NSCellFlags2">67108864</int>
+ <string key="NSContents"/>
+ <reference key="NSSupport" ref="24"/>
+ <reference key="NSControlView" ref="802282051"/>
+ <int key="NSButtonFlags">137101567</int>
+ <int key="NSButtonFlags2">268435458</int>
+ <object class="NSCustomResource" key="NSNormalImage">
+ <string key="NSClassName">NSImage</string>
+ <string key="NSResourceName">Info</string>
+ </object>
+ <object class="NSCustomResource" key="NSAlternateImage">
+ <string key="NSClassName">NSImage</string>
+ <string key="NSResourceName">InfoPressed</string>
+ </object>
+ <string key="NSAlternateContents">InfoDisabled</string>
+ <string key="NSKeyEquivalent">i</string>
+ <int key="NSPeriodicDelay">400</int>
+ <int key="NSPeriodicInterval">75</int>
+ </object>
+ </object>
+ </object>
+ <string key="NSFrameSize">{165, 385}</string>
+ <reference key="NSSuperview"/>
+ <object class="NSMutableString" key="NSClassName">
+ <characters key="NS.bytes">NSView</characters>
+ </object>
+ <string key="NSExtension">NSResponder</string>
+ </object>
+ <object class="NSMenu" id="648008078">
+ <string key="NSTitle">Menu</string>
+ <object class="NSMutableArray" key="NSMenuItems">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSMenuItem" id="991326378">
+ <reference key="NSMenu" ref="648008078"/>
+ <string key="NSTitle">Menu Title</string>
+ <string key="NSKeyEquiv"/>
+ <int key="NSKeyEquivModMask">1048576</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="824781535">
+ <reference key="NSMenu" ref="648008078"/>
+                <string key="NSTitle">New File…</string>
+ <string key="NSKeyEquiv">N</string>
+ <int key="NSKeyEquivModMask">1048576</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="595807440">
+ <reference key="NSMenu" ref="648008078"/>
+                <string key="NSTitle">Add Existing Files…</string>
+ <string key="NSKeyEquiv">A</string>
+ <int key="NSKeyEquivModMask">1048576</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="745513254">
+ <reference key="NSMenu" ref="648008078"/>
+                <string key="NSTitle">Rename…</string>
+ <string key="NSKeyEquiv"/>
+ <int key="NSKeyEquivModMask">1048576</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="642680144">
+ <reference key="NSMenu" ref="648008078"/>
+                <string key="NSTitle">Remove Selected Files…</string>
+ <string key="NSKeyEquiv"/>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="873517359">
+ <reference key="NSMenu" ref="648008078"/>
+ <bool key="NSIsDisabled">YES</bool>
+ <bool key="NSIsSeparator">YES</bool>
+ <string key="NSTitle"/>
+ <string key="NSKeyEquiv"/>
+ <int key="NSKeyEquivModMask">1048576</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="687297486">
+ <reference key="NSMenu" ref="648008078"/>
+ <string key="NSTitle">Open Selected File in New Window</string>
+ <string key="NSKeyEquiv"/>
+ <int key="NSKeyEquivModMask">1048576</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="432386971">
+ <reference key="NSMenu" ref="648008078"/>
+ <string key="NSTitle">Reveal Selected File in Finder</string>
+ <string key="NSKeyEquiv"/>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="911871238">
+ <reference key="NSMenu" ref="648008078"/>
+ <string key="NSTitle">Open Selected File with Finder</string>
+ <string key="NSKeyEquiv"/>
+ <int key="NSKeyEquivModMask">1048576</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="388438325">
+ <reference key="NSMenu" ref="648008078"/>
+                <string key="NSTitle">Show Information…</string>
+ <string key="NSKeyEquiv">i</string>
+ <int key="NSKeyEquivModMask">1048576</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="307204867">
+ <reference key="NSMenu" ref="648008078"/>
+ <bool key="NSIsDisabled">YES</bool>
+ <bool key="NSIsSeparator">YES</bool>
+ <string key="NSTitle"/>
+ <string key="NSKeyEquiv"/>
+ <int key="NSKeyEquivModMask">1048576</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="77632122">
+ <reference key="NSMenu" ref="648008078"/>
+ <string key="NSTitle">New Group</string>
+ <string key="NSKeyEquiv">g</string>
+ <int key="NSKeyEquivModMask">1310720</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="206454028">
+ <reference key="NSMenu" ref="648008078"/>
+ <string key="NSTitle">Group Selected Files</string>
+ <string key="NSKeyEquiv">g</string>
+ <int key="NSKeyEquivModMask">1572864</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="48220369">
+ <reference key="NSMenu" ref="648008078"/>
+ <bool key="NSIsDisabled">YES</bool>
+ <bool key="NSIsSeparator">YES</bool>
+ <string key="NSTitle"/>
+ <string key="NSKeyEquiv"/>
+ <int key="NSKeyEquivModMask">1048576</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ <object class="NSMenuItem" id="463217590">
+ <reference key="NSMenu" ref="648008078"/>
+ <string key="NSTitle">Treat File as Text/Binary</string>
+ <string key="NSKeyEquiv"/>
+ <int key="NSKeyEquivModMask">1048576</int>
+ <int key="NSMnemonicLoc">2147483647</int>
+ <reference key="NSOnImage" ref="1023208147"/>
+ <reference key="NSMixedImage" ref="117810018"/>
+ </object>
+ </object>
+ <string key="NSName"/>
+ </object>
+ </object>
+ <object class="IBObjectContainer" key="IBDocument.Objects">
+ <object class="NSMutableArray" key="connectionRecords">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">window</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="655332567"/>
+ </object>
+ <int key="connectionID">14</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">textView</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="166343530"/>
+ </object>
+ <int key="connectionID">16</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">initialFirstResponder</string>
+ <reference key="source" ref="655332567"/>
+ <reference key="destination" ref="166343530"/>
+ </object>
+ <int key="connectionID">17</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">newFileSheet</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="1002098058"/>
+ </object>
+ <int key="connectionID">26</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">performNewFileSheetAction:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="670848000"/>
+ </object>
+ <int key="connectionID">34</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">performNewFileSheetAction:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="929390742"/>
+ </object>
+ <int key="connectionID">35</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBBindingConnection" key="connection">
+ <string key="label">enabled: newFileSheetFilename</string>
+ <reference key="source" ref="929390742"/>
+ <reference key="destination" ref="89583259"/>
+ <object class="NSNibBindingConnector" key="connector">
+ <reference key="NSSource" ref="929390742"/>
+ <reference key="NSDestination" ref="89583259"/>
+ <string key="NSLabel">enabled: newFileSheetFilename</string>
+ <string key="NSBinding">enabled</string>
+ <string key="NSKeyPath">newFileSheetFilename</string>
+ <object class="NSDictionary" key="NSOptions">
+ <string key="NS.key.0">NSValueTransformerName</string>
+ <string key="NS.object.0">NSIsNotNil</string>
+ </object>
+ <int key="NSNibBindingConnectorVersion">2</int>
+ </object>
+ </object>
+ <int key="connectionID">39</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBBindingConnection" key="connection">
+ <string key="label">value: newFileSheetFilename</string>
+ <reference key="source" ref="665801832"/>
+ <reference key="destination" ref="89583259"/>
+ <object class="NSNibBindingConnector" key="connector">
+ <reference key="NSSource" ref="665801832"/>
+ <reference key="NSDestination" ref="89583259"/>
+ <string key="NSLabel">value: newFileSheetFilename</string>
+ <string key="NSBinding">value</string>
+ <string key="NSKeyPath">newFileSheetFilename</string>
+ <object class="NSDictionary" key="NSOptions">
+ <string key="NS.key.0">NSContinuouslyUpdatesValue</string>
+ <boolean value="YES" key="NS.object.0"/>
+ </object>
+ <int key="NSNibBindingConnectorVersion">2</int>
+ </object>
+ </object>
+ <int key="connectionID">40</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">delegate</string>
+ <reference key="source" ref="655332567"/>
+ <reference key="destination" ref="89583259"/>
+ </object>
+ <int key="connectionID">41</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">performNewFileSheetAction:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="873633507"/>
+ </object>
+ <int key="connectionID">45</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBBindingConnection" key="connection">
+ <string key="label">hidden: currentDocument</string>
+ <reference key="source" ref="546062898"/>
+ <reference key="destination" ref="89583259"/>
+ <object class="NSNibBindingConnector" key="connector">
+ <reference key="NSSource" ref="546062898"/>
+ <reference key="NSDestination" ref="89583259"/>
+ <string key="NSLabel">hidden: currentDocument</string>
+ <string key="NSBinding">hidden</string>
+ <string key="NSKeyPath">currentDocument</string>
+ <object class="NSDictionary" key="NSOptions">
+ <string key="NS.key.0">NSValueTransformerName</string>
+ <string key="NS.object.0">NSIsNil</string>
+ </object>
+ <int key="NSNibBindingConnectorVersion">2</int>
+ </object>
+ </object>
+ <int key="connectionID">52</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">delegate</string>
+ <reference key="source" ref="18839554"/>
+ <reference key="destination" ref="89583259"/>
+ </object>
+ <int key="connectionID">54</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">tabBarView</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="18839554"/>
+ </object>
+ <int key="connectionID">55</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">newFileTemplatePopupButton</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="817159019"/>
+ </object>
+ <int key="connectionID">68</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">outlineView</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="849170071"/>
+ </object>
+ <int key="connectionID">81</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">contentView</string>
+ <reference key="source" ref="455222251"/>
+ <reference key="destination" ref="935416310"/>
+ </object>
+ <int key="connectionID">82</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">parentWindow</string>
+ <reference key="source" ref="455222251"/>
+ <reference key="destination" ref="655332567"/>
+ </object>
+ <int key="connectionID">83</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">dataSource</string>
+ <reference key="source" ref="849170071"/>
+ <reference key="destination" ref="89583259"/>
+ </object>
+ <int key="connectionID">88</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">delegate</string>
+ <reference key="source" ref="849170071"/>
+ <reference key="destination" ref="89583259"/>
+ </object>
+ <int key="connectionID">89</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">groupsAndFilesDrawer</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="455222251"/>
+ </object>
+ <int key="connectionID">91</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">actionMenu</string>
+ <reference key="source" ref="619172655"/>
+ <reference key="destination" ref="648008078"/>
+ </object>
+ <int key="connectionID">100</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">singleClickItem:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="849170071"/>
+ </object>
+ <int key="connectionID">101</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">delegate</string>
+ <reference key="source" ref="619172655"/>
+ <reference key="destination" ref="89583259"/>
+ </object>
+ <int key="connectionID">144</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">outlineView</string>
+ <reference key="source" ref="619172655"/>
+ <reference key="destination" ref="849170071"/>
+ </object>
+ <int key="connectionID">145</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">projectRenameFile:</string>
+ <reference key="source" ref="619172655"/>
+ <reference key="destination" ref="745513254"/>
+ </object>
+ <int key="connectionID">146</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">nextKeyView</string>
+ <reference key="source" ref="546062898"/>
+ <reference key="destination" ref="849170071"/>
+ </object>
+ <int key="connectionID">151</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">nextKeyView</string>
+ <reference key="source" ref="935416310"/>
+ <reference key="destination" ref="166343530"/>
+ </object>
+ <int key="connectionID">152</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">menu</string>
+ <reference key="source" ref="849170071"/>
+ <reference key="destination" ref="648008078"/>
+ </object>
+ <int key="connectionID">156</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">projectGroupFiles:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="206454028"/>
+ </object>
+ <int key="connectionID">159</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">projectNewGroup:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="77632122"/>
+ </object>
+ <int key="connectionID">160</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">projectRemoveFiles:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="642680144"/>
+ </object>
+ <int key="connectionID">161</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">projectAddFiles:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="595807440"/>
+ </object>
+ <int key="connectionID">162</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">projectNewFile:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="824781535"/>
+ </object>
+ <int key="connectionID">163</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">revealFileInFinder:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="432386971"/>
+ </object>
+ <int key="connectionID">164</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">statusBar</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="457481926"/>
+ </object>
+ <int key="connectionID">166</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">projectNewGroup:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="53051961"/>
+ </object>
+ <int key="connectionID">169</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">projectNewFile:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="871294766"/>
+ </object>
+ <int key="connectionID">170</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBBindingConnection" key="connection">
+ <string key="label">enabled: canOpenInformationPanel</string>
+ <reference key="source" ref="802282051"/>
+ <reference key="destination" ref="89583259"/>
+ <object class="NSNibBindingConnector" key="connector">
+ <reference key="NSSource" ref="802282051"/>
+ <reference key="NSDestination" ref="89583259"/>
+ <string key="NSLabel">enabled: canOpenInformationPanel</string>
+ <string key="NSBinding">enabled</string>
+ <string key="NSKeyPath">canOpenInformationPanel</string>
+ <int key="NSNibBindingConnectorVersion">2</int>
+ </object>
+ </object>
+ <int key="connectionID">172</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">projectShowInformationPanel:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="802282051"/>
+ </object>
+ <int key="connectionID">173</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">nextKeyView</string>
+ <reference key="source" ref="849170071"/>
+ <reference key="destination" ref="871294766"/>
+ </object>
+ <int key="connectionID">174</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">nextKeyView</string>
+ <reference key="source" ref="871294766"/>
+ <reference key="destination" ref="53051961"/>
+ </object>
+ <int key="connectionID">175</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">nextKeyView</string>
+ <reference key="source" ref="53051961"/>
+ <reference key="destination" ref="619172655"/>
+ </object>
+ <int key="connectionID">176</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">nextKeyView</string>
+ <reference key="source" ref="619172655"/>
+ <reference key="destination" ref="802282051"/>
+ </object>
+ <int key="connectionID">177</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBOutletConnection" key="connection">
+ <string key="label">nextKeyView</string>
+ <reference key="source" ref="802282051"/>
+ <reference key="destination" ref="166343530"/>
+ </object>
+ <int key="connectionID">178</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBBindingConnection" key="connection">
+ <string key="label">value: newFileSheetDirectory</string>
+ <reference key="source" ref="705866245"/>
+ <reference key="destination" ref="89583259"/>
+ <object class="NSNibBindingConnector" key="connector">
+ <reference key="NSSource" ref="705866245"/>
+ <reference key="NSDestination" ref="89583259"/>
+ <string key="NSLabel">value: newFileSheetDirectory</string>
+ <string key="NSBinding">value</string>
+ <string key="NSKeyPath">newFileSheetDirectory</string>
+ <object class="NSDictionary" key="NSOptions">
+ <string key="NS.key.0">NSContinuouslyUpdatesValue</string>
+ <boolean value="YES" key="NS.object.0"/>
+ </object>
+ <int key="NSNibBindingConnectorVersion">2</int>
+ </object>
+ </object>
+ <int key="connectionID">185</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">openFileWithFinder:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="911871238"/>
+ </object>
+ <int key="connectionID">188</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">projectShowInformationPanel:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="388438325"/>
+ </object>
+ <int key="connectionID">190</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">toggleTreatFileAsText:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="463217590"/>
+ </object>
+ <int key="connectionID">193</int>
+ </object>
+ <object class="IBConnectionRecord">
+ <object class="IBActionConnection" key="connection">
+ <string key="label">openFileInNewWindow:</string>
+ <reference key="source" ref="89583259"/>
+ <reference key="destination" ref="687297486"/>
+ </object>
+ <int key="connectionID">195</int>
+ </object>
+ </object>
+ <object class="IBMutableOrderedSet" key="objectRecords">
+ <object class="NSArray" key="orderedObjects">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="IBObjectRecord">
+ <int key="objectID">0</int>
+ <reference key="object" ref="0"/>
+ <reference key="children" ref="681999512"/>
+ <nil key="parent"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">-2</int>
+ <reference key="object" ref="89583259"/>
+ <reference key="parent" ref="0"/>
+ <string key="objectName">File's Owner</string>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">-1</int>
+ <reference key="object" ref="254447271"/>
+ <reference key="parent" ref="0"/>
+ <string key="objectName">First Responder</string>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">6</int>
+ <reference key="object" ref="655332567"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="966285757"/>
+ </object>
+ <reference key="parent" ref="0"/>
+ <string key="objectName">ProjectWindow</string>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">5</int>
+ <reference key="object" ref="966285757"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="546062898"/>
+ <reference ref="18839554"/>
+ <reference ref="457481926"/>
+ </object>
+ <reference key="parent" ref="655332567"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">12</int>
+ <reference key="object" ref="546062898"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="166343530"/>
+ <reference ref="144064177"/>
+ <reference ref="762080368"/>
+ </object>
+ <reference key="parent" ref="966285757"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">11</int>
+ <reference key="object" ref="166343530"/>
+ <reference key="parent" ref="546062898"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">53</int>
+ <reference key="object" ref="18839554"/>
+ <reference key="parent" ref="966285757"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">165</int>
+ <reference key="object" ref="457481926"/>
+ <reference key="parent" ref="966285757"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">25</int>
+ <reference key="object" ref="1002098058"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="829918786"/>
+ </object>
+ <reference key="parent" ref="0"/>
+ <string key="objectName">NewFile</string>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">24</int>
+ <reference key="object" ref="829918786"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="670848000"/>
+ <reference ref="929390742"/>
+ <reference ref="980627405"/>
+ </object>
+ <reference key="parent" ref="1002098058"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">31</int>
+ <reference key="object" ref="670848000"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="218898697"/>
+ </object>
+ <reference key="parent" ref="829918786"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">32</int>
+ <reference key="object" ref="929390742"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="572664875"/>
+ </object>
+ <reference key="parent" ref="829918786"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">70</int>
+ <reference key="object" ref="980627405"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="665801832"/>
+ <reference ref="1004731108"/>
+ <reference ref="980991664"/>
+ <reference ref="705866245"/>
+ <reference ref="873633507"/>
+ <reference ref="817159019"/>
+ <reference ref="389356410"/>
+ </object>
+ <reference key="parent" ref="829918786"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">76</int>
+ <reference key="object" ref="455222251"/>
+ <reference key="parent" ref="0"/>
+ <string key="objectName">File Drawer</string>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">77</int>
+ <reference key="object" ref="935416310"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="87090703"/>
+ <reference ref="871294766"/>
+ <reference ref="619172655"/>
+ <reference ref="53051961"/>
+ <reference ref="802282051"/>
+ </object>
+ <reference key="parent" ref="0"/>
+ <string key="objectName">FileHierarchy</string>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">79</int>
+ <reference key="object" ref="87090703"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="849170071"/>
+ <reference ref="863081078"/>
+ <reference ref="460991953"/>
+ </object>
+ <reference key="parent" ref="935416310"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">78</int>
+ <reference key="object" ref="849170071"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="43786524"/>
+ </object>
+ <reference key="parent" ref="87090703"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">80</int>
+ <reference key="object" ref="43786524"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="679889651"/>
+ </object>
+ <reference key="parent" ref="849170071"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">86</int>
+ <reference key="object" ref="871294766"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="370215819"/>
+ </object>
+ <reference key="parent" ref="935416310"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">87</int>
+ <reference key="object" ref="619172655"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="662956182"/>
+ </object>
+ <reference key="parent" ref="935416310"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">167</int>
+ <reference key="object" ref="53051961"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="732940612"/>
+ </object>
+ <reference key="parent" ref="935416310"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">168</int>
+ <reference key="object" ref="802282051"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="98754047"/>
+ </object>
+ <reference key="parent" ref="935416310"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">92</int>
+ <reference key="object" ref="648008078"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="824781535"/>
+ <reference ref="745513254"/>
+ <reference ref="873517359"/>
+ <reference ref="595807440"/>
+ <reference ref="642680144"/>
+ <reference ref="432386971"/>
+ <reference ref="307204867"/>
+ <reference ref="77632122"/>
+ <reference ref="206454028"/>
+ <reference ref="911871238"/>
+ <reference ref="388438325"/>
+ <reference ref="48220369"/>
+ <reference ref="463217590"/>
+ <reference ref="687297486"/>
+ <reference ref="991326378"/>
+ </object>
+ <reference key="parent" ref="0"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">93</int>
+ <reference key="object" ref="824781535"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">94</int>
+ <reference key="object" ref="745513254"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">129</int>
+ <reference key="object" ref="873517359"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">130</int>
+ <reference key="object" ref="595807440"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">132</int>
+ <reference key="object" ref="642680144"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">134</int>
+ <reference key="object" ref="432386971"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">136</int>
+ <reference key="object" ref="307204867"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">137</int>
+ <reference key="object" ref="77632122"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">139</int>
+ <reference key="object" ref="206454028"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">187</int>
+ <reference key="object" ref="911871238"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">189</int>
+ <reference key="object" ref="388438325"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">191</int>
+ <reference key="object" ref="48220369"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">192</int>
+ <reference key="object" ref="463217590"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">194</int>
+ <reference key="object" ref="687297486"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">196</int>
+ <reference key="object" ref="991326378"/>
+ <reference key="parent" ref="648008078"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">198</int>
+ <reference key="object" ref="218898697"/>
+ <reference key="parent" ref="670848000"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">199</int>
+ <reference key="object" ref="572664875"/>
+ <reference key="parent" ref="929390742"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">207</int>
+ <reference key="object" ref="370215819"/>
+ <reference key="parent" ref="871294766"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">208</int>
+ <reference key="object" ref="662956182"/>
+ <reference key="parent" ref="619172655"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">209</int>
+ <reference key="object" ref="732940612"/>
+ <reference key="parent" ref="53051961"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">210</int>
+ <reference key="object" ref="98754047"/>
+ <reference key="parent" ref="802282051"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">211</int>
+ <reference key="object" ref="679889651"/>
+ <reference key="parent" ref="43786524"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">212</int>
+ <reference key="object" ref="144064177"/>
+ <reference key="parent" ref="546062898"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">213</int>
+ <reference key="object" ref="762080368"/>
+ <reference key="parent" ref="546062898"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">214</int>
+ <reference key="object" ref="863081078"/>
+ <reference key="parent" ref="87090703"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">215</int>
+ <reference key="object" ref="460991953"/>
+ <reference key="parent" ref="87090703"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">-3</int>
+ <reference key="object" ref="396787127"/>
+ <reference key="parent" ref="0"/>
+ <string key="objectName">Application</string>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">27</int>
+ <reference key="object" ref="665801832"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="703549938"/>
+ </object>
+ <reference key="parent" ref="980627405"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">200</int>
+ <reference key="object" ref="703549938"/>
+ <reference key="parent" ref="665801832"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">28</int>
+ <reference key="object" ref="1004731108"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="919680613"/>
+ </object>
+ <reference key="parent" ref="980627405"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">201</int>
+ <reference key="object" ref="919680613"/>
+ <reference key="parent" ref="1004731108"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">29</int>
+ <reference key="object" ref="980991664"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="415625681"/>
+ </object>
+ <reference key="parent" ref="980627405"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">202</int>
+ <reference key="object" ref="415625681"/>
+ <reference key="parent" ref="980991664"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">30</int>
+ <reference key="object" ref="705866245"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="85028366"/>
+ </object>
+ <reference key="parent" ref="980627405"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">203</int>
+ <reference key="object" ref="85028366"/>
+ <reference key="parent" ref="705866245"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">42</int>
+ <reference key="object" ref="873633507"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="591452043"/>
+ </object>
+ <reference key="parent" ref="980627405"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">204</int>
+ <reference key="object" ref="591452043"/>
+ <reference key="parent" ref="873633507"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">61</int>
+ <reference key="object" ref="817159019"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="696153694"/>
+ </object>
+ <reference key="parent" ref="980627405"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">205</int>
+ <reference key="object" ref="696153694"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="187615474"/>
+ </object>
+ <reference key="parent" ref="817159019"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">63</int>
+ <reference key="object" ref="187615474"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="970908715"/>
+ </object>
+ <reference key="parent" ref="696153694"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">60</int>
+ <reference key="object" ref="970908715"/>
+ <reference key="parent" ref="187615474"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">66</int>
+ <reference key="object" ref="389356410"/>
+ <object class="NSMutableArray" key="children">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference ref="622076711"/>
+ </object>
+ <reference key="parent" ref="980627405"/>
+ </object>
+ <object class="IBObjectRecord">
+ <int key="objectID">206</int>
+ <reference key="object" ref="622076711"/>
+ <reference key="parent" ref="389356410"/>
+ </object>
+ </object>
+ </object>
+ <object class="NSMutableDictionary" key="flattenedProperties">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSArray" key="dict.sortedKeys">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>-3.IBPluginDependency</string>
+ <string>11.IBPluginDependency</string>
+ <string>11.ImportedFromIB2</string>
+ <string>12.IBPluginDependency</string>
+ <string>12.ImportedFromIB2</string>
+ <string>129.IBPluginDependency</string>
+ <string>129.ImportedFromIB2</string>
+ <string>130.IBPluginDependency</string>
+ <string>130.ImportedFromIB2</string>
+ <string>132.IBPluginDependency</string>
+ <string>132.ImportedFromIB2</string>
+ <string>134.IBPluginDependency</string>
+ <string>134.ImportedFromIB2</string>
+ <string>136.IBPluginDependency</string>
+ <string>136.ImportedFromIB2</string>
+ <string>137.IBPluginDependency</string>
+ <string>137.ImportedFromIB2</string>
+ <string>139.IBPluginDependency</string>
+ <string>139.ImportedFromIB2</string>
+ <string>165.IBPluginDependency</string>
+ <string>165.ImportedFromIB2</string>
+ <string>167.IBAttributePlaceholdersKey</string>
+ <string>167.IBPluginDependency</string>
+ <string>167.ImportedFromIB2</string>
+ <string>168.CustomClassName</string>
+ <string>168.IBAttributePlaceholdersKey</string>
+ <string>168.IBPluginDependency</string>
+ <string>168.ImportedFromIB2</string>
+ <string>187.IBPluginDependency</string>
+ <string>187.ImportedFromIB2</string>
+ <string>189.IBPluginDependency</string>
+ <string>189.ImportedFromIB2</string>
+ <string>191.IBPluginDependency</string>
+ <string>191.ImportedFromIB2</string>
+ <string>192.IBPluginDependency</string>
+ <string>192.ImportedFromIB2</string>
+ <string>194.IBPluginDependency</string>
+ <string>194.ImportedFromIB2</string>
+ <string>196.IBPluginDependency</string>
+ <string>196.ImportedFromIB2</string>
+ <string>198.IBPluginDependency</string>
+ <string>199.IBPluginDependency</string>
+ <string>200.IBPluginDependency</string>
+ <string>201.IBPluginDependency</string>
+ <string>202.IBPluginDependency</string>
+ <string>203.IBPluginDependency</string>
+ <string>204.IBPluginDependency</string>
+ <string>205.IBPluginDependency</string>
+ <string>206.IBPluginDependency</string>
+ <string>207.IBPluginDependency</string>
+ <string>208.IBPluginDependency</string>
+ <string>209.IBPluginDependency</string>
+ <string>210.IBPluginDependency</string>
+ <string>211.IBPluginDependency</string>
+ <string>211.IBShouldRemoveOnLegacySave</string>
+ <string>212.IBPluginDependency</string>
+ <string>212.IBShouldRemoveOnLegacySave</string>
+ <string>213.IBPluginDependency</string>
+ <string>213.IBShouldRemoveOnLegacySave</string>
+ <string>214.IBPluginDependency</string>
+ <string>214.IBShouldRemoveOnLegacySave</string>
+ <string>215.IBPluginDependency</string>
+ <string>215.IBShouldRemoveOnLegacySave</string>
+ <string>24.IBPluginDependency</string>
+ <string>24.ImportedFromIB2</string>
+ <string>25.IBEditorWindowLastContentRect</string>
+ <string>25.IBPluginDependency</string>
+ <string>25.IBWindowTemplateEditedContentRect</string>
+ <string>25.ImportedFromIB2</string>
+ <string>25.windowTemplate.hasMinSize</string>
+ <string>25.windowTemplate.minSize</string>
+ <string>27.CustomClassName</string>
+ <string>27.IBPluginDependency</string>
+ <string>27.ImportedFromIB2</string>
+ <string>28.IBPluginDependency</string>
+ <string>28.ImportedFromIB2</string>
+ <string>29.IBPluginDependency</string>
+ <string>29.ImportedFromIB2</string>
+ <string>30.IBPluginDependency</string>
+ <string>30.ImportedFromIB2</string>
+ <string>31.IBPluginDependency</string>
+ <string>31.ImportedFromIB2</string>
+ <string>32.IBPluginDependency</string>
+ <string>32.ImportedFromIB2</string>
+ <string>42.IBPluginDependency</string>
+ <string>42.ImportedFromIB2</string>
+ <string>5.IBPluginDependency</string>
+ <string>5.ImportedFromIB2</string>
+ <string>53.IBPluginDependency</string>
+ <string>53.ImportedFromIB2</string>
+ <string>6.IBEditorWindowLastContentRect</string>
+ <string>6.IBPluginDependency</string>
+ <string>6.IBWindowTemplateEditedContentRect</string>
+ <string>6.ImportedFromIB2</string>
+ <string>6.windowTemplate.hasMinSize</string>
+ <string>6.windowTemplate.minSize</string>
+ <string>60.IBPluginDependency</string>
+ <string>60.ImportedFromIB2</string>
+ <string>61.IBPluginDependency</string>
+ <string>61.ImportedFromIB2</string>
+ <string>63.IBPluginDependency</string>
+ <string>63.ImportedFromIB2</string>
+ <string>66.IBPluginDependency</string>
+ <string>66.ImportedFromIB2</string>
+ <string>70.IBPluginDependency</string>
+ <string>70.ImportedFromIB2</string>
+ <string>76.IBPluginDependency</string>
+ <string>76.ImportedFromIB2</string>
+ <string>77.IBEditorWindowLastContentRect</string>
+ <string>77.IBPluginDependency</string>
+ <string>77.ImportedFromIB2</string>
+ <string>78.CustomClassName</string>
+ <string>78.IBPluginDependency</string>
+ <string>78.ImportedFromIB2</string>
+ <string>79.IBPluginDependency</string>
+ <string>79.ImportedFromIB2</string>
+ <string>80.IBPluginDependency</string>
+ <string>80.ImportedFromIB2</string>
+ <string>86.IBAttributePlaceholdersKey</string>
+ <string>86.IBPluginDependency</string>
+ <string>86.ImportedFromIB2</string>
+ <string>87.CustomClassName</string>
+ <string>87.IBPluginDependency</string>
+ <string>87.ImportedFromIB2</string>
+ <string>92.IBEditorWindowLastContentRect</string>
+ <string>92.IBPluginDependency</string>
+ <string>92.ImportedFromIB2</string>
+ <string>93.IBPluginDependency</string>
+ <string>93.ImportedFromIB2</string>
+ <string>94.IBPluginDependency</string>
+ <string>94.ImportedFromIB2</string>
+ </object>
+ <object class="NSMutableArray" key="dict.values">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <object class="NSMutableDictionary">
+ <string key="NS.key.0">ToolTip</string>
+ <object class="IBToolTipAttribute" key="NS.object.0">
+ <string key="name">ToolTip</string>
+ <reference key="object" ref="53051961"/>
+ <string key="toolTip">If the first selected item belongs to or is a folder reference, create a new folder, otherwise create a new group</string>
+ </object>
+ </object>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>OakButton</string>
+ <object class="NSMutableDictionary">
+ <string key="NS.key.0">ToolTip</string>
+ <object class="IBToolTipAttribute" key="NS.object.0">
+ <string key="name">ToolTip</string>
+ <reference key="object" ref="802282051"/>
+ <string key="toolTip">Show information about the selected item</string>
+ </object>
+ </object>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>{{42, 501}, {386, 187}}</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>{{42, 501}, {386, 187}}</string>
+ <boolean value="YES"/>
+ <boolean value="YES"/>
+ <string>{360, 164}</string>
+ <string>OakFilenameTextField</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>{{0, 362}, {338, 372}}</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <string>{{0, 362}, {338, 372}}</string>
+ <boolean value="YES"/>
+ <boolean value="YES"/>
+ <string>{213, 107}</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>{{475, 310}, {165, 385}}</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>OakOutlineView</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <object class="NSMutableDictionary">
+ <string key="NS.key.0">ToolTip</string>
+ <object class="IBToolTipAttribute" key="NS.object.0">
+ <string key="name">ToolTip</string>
+ <reference key="object" ref="871294766"/>
+ <string key="toolTip">Create a new file using a template</string>
+ </object>
+ </object>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>OakMenuButton</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>{{0, 450}, {323, 273}}</string>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ <string>com.apple.InterfaceBuilder.CocoaPlugin</string>
+ <boolean value="YES"/>
+ </object>
+ </object>
+ <object class="NSMutableDictionary" key="unlocalizedProperties">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference key="dict.sortedKeys" ref="0"/>
+ <object class="NSMutableArray" key="dict.values">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ </object>
+ </object>
+ <nil key="activeLocalization"/>
+ <object class="NSMutableDictionary" key="localizations">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <reference key="dict.sortedKeys" ref="0"/>
+ <object class="NSMutableArray" key="dict.values">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ </object>
+ </object>
+ <nil key="sourceID"/>
+ <int key="maxID">215</int>
+ </object>
+ <object class="IBClassDescriber" key="IBDocument.Classes">
+ <object class="NSMutableArray" key="referencedPartialClassDescriptions">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="IBPartialClassDescription">
+ <string key="className">FirstResponder</string>
+ <string key="superclassName">NSObject</string>
+ <object class="NSMutableDictionary" key="actions">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSArray" key="dict.sortedKeys">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>myAction:</string>
+ <string>newProject:</string>
+ <string>projectAddFiles:</string>
+ <string>projectGroupFiles:</string>
+ <string>projectNewFile:</string>
+ <string>projectNewGroup:</string>
+ <string>projectRemoveFiles:</string>
+ <string>projectShowInformationPanel:</string>
+ <string>revealFileInFinder:</string>
+ <string>saveProject:</string>
+ <string>saveProjectAs:</string>
+ <string>toggleGroupsAndFilesDrawer:</string>
+ </object>
+ <object class="NSMutableArray" key="dict.values">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ </object>
+ </object>
+ <object class="IBClassDescriptionSource" key="sourceIdentifier">
+ <string key="majorKey">IBUserSource</string>
+ <string key="minorKey"/>
+ </object>
+ </object>
+ <object class="IBPartialClassDescription">
+ <string key="className">NSObject</string>
+ <object class="IBClassDescriptionSource" key="sourceIdentifier">
+ <string key="majorKey">IBUserSource</string>
+ <string key="minorKey"/>
+ </object>
+ </object>
+ <object class="IBPartialClassDescription">
+ <string key="className">OakButton</string>
+ <string key="superclassName">NSButton</string>
+ <object class="IBClassDescriptionSource" key="sourceIdentifier">
+ <string key="majorKey">IBUserSource</string>
+ <string key="minorKey"/>
+ </object>
+ </object>
+ <object class="IBPartialClassDescription">
+ <string key="className">OakFilenameTextField</string>
+ <string key="superclassName">NSTextField</string>
+ <object class="NSMutableDictionary" key="actions">
+ <string key="NS.key.0">selectBaseName:</string>
+ <string key="NS.object.0">id</string>
+ </object>
+ <object class="IBClassDescriptionSource" key="sourceIdentifier">
+ <string key="majorKey">IBUserSource</string>
+ <string key="minorKey"/>
+ </object>
+ </object>
+ <object class="IBPartialClassDescription">
+ <string key="className">OakMenuButton</string>
+ <string key="superclassName">NSButton</string>
+ <object class="NSMutableDictionary" key="actions">
+ <string key="NS.key.0">projectRenameFile:</string>
+ <string key="NS.object.0">id</string>
+ </object>
+ <object class="NSMutableDictionary" key="outlets">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSArray" key="dict.sortedKeys">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>actionMenu</string>
+ <string>delegate</string>
+ <string>outlineView</string>
+ </object>
+ <object class="NSMutableArray" key="dict.values">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>NSMenu</string>
+ <string>id</string>
+ <string>NSOutlineView</string>
+ </object>
+ </object>
+ <object class="IBClassDescriptionSource" key="sourceIdentifier">
+ <string key="majorKey">IBUserSource</string>
+ <string key="minorKey"/>
+ </object>
+ </object>
+ <object class="IBPartialClassDescription">
+ <string key="className">OakOutlineView</string>
+ <string key="superclassName">NSOutlineView</string>
+ <object class="IBClassDescriptionSource" key="sourceIdentifier">
+ <string key="majorKey">IBUserSource</string>
+ <string key="minorKey"/>
+ </object>
+ </object>
+ <object class="IBPartialClassDescription">
+ <string key="className">OakProjectController</string>
+ <string key="superclassName">NSWindowController</string>
+ <object class="NSMutableDictionary" key="actions">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSArray" key="dict.sortedKeys">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>goToFileCounterpart:</string>
+ <string>goToNextFile:</string>
+ <string>goToPreviousFile:</string>
+ <string>openFileInNewWindow:</string>
+ <string>openFileWithFinder:</string>
+ <string>performNewFileSheetAction:</string>
+ <string>projectAddFiles:</string>
+ <string>projectGroupFiles:</string>
+ <string>projectNewFile:</string>
+ <string>projectNewGroup:</string>
+ <string>projectRemoveFiles:</string>
+ <string>projectShowInformationPanel:</string>
+ <string>revealFileInFinder:</string>
+ <string>saveDocument:</string>
+ <string>saveDocumentAs:</string>
+ <string>saveProject:</string>
+ <string>saveProjectAs:</string>
+ <string>singleClickItem:</string>
+ <string>toggleTreatFileAsText:</string>
+ </object>
+ <object class="NSMutableArray" key="dict.values">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ <string>id</string>
+ </object>
+ </object>
+ <object class="NSMutableDictionary" key="outlets">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSArray" key="dict.sortedKeys">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>groupsAndFilesDrawer</string>
+ <string>newFileSheet</string>
+ <string>newFileTemplatePopupButton</string>
+ <string>outlineView</string>
+ <string>statusBar</string>
+ <string>tabBarView</string>
+ <string>textView</string>
+ </object>
+ <object class="NSMutableArray" key="dict.values">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>NSDrawer</string>
+ <string>NSWindow</string>
+ <string>NSPopUpButton</string>
+ <string>NSOutlineView</string>
+ <string>id</string>
+ <string>OakTabBarView</string>
+ <string>OakTextView</string>
+ </object>
+ </object>
+ <object class="IBClassDescriptionSource" key="sourceIdentifier">
+ <string key="majorKey">IBUserSource</string>
+ <string key="minorKey"/>
+ </object>
+ </object>
+ <object class="IBPartialClassDescription">
+ <string key="className">OakStatusBar</string>
+ <string key="superclassName">NSView</string>
+ <object class="IBClassDescriptionSource" key="sourceIdentifier">
+ <string key="majorKey">IBUserSource</string>
+ <string key="minorKey"/>
+ </object>
+ </object>
+ <object class="IBPartialClassDescription">
+ <string key="className">OakTabBarView</string>
+ <string key="superclassName">NSView</string>
+ <object class="NSMutableDictionary" key="actions">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSArray" key="dict.sortedKeys">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>selectNextTab:</string>
+ <string>selectPreviousTab:</string>
+ </object>
+ <object class="NSMutableArray" key="dict.values">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>id</string>
+ <string>id</string>
+ </object>
+ </object>
+ <object class="NSMutableDictionary" key="outlets">
+ <string key="NS.key.0">delegate</string>
+ <string key="NS.object.0">id</string>
+ </object>
+ <object class="IBClassDescriptionSource" key="sourceIdentifier">
+ <string key="majorKey">IBUserSource</string>
+ <string key="minorKey"/>
+ </object>
+ </object>
+ <object class="IBPartialClassDescription">
+ <string key="className">OakTextView</string>
+ <string key="superclassName">NSView</string>
+ <object class="IBClassDescriptionSource" key="sourceIdentifier">
+ <string key="majorKey">IBUserSource</string>
+ <string key="minorKey"/>
+ </object>
+ </object>
+ <object class="IBPartialClassDescription">
+ <string key="className">OakWindow</string>
+ <string key="superclassName">NSWindow</string>
+ <object class="IBClassDescriptionSource" key="sourceIdentifier">
+ <string key="majorKey">IBUserSource</string>
+ <string key="minorKey"/>
+ </object>
+ </object>
+ </object>
+ </object>
+ <int key="IBDocument.localizationMode">0</int>
+ <string key="IBDocument.TargetRuntimeIdentifier">IBCocoaFramework</string>
+ <object class="NSMutableDictionary" key="IBDocument.PluginDeclaredDependencies">
+ <string key="NS.key.0">com.apple.InterfaceBuilder.CocoaPlugin.macosx</string>
+ <integer value="1060" key="NS.object.0"/>
+ </object>
+ <object class="NSMutableDictionary" key="IBDocument.PluginDeclaredDevelopmentDependencies">
+ <string key="NS.key.0">com.apple.InterfaceBuilder.CocoaPlugin.InterfaceBuilder3</string>
+ <integer value="3000" key="NS.object.0"/>
+ </object>
+ <bool key="IBDocument.PluginDeclaredDependenciesTrackSystemTargetVersion">YES</bool>
+ <nil key="IBDocument.LastKnownRelativeProjectPath"/>
+ <int key="IBDocument.defaultPropertyAccessControl">3</int>
+ <object class="NSMutableDictionary" key="IBDocument.LastKnownImageSizes">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <object class="NSArray" key="dict.sortedKeys">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>Action</string>
+ <string>ActionPressed</string>
+ <string>AddGroup</string>
+ <string>AddGroupPressed</string>
+ <string>AddNewFile</string>
+ <string>AddNewFilePressed</string>
+ <string>Info</string>
+ <string>InfoPressed</string>
+ <string>NSMenuCheckmark</string>
+ <string>NSMenuMixedState</string>
+ </object>
+ <object class="NSMutableArray" key="dict.values">
+ <bool key="EncodedWithXMLCoder">YES</bool>
+ <string>{128, 128}</string>
+ <string>{128, 128}</string>
+ <string>{128, 128}</string>
+ <string>{128, 128}</string>
+ <string>{128, 128}</string>
+ <string>{128, 128}</string>
+ <string>{128, 128}</string>
+ <string>{128, 128}</string>
+ <string>{9, 8}</string>
+ <string>{7, 2}</string>
+ </object>
+ </object>
+ </data>
+</archive>
diff --git a/English.lproj/Project.nib/keyedobjects.nib b/English.lproj/Project.nib/keyedobjects.nib
new file mode 100644
index 0000000..4c60339
Binary files /dev/null and b/English.lproj/Project.nib/keyedobjects.nib differ
|
lizconlan/textmate-settings
|
f79c4d18a3edfdb759392d153fd48d7dd964fd05
|
ignoring things
|
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..496ee2c
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+.DS_Store
\ No newline at end of file
|
mthurlin/gevent-MySQL
|
3c18b0ef54db5f6478179cdbbcb1c76efd0c59e7
|
Turn off SSL on client_capabilities.
|
diff --git a/lib/geventmysql/client.py b/lib/geventmysql/client.py
index 1dab43c..52f256a 100644
--- a/lib/geventmysql/client.py
+++ b/lib/geventmysql/client.py
@@ -1,404 +1,406 @@
# Copyright (C) 2009, Hyves (Startphone Ltd.)
#
# This module is part of the Concurrence Framework and is released under
# the New BSD License: http://www.opensource.org/licenses/bsd-license.php
#TODO: support closing a half-read resultset (e.g. automatically read and discard the rest)
import errno
from geventmysql._mysql import Buffer
from geventmysql.mysql import BufferedPacketReader, BufferedPacketWriter, PACKET_READ_RESULT, CAPS, COMMAND
import logging
import time
from gevent import socket
import gevent
import sys
# From query: SHOW COLLATION;
charset_map = {}
charset_map["big5"] = 1
charset_map["dec8"] = 3
charset_map["cp850"] = 4
charset_map["hp8"] = 6
charset_map["koi8r"] = 7
charset_map["latin1"] = 8
charset_map["latin1"] = 8
charset_map["latin2"] = 9
charset_map["swe7"] = 10
charset_map["ascii"] = 11
charset_map["ujis"] = 12
charset_map["sjis"] = 13
charset_map["hebrew"] = 16
charset_map["tis620"] = 18
charset_map["euckr"] = 19
charset_map["koi8u"] = 22
charset_map["gb2312"] = 24
charset_map["greek"] = 25
charset_map["cp1250"] = 26
charset_map["gbk"] = 28
charset_map["latin5"] = 30
charset_map["armscii8"] = 32
charset_map["utf8"] = 33
charset_map["utf8"] = 33
charset_map["ucs2"] = 35
charset_map["cp866"] = 36
charset_map["keybcs2"] = 37
charset_map["macce"] = 38
charset_map["macroman"] = 39
charset_map["cp852"] = 40
charset_map["latin7"] = 41
charset_map["cp1251"] = 51
charset_map["cp1256"] = 57
charset_map["cp1257"] = 59
charset_map["binary"] = 63
charset_map["geostd8"] = 92
charset_map["cp932"] = 95
charset_map["eucjpms"] = 97
try:
#python 2.6
import hashlib
SHA = hashlib.sha1
except ImportError:
#python 2.5
import sha
SHA = sha.new
#import time
class ClientError(Exception):
@classmethod
def from_error_packet(cls, packet, skip = 8):
packet.skip(skip)
return cls(packet.read_bytes(packet.remaining))
class ClientLoginError(ClientError): pass
class ClientCommandError(ClientError): pass
class ClientProgrammingError(ClientError): pass
class ResultSet(object):
"""Represents the current resultset being read from a Connection.
The resultset implements an iterator over rows. A Resultset must
be iterated entirely and closed explicitly."""
STATE_INIT = 0
STATE_OPEN = 1
STATE_EOF = 2
STATE_CLOSED = 3
def __init__(self, connection, field_count):
self.state = self.STATE_INIT
self.connection = connection
self.fields = connection.reader.read_fields(field_count)
self.state = self.STATE_OPEN
def __iter__(self):
assert self.state == self.STATE_OPEN, "cannot iterate a resultset when it is not open"
for row in self.connection.reader.read_rows(self.fields):
yield row
self.state = self.STATE_EOF
def close(self, connection_close = False):
"""Closes the current resultset. Make sure you have iterated over all rows before closing it!"""
#print 'close on ResultSet', id(self.connection)
if self.state != self.STATE_EOF and not connection_close:
raise ClientProgrammingError("you can only close a resultset when it was read entirely!")
connection = self.connection
del self.connection
del self.fields
connection._close_current_resultset(self)
self.state = self.STATE_CLOSED
class Connection(object):
"""Represents a single connection to a MySQL Database host."""
STATE_ERROR = -1
STATE_INIT = 0
STATE_CONNECTING = 1
STATE_CONNECTED = 2
STATE_CLOSING = 3
STATE_CLOSED = 4
def __init__(self):
self.state = self.STATE_INIT
self.buffer = Buffer(1024 * 16)
self.socket = None
self.reader = None
self.writer = None
self._time_command = False #whether to keep timing stats on a cmd
self._command_time = -1
self._incommand = False
self.current_resultset = None
def _scramble(self, password, seed):
"""taken from java jdbc driver, scrambles the password using the given seed
according to the mysql login protocol"""
stage1 = SHA(password).digest()
stage2 = SHA(stage1).digest()
md = SHA()
md.update(seed)
md.update(stage2)
#i love python :-):
return ''.join(map(chr, [x ^ ord(stage1[i]) for i, x in enumerate(map(ord, md.digest()))]))
def _handshake(self, user, password, database, charset):
"""performs the mysql login handshake"""
#init buffer for reading (both pos and lim = 0)
self.buffer.clear()
self.buffer.flip()
#read server welcome
packet = self.reader.read_packet()
self.protocol_version = packet.read_byte() #normally this would be 10 (0xa)
if self.protocol_version == 0xff:
#error on initial greeting, possibly a "too many connections" error
raise ClientLoginError.from_error_packet(packet, skip = 2)
elif self.protocol_version == 0xa:
pass #expected
else:
assert False, "Unexpected protocol version %02x" % self.protocol_version
self.server_version = packet.read_bytes_until(0)
packet.skip(4) #thread_id
scramble_buff = packet.read_bytes(8)
packet.skip(1) #filler
server_caps = packet.read_short()
#CAPS.dbg(server_caps)
if not server_caps & CAPS.PROTOCOL_41:
assert False, "<4.1 auth not supported"
server_language = packet.read_byte()
server_status = packet.read_short()
packet.skip(13) #filler
if packet.remaining:
scramble_buff += packet.read_bytes_until(0)
else:
assert False, "<4.1 auth not supported"
client_caps = server_caps
#always turn off compression
client_caps &= ~CAPS.COMPRESS
client_caps &= ~CAPS.NO_SCHEMA
+ #always turn off ssl
+ client_caps &= ~CAPS.SSL
if not server_caps & CAPS.CONNECT_WITH_DB and database:
assert False, "initial db given but not supported by server"
if server_caps & CAPS.CONNECT_WITH_DB and not database:
client_caps &= ~CAPS.CONNECT_WITH_DB
#build and write our answer to the initial handshake packet
self.writer.clear()
self.writer.start()
self.writer.write_int(client_caps)
self.writer.write_int(1024 * 1024 * 32) #32mb max packet
if charset:
self.writer.write_byte(charset_map[charset.replace("-", "")])
else:
self.writer.write_byte(server_language)
self.writer.write_bytes('\0' * 23) #filler
self.writer.write_bytes(user + '\0')
if password:
self.writer.write_byte(20)
self.writer.write_bytes(self._scramble(password, scramble_buff))
else:
self.writer.write_byte(0)
if database:
self.writer.write_bytes(database + '\0')
self.writer.finish(1)
self.writer.flush()
#read final answer from server
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
if result == 0xff:
raise ClientLoginError.from_error_packet(packet)
elif result == 0xfe:
assert False, "old password handshake not implemented"
def _close_current_resultset(self, resultset):
assert resultset == self.current_resultset
self.current_resultset = None
def _send_command(self, cmd, cmd_text):
"""sends a command with the given text"""
#self.log.debug('cmd %s %s', cmd, cmd_text)
#note: we are not using the normal writer.start/finish here, because the cmd
#might not fit in the buffer, causing flushes in write_bytes; in that case 'finish'
#would not be able to go back to the packet header to write the length
self.writer.clear()
self.writer.write_header(len(cmd_text) + 1 + 4, 0) #1 is len of cmd, 4 is len of header, 0 is packet number
self.writer.write_byte(cmd)
self.writer.write_bytes(cmd_text)
self.writer.flush()
def _close(self):
#self.log.debug("close mysql client %s", id(self))
try:
self.state = self.STATE_CLOSING
if self.current_resultset:
self.current_resultset.close(True)
self.socket.close()
self.state = self.STATE_CLOSED
except:
self.state = self.STATE_ERROR
raise
def connect(self, host = "localhost", port = 3306, user = "", password = "", db = "", autocommit = None, charset = None, use_unicode=False):
"""connects to the given host and port with user and password"""
#self.log.debug("connect mysql client %s %s %s %s %s", id(self), host, port, user, password)
try:
#parse addresses of form str <host:port>
assert type(host) == str, "make sure host is a string"
if host[0] == '/': #assume unix domain socket
addr = host
elif ':' in host:
host, port = host.split(':')
port = int(port)
addr = (host, port)
else:
addr = (host, port)
assert self.state == self.STATE_INIT, "make sure connection is not already connected or closed"
self.state = self.STATE_CONNECTING
self.socket = socket.create_connection(addr)
self.reader = BufferedPacketReader(self.socket, self.buffer)
self.writer = BufferedPacketWriter(self.socket, self.buffer)
self._handshake(user, password, db, charset)
#handshake complete client can now send commands
self.state = self.STATE_CONNECTED
if autocommit == False:
self.set_autocommit(False)
elif autocommit == True:
self.set_autocommit(True)
else:
pass #whatever is the default of the db (ON in the case of mysql)
if charset is not None:
self.set_charset(charset)
self.set_use_unicode(use_unicode)
return self
except gevent.Timeout:
self.state = self.STATE_INIT
raise
except ClientLoginError:
self.state = self.STATE_INIT
raise
except:
self.state = self.STATE_ERROR
raise
def close(self):
"""close this connection"""
assert self.is_connected(), "make sure connection is connected before closing"
if self._incommand != False: assert False, "cannot close while still in a command"
self._close()
def command(self, cmd, cmd_text):
"""sends a COM_XXX command with the given text and possibly return a resultset (select)"""
#print 'command', cmd, repr(cmd_text), type(cmd_text)
assert type(cmd_text) == str #as opposed to unicode
assert self.is_connected(), "make sure connection is connected before query"
if self._incommand != False: assert False, "overlapped commands not supported"
if self.current_resultset: assert False, "overlapped commands not supported, pls read prev resultset and close it"
try:
self._incommand = True
if self._time_command:
start_time = time.time()
self._send_command(cmd, cmd_text)
#read result, expect 1 of OK, ERROR or result set header
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
#print 'res', result
if self._time_command:
end_time = time.time()
self._command_time = end_time - start_time
if result == 0x00:
#OK, return (affected rows, last row id)
rowcount = self.reader.read_length_coded_binary()
lastrowid = self.reader.read_length_coded_binary()
return (rowcount, lastrowid)
elif result == 0xff:
raise ClientCommandError.from_error_packet(packet)
else: #result set
self.current_resultset = ResultSet(self, result)
return self.current_resultset
except socket.error, e:
(errorcode, errorstring) = e
if errorcode in [errno.ECONNABORTED, errno.ECONNREFUSED, errno.ECONNRESET, errno.EPIPE]:
self._incommand = False
self.close()
if sys.platform == "win32":
if errorcode in [errno.WSAECONNABORTED]:
self._incommand = False
self.close()
raise
finally:
self._incommand = False
def is_connected(self):
return self.state == self.STATE_CONNECTED
def query(self, cmd_text):
"""Sends a COM_QUERY command with the given text and return a resultset (select)"""
return self.command(COMMAND.QUERY, cmd_text)
def init_db(self, cmd_text):
"""Sends a COM_INIT command with the given text"""
return self.command(COMMAND.INITDB, cmd_text)
def set_autocommit(self, commit):
"""Sets autocommit setting for this connection. True = on, False = off"""
self.command(COMMAND.QUERY, "SET AUTOCOMMIT = %s" % ('1' if commit else '0'))
def commit(self):
"""Commits this connection"""
self.command(COMMAND.QUERY, "COMMIT")
def rollback(self):
"""Issues a rollback on this connection"""
self.command(COMMAND.QUERY, "ROLLBACK")
def set_charset(self, charset):
"""Sets the charset for this connections (used to decode string fields into unicode strings)"""
self.reader.reader.encoding = charset
def set_use_unicode(self, use_unicode):
self.reader.reader.use_unicode = use_unicode
def set_time_command(self, time_command):
self._time_command = time_command
def get_command_time(self):
return self._command_time
Connection.log = logging.getLogger(Connection.__name__)
def connect(*args, **kwargs):
return Connection().connect(*args, **kwargs)
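For orientation, a minimal usage sketch of the client module in this diff; the host, credentials, and table are placeholders, and the iterate-fully-before-close rule comes from the ResultSet docstring above.

# Hypothetical usage sketch (not part of the commit); connection details
# and the table name are placeholders.
from geventmysql import client

cnn = client.connect(host="localhost", user="test", password="secret",
                     db="testdb", charset="utf8", use_unicode=True)
rs = cnn.query("select test_id, test_string from tbltest")
for row in rs:   # a ResultSet must be iterated entirely...
    print row    # Python 2 print, matching the driver's vintage
rs.close()       # ...before close(), else ClientProgrammingError is raised
cnn.close()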
|
mthurlin/gevent-MySQL
|
6625e0b5ea7d49d61a0ef73fab8bec4daf7e533f
|
setup.py pep8 fixes
|
diff --git a/setup.py b/setup.py
index 77c409e..eb47992 100755
--- a/setup.py
+++ b/setup.py
@@ -1,19 +1,24 @@
#! /usr/bin/env python
from setuptools import setup
from distutils.extension import Extension
VERSION = '0.0.1'
+DESCRIPTION = """\
+A gevent (http://www.gevent.org) adaption of the asynchronous MySQL driver
+from the Concurrence framework (http://opensource.hyves.org/concurrence)
+"""
+
setup(
- name = "gevent-MySQL",
- version = VERSION,
- license = "New BSD",
- description = "A gevent (http://www.gevent.org) adaption of the asynchronous MySQL driver from the Concurrence framework (http://opensource.hyves.org/concurrence)",
- package_dir = {'':'lib'},
- packages = ['geventmysql'],
- install_requires = ["gevent"],
- ext_modules = [Extension("geventmysql._mysql",
+ name="gevent-MySQL",
+ version=VERSION,
+ license="New BSD",
+ description=DESCRIPTION,
+ package_dir={'': 'lib'},
+ packages=['geventmysql'],
+ install_requires=["gevent"],
+ ext_modules=[Extension("geventmysql._mysql",
["lib/geventmysql/geventmysql._mysql.c"])]
)
|
mthurlin/gevent-MySQL
|
5db3de7a0816a1baac37f638d842cba3c08b1426
|
Don't use cython in setup.py because .c file exists
|
diff --git a/setup.py b/setup.py
old mode 100644
new mode 100755
index 6fa6f45..77c409e
--- a/setup.py
+++ b/setup.py
@@ -1,16 +1,19 @@
-from distutils.core import setup
-from distutils.extension import Extension
-from Cython.Distutils import build_ext
-
-VERSION = '0.0.1'
-
-setup(
- name = "gevent-MySQL",
- version = VERSION,
- license = "New BSD",
- description = "A gevent (http://www.gevent.org) adaption of the asynchronous MySQL driver from the Concurrence framework (http://opensource.hyves.org/concurrence)",
- cmdclass = {"build_ext": build_ext},
- package_dir = {'':'lib'},
- packages = ['geventmysql'],
- ext_modules = [Extension("geventmysql._mysql", ["lib/geventmysql/geventmysql._mysql.pyx"])]
-)
\ No newline at end of file
+#! /usr/bin/env python
+
+from setuptools import setup
+from distutils.extension import Extension
+
+VERSION = '0.0.1'
+
+
+setup(
+ name = "gevent-MySQL",
+ version = VERSION,
+ license = "New BSD",
+ description = "A gevent (http://www.gevent.org) adaption of the asynchronous MySQL driver from the Concurrence framework (http://opensource.hyves.org/concurrence)",
+ package_dir = {'':'lib'},
+ packages = ['geventmysql'],
+ install_requires = ["gevent"],
+ ext_modules = [Extension("geventmysql._mysql",
+ ["lib/geventmysql/geventmysql._mysql.c"])]
+)
|
mthurlin/gevent-MySQL
|
fa2f20ad868973718080f64b14f7f12ec2c516b5
|
added binary/blob field's tests.
|
diff --git a/test/testmysql.py b/test/testmysql.py
index 62c2480..b6c0e44 100644
--- a/test/testmysql.py
+++ b/test/testmysql.py
@@ -93,521 +93,561 @@ class TestMySQL(unittest.TestCase):
def query(s):
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB)
cur = cnn.cursor()
cur.execute("select sleep(%d)" % s)
cur.close()
cnn.close()
start = time.time()
ch1 = gevent.spawn(query, 1)
ch2 = gevent.spawn(query, 2)
ch3 = gevent.spawn(query, 3)
gevent.joinall([ch1, ch2, ch3])
end = time.time()
self.assertAlmostEqual(3.0, end - start, places = 1)
def testMySQLDBAPI(self):
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB)
cur = cnn.cursor()
cur.execute("truncate tbltest")
for i in range(10):
cur.execute("insert into tbltest (test_id, test_string) values (%d, 'test%d')" % (i, i))
cur.close()
cur = cnn.cursor()
cur.execute("select test_id, test_string from tbltest")
self.assertEquals((0, 'test0'), cur.fetchone())
#check that fetchall gets the remainder
self.assertEquals([(1, 'test1'), (2, 'test2'), (3, 'test3'), (4, 'test4'), (5, 'test5'), (6, 'test6'), (7, 'test7'), (8, 'test8'), (9, 'test9')], cur.fetchall())
#another query on the same cursor should work
cur.execute("select test_id, test_string from tbltest")
#fetch some but not all
self.assertEquals((0, 'test0'), cur.fetchone())
self.assertEquals((1, 'test1'), cur.fetchone())
self.assertEquals((2, 'test2'), cur.fetchone())
#close should work even with a half-read resultset
cur.close()
#this should not work, cursor was closed
try:
cur.execute("select * from tbltest")
self.fail("expected exception")
except dbapi.ProgrammingError:
pass
def testLargePackets(self):
cnn = client.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB)
cnn.query("truncate tbltest")
c = cnn.buffer.capacity
blob = '0123456789'
while 1:
cnn.query("insert into tbltest (test_id, test_blob) values (%d, '%s')" % (len(blob), blob))
if len(blob) > (c * 2): break
blob = blob * 2
rs = cnn.query("select test_id, test_blob from tbltest")
for row in rs:
self.assertEquals(row[0], len(row[1]))
self.assertEquals(blob[:row[0]], row[1])
rs.close()
#reread, second time, oversize packet is already present
rs = cnn.query("select test_id, test_blob from tbltest")
for row in rs:
self.assertEquals(row[0], len(row[1]))
self.assertEquals(blob[:row[0]], row[1])
rs.close()
cnn.close()
#have a very low max packet size for oversize packets
#and check that exception is thrown when trying to read larger packets
from geventmysql import _mysql
_mysql.MAX_PACKET_SIZE = 1024 * 4
cnn = client.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB)
try:
rs = cnn.query("select test_id, test_blob from tbltest")
for row in rs:
self.assertEquals(row[0], len(row[1]))
self.assertEquals(blob[:row[0]], row[1])
self.fail()
except PacketReadError:
pass
finally:
try:
rs.close()
except:
pass
cnn.close()
def testEscapeArgs(self):
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB)
cur = cnn.cursor()
cur.execute("truncate tbltest")
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (1, 'piet'))
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (2, 'klaas'))
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (3, "pi'et"))
#classic sql injection, would return all rows if no proper escaping is done
cur.execute("select test_id, test_string from tbltest where test_string = %s", ("piet' OR 'a' = 'a",))
self.assertEquals([], cur.fetchall()) #assert no rows are found
#but we should still be able to find the piet with the apostrophe in its name
cur.execute("select test_id, test_string from tbltest where test_string = %s", ("pi'et",))
self.assertEquals([(3, "pi'et")], cur.fetchall())
#also we should be able to insert and retrieve blob/string with all possible bytes transparently
chars = ''.join([chr(i) for i in range(256)])
cur.execute("insert into tbltest (test_id, test_string, test_blob) values (%s, %s, %s)", (4, chars, chars))
cur.execute("select test_string, test_blob from tbltest where test_id = %s", (4,))
#self.assertEquals([(chars, chars)], cur.fetchall())
s, b = cur.fetchall()[0]
#test blob
self.assertEquals(256, len(b))
self.assertEquals(chars, b)
#test string
self.assertEquals(256, len(s))
self.assertEquals(chars, s)
cur.close()
cnn.close()
def testSelectUnicode(self):
s = u'r\xc3\xa4ksm\xc3\xb6rg\xc3\xa5s'
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("truncate tbltest")
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (1, 'piet'))
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (2, s))
cur.execute(u"insert into tbltest (test_id, test_string) values (%s, %s)", (3, s))
cur.execute("select test_id, test_string from tbltest")
result = cur.fetchall()
self.assertEquals([(1, u'piet'), (2, s), (3, s)], result)
#test that we can still cleanly roundtrip a blob (it should not be encoded if we pass
#it as a 'str' argument), even though we pass the qry itself as unicode
blob = ''.join([chr(i) for i in range(256)])
cur.execute(u"insert into tbltest (test_id, test_blob) values (%s, %s)", (4, blob))
cur.execute("select test_blob from tbltest where test_id = %s", (4,))
b2 = cur.fetchall()[0][0]
self.assertEquals(str, type(b2))
self.assertEquals(256, len(b2))
self.assertEquals(blob, b2)
def testAutoInc(self):
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB)
cur = cnn.cursor()
cur.execute("truncate tblautoincint")
cur.execute("ALTER TABLE tblautoincint AUTO_INCREMENT = 100")
cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(100, cur.lastrowid)
cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(101, cur.lastrowid)
cur.execute("ALTER TABLE tblautoincint AUTO_INCREMENT = 4294967294")
cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(4294967294, cur.lastrowid)
cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(4294967295, cur.lastrowid)
cur.execute("truncate tblautoincbigint")
cur.execute("ALTER TABLE tblautoincbigint AUTO_INCREMENT = 100")
cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(100, cur.lastrowid)
cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(101, cur.lastrowid)
cur.execute("ALTER TABLE tblautoincbigint AUTO_INCREMENT = 18446744073709551614")
cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(18446744073709551614, cur.lastrowid)
#this fails on mysql, but that is a mysql problem
#cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
#self.assertEqual(1, cur.rowcount)
#self.assertEqual(18446744073709551615, cur.lastrowid)
cur.close()
cnn.close()
def testLengthCodedBinary(self):
from geventmysql._mysql import Buffer, BufferUnderflowError
from geventmysql.mysql import PacketReader
def create_reader(bytes):
b = Buffer(1024)
for byte in bytes:
b.write_byte(byte)
b.flip()
p = PacketReader(b)
p.packet.position = b.position
p.packet.limit = b.limit
return p
p = create_reader([100])
self.assertEquals(100, p.read_length_coded_binary())
self.assertEquals(p.packet.position, p.packet.limit)
try:
p.read_length_coded_binary()
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([252])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([252, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
p = create_reader([252, 0xff, 0xff])
self.assertEquals(0xFFFF, p.read_length_coded_binary())
self.assertEquals(3, p.packet.limit)
self.assertEquals(3, p.packet.position)
try:
p = create_reader([253])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([253, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([253, 0xff, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
p = create_reader([253, 0xff, 0xff, 0xff])
self.assertEquals(0xFFFFFF, p.read_length_coded_binary())
self.assertEquals(4, p.packet.limit)
self.assertEquals(4, p.packet.position)
try:
p = create_reader([254])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([254, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([254, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
p = create_reader([254, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff])
self.assertEquals(9, p.packet.limit)
self.assertEquals(0, p.packet.position)
self.assertEquals(0xFFFFFFFFFFFFFFFFL, p.read_length_coded_binary())
self.assertEquals(9, p.packet.limit)
self.assertEquals(9, p.packet.position)
def testBigInt(self):
"""Tests the behaviour of insert/select with bigint/long."""
BIGNUM = 112233445566778899
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tblbigint")
cur.execute("""create table tblbigint (
test_id int(11) DEFAULT NULL,
test_bigint bigint DEFAULT NULL,
test_bigint2 bigint DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1""")
cur.execute("insert into tblbigint (test_id, test_bigint, test_bigint2) values (%s, " + str(BIGNUM) + ", %s)", (1, BIGNUM))
cur.execute(u"insert into tblbigint (test_id, test_bigint, test_bigint2) values (%s, " + str(BIGNUM) + ", %s)", (2, BIGNUM))
# Make sure both our inserts were correct (ie, the big number was not truncated/modified on insert)
cur.execute("select test_id from tblbigint where test_bigint = test_bigint2")
result = cur.fetchall()
self.assertEquals([(1, ), (2, )], result)
# Make sure select gets the right values (ie, the big number was not truncated/modified when retrieved)
cur.execute("select test_id, test_bigint, test_bigint2 from tblbigint where test_bigint = test_bigint2")
result = cur.fetchall()
self.assertEquals([(1, BIGNUM, BIGNUM), (2, BIGNUM, BIGNUM)], result)
def testDate(self):
"""Tests the behaviour of insert/select with mysql/DATE <-> python/datetime.date"""
d_date = datetime.date(2010, 02, 11)
d_string = "2010-02-11"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tbldate")
cur.execute("create table tbldate (test_id int(11) DEFAULT NULL, test_date date DEFAULT NULL, test_date2 date DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1")
cur.execute("insert into tbldate (test_id, test_date, test_date2) values (%s, '" + d_string + "', %s)", (1, d_date))
# Make sure our insert was correct
cur.execute("select test_id from tbldate where test_date = test_date2")
result = cur.fetchall()
self.assertEquals([(1, )], result)
# Make sure select gets the right value back
cur.execute("select test_id, test_date, test_date2 from tbldate where test_date = test_date2")
result = cur.fetchall()
self.assertEquals([(1, d_date, d_date)], result)
def testDateTime(self):
"""Tests the behaviour of insert/select with mysql/DATETIME <-> python/datetime.datetime"""
d_date = datetime.datetime(2010, 02, 11, 13, 37, 42)
d_string = "2010-02-11 13:37:42"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tbldate")
cur.execute("create table tbldate (test_id int(11) DEFAULT NULL, test_date datetime DEFAULT NULL, test_date2 datetime DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1")
cur.execute("insert into tbldate (test_id, test_date, test_date2) values (%s, '" + d_string + "', %s)", (1, d_date))
# Make sure our insert was correct
cur.execute("select test_id from tbldate where test_date = test_date2")
result = cur.fetchall()
self.assertEquals([(1, )], result)
# Make sure select gets the right value back
cur.execute("select test_id, test_date, test_date2 from tbldate where test_date = test_date2")
result = cur.fetchall()
self.assertEquals([(1, d_date, d_date)], result)
def testZeroDates(self):
"""Tests the behaviour of zero dates"""
zero_datetime = "0000-00-00 00:00:00"
zero_date = "0000-00-00"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tbldate")
cur.execute("create table tbldate (test_id int(11) DEFAULT NULL, test_date date DEFAULT NULL, test_datetime datetime DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1")
cur.execute("insert into tbldate (test_id, test_date, test_datetime) values (%s, %s, %s)", (1, zero_date, zero_datetime))
# Make sure we get None-values back
cur.execute("select test_id, test_date, test_datetime from tbldate where test_id = 1")
result = cur.fetchall()
self.assertEquals([(1, None, None)], result)
def testUnicodeUTF8(self):
peacesign_unicode = u"\u262e"
peacesign_utf8 = "\xe2\x98\xae"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'utf-8', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tblutf")
cur.execute("create table tblutf (test_id int(11) DEFAULT NULL, test_string VARCHAR(32) DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=utf8")
cur.execute("insert into tblutf (test_id, test_string) values (%s, %s)", (1, peacesign_unicode)) # This should be encoded in utf8
cur.execute("insert into tblutf (test_id, test_string) values (%s, %s)", (2, peacesign_utf8))
cur.execute("select test_id, test_string from tblutf")
result = cur.fetchall()
# We expect unicode strings back
self.assertEquals([(1, peacesign_unicode), (2, peacesign_unicode)], result)
def testCharsets(self):
aumlaut_unicode = u"\u00e4"
aumlaut_utf8 = "\xc3\xa4"
aumlaut_latin1 = "\xe4"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'utf8', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tblutf")
cur.execute("create table tblutf (test_mode VARCHAR(32) DEFAULT NULL, test_utf VARCHAR(32) DEFAULT NULL, test_latin1 VARCHAR(32)) ENGINE=MyISAM DEFAULT CHARSET=utf8")
# We insert the same character using two different encodings
cur.execute("set names utf8")
cur.execute("insert into tblutf (test_mode, test_utf, test_latin1) values ('utf8', _utf8'" + aumlaut_utf8 + "', _latin1'" + aumlaut_latin1 + "')")
cur.execute("set names latin1")
cur.execute("insert into tblutf (test_mode, test_utf, test_latin1) values ('latin1', _utf8'" + aumlaut_utf8 + "', _latin1'" + aumlaut_latin1 + "')")
# We expect the driver to always give us unicode strings back
expected = [(u"utf8", aumlaut_unicode, aumlaut_unicode), (u"latin1", aumlaut_unicode, aumlaut_unicode)]
# Fetch and test with different charsets
for charset in ("latin1", "utf8", "cp1250"):
cur.execute("set names " + charset)
cur.execute("select test_mode, test_utf, test_latin1 from tblutf")
result = cur.fetchall()
self.assertEquals(result, expected)
+ def testBinary(self):
+ peacesign_binary = "\xe2\x98\xae"
+ peacesign_binary2 = "\xe2\x98\xae" * 10
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ password = DB_PASSWD, db = DB_DB,
+ charset = 'latin-1', use_unicode = True)
+
+ cur = cnn.cursor()
+ cur.execute("drop table if exists tblbin")
+ cur.execute("create table tblbin (test_id int(11) DEFAULT NULL, test_binary VARBINARY(30) DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=utf8")
+
+ cur.execute("insert into tblbin (test_id, test_binary) values (%s, %s)", (1, peacesign_binary))
+ cur.execute("insert into tblbin (test_id, test_binary) values (%s, %s)", (2, peacesign_binary2))
+
+ cur.execute("select test_id, test_binary from tblbin")
+ result = cur.fetchall()
+
+ # We expect binary strings back
+ self.assertEquals([(1, peacesign_binary),(2, peacesign_binary2)], result)
+
+ def testBlob(self):
+ peacesign_binary = "\xe2\x98\xae"
+ peacesign_binary2 = "\xe2\x98\xae" * 1024
+
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ password = DB_PASSWD, db = DB_DB,
+ charset = 'latin-1', use_unicode = True)
+
+ cur = cnn.cursor()
+ cur.execute("drop table if exists tblblob")
+ cur.execute("create table tblblob (test_id int(11) DEFAULT NULL, test_blob BLOB DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=utf8")
+
+ cur.execute("insert into tblblob (test_id, test_blob) values (%s, %s)", (1, peacesign_binary))
+ cur.execute("insert into tblblob (test_id, test_blob) values (%s, %s)", (2, peacesign_binary2))
+
+ cur.execute("select test_id, test_blob from tblblob")
+ result = cur.fetchall()
+
+ # We expect binary strings back
+ self.assertEquals([(1, peacesign_binary),(2, peacesign_binary2)], result)
if __name__ == '__main__':
unittest.main()
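As background for testLengthCodedBinary above, a standalone sketch of MySQL's length-coded integer encoding; the function below is illustrative only, not the driver's API. A first byte under 251 is the value itself, while 252, 253, and 254 announce a little-endian 2-, 3-, or 8-byte integer.

# Illustrative pure-Python decoder; the real driver does this in Cython.
def read_length_coded_binary(data):
    first = ord(data[0])
    if first < 251:
        return first, 1                  # inline value, one byte consumed
    widths = {252: 2, 253: 3, 254: 8}    # 251 only appears in row data packets
    n = widths[first]
    value = 0
    for i in range(n):
        value |= ord(data[1 + i]) << (8 * i)  # little-endian bytes
    return value, 1 + n

assert read_length_coded_binary("\xfc\xff\xff") == (0xFFFF, 3)  # mirrors the test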
|
mthurlin/gevent-MySQL
|
dd17caca3b4ef2935238276c905ad5636abd93ff
|
avoid decoding binary strings.
|
diff --git a/lib/geventmysql/geventmysql._mysql.pyx b/lib/geventmysql/geventmysql._mysql.pyx
index 6c2465d..0ea1385 100644
--- a/lib/geventmysql/geventmysql._mysql.pyx
+++ b/lib/geventmysql/geventmysql._mysql.pyx
@@ -486,712 +486,713 @@ cdef class Buffer:
s = self._buff[self._position] + (self._buff[self._position + 1] << 8)
self._position = self._position + 2
return s
cdef object _read_bytes(self, int n):
"""reads n bytes from buffer, updates position, and returns bytes as a python string"""
if n > (self._limit - self._position):
raise BufferUnderflowError()
else:
s = PyString_FromStringAndSize(<char *>(self._buff + self._position), n)
self._position = self._position + n
return s
def read_bytes(self, int n = -1):
"""Reads n bytes from buffer, updates position, and returns bytes as a python string,
if there are no n bytes available, a :exc:`BufferUnderflowError` is raised."""
if n == -1:
return self._read_bytes(self._limit - self._position)
else:
return self._read_bytes(n)
def read_bytes_until(self, int b):
"""Reads bytes until character b is found, or end of buffer is reached in which case it will raise a :exc:`BufferUnderflowError`."""
cdef int n, maxlen
cdef char *zpos, *start
if b < 0 or b > 255:
raise BufferInvalidArgumentError("b must be in range [0..255]")
maxlen = self._limit - self._position
start = <char *>(self._buff + self._position)
zpos = <char *>(memchr(start, b, maxlen))
if zpos == NULL:
raise BufferUnderflowError()
else:
n = zpos - start
s = PyString_FromStringAndSize(start, n)
self._position = self._position + n + 1
return s
def read_line(self, int include_separator = 0):
"""Reads a single line of bytes from the buffer where the end of the line is indicated by either 'LF' or 'CRLF'.
The line will be returned as a string not including the line-separator. Optionally *include_separator* can be specified
to make the method also return the line-separator."""
cdef int n, maxlen
cdef char *zpos, *start
maxlen = self._limit - self._position
start = <char *>(self._buff + self._position)
zpos = <char *>(memchr(start, 10, maxlen))
if maxlen == 0:
raise BufferUnderflowError()
if zpos == NULL:
raise BufferUnderflowError()
n = zpos - start
if self._buff[self._position + n - 1] == 13: #\r\n
if include_separator:
s = PyString_FromStringAndSize(start, n + 1)
self._position = self._position + n + 1
else:
s = PyString_FromStringAndSize(start, n - 1)
self._position = self._position + n + 1
else: #\n
if include_separator:
s = PyString_FromStringAndSize(start, n + 1)
self._position = self._position + n + 1
else:
s = PyString_FromStringAndSize(start, n)
self._position = self._position + n + 1
return s
def write_bytes(self, s):
"""Writes a number of bytes given by the python string s to the buffer and updates position. Raises
:exc:`BufferOverflowError` if you try to write beyond the current :attr:`limit`."""
cdef char *b
cdef Py_ssize_t n
PyString_AsStringAndSize(s, &b, &n)
if n > (self._limit - self._position):
raise BufferOverflowError()
else:
memcpy(self._buff + self._position, b, n)
self._position = self._position + n
return n
def write_buffer(self, Buffer other):
"""writes available bytes from other buffer to this buffer"""
self.write_bytes(other.read_bytes(-1)) #TODO use copy
cdef int _write_byte(self, unsigned int b) except -1:
"""writes a single byte to the buffer and updates position"""
if self._position + 1 <= self._limit:
self._buff[self._position] = b
self._position = self._position + 1
return 1
else:
raise BufferOverflowError()
def write_byte(self, unsigned int b):
"""writes a single byte to the buffer and updates position"""
return self._write_byte(b)
def write_int(self, unsigned int i):
"""writes a 32 bit integer to the buffer and updates position (little-endian)"""
if self._position + 4 <= self._limit:
self._buff[self._position + 0] = (i >> 0) & 0xFF
self._buff[self._position + 1] = (i >> 8) & 0xFF
self._buff[self._position + 2] = (i >> 16) & 0xFF
self._buff[self._position + 3] = (i >> 24) & 0xFF
self._position = self._position + 4
return 4
else:
raise BufferOverflowError()
def write_short(self, unsigned int i):
"""writes a 16 bit integer to the buffer and updates position (little-endian)"""
if self._position + 2 <= self._limit:
self._buff[self._position + 0] = (i >> 0) & 0xFF
self._buff[self._position + 1] = (i >> 8) & 0xFF
self._position = self._position + 2
return 2
else:
raise BufferOverflowError()
def hex_dump(self, out = None):
highlight1 = "\033[34m"
highlight2 = "\033[32m"
default = "\033[0m"
if out is None: out = sys.stdout
import string
out.write('<concurrence.io.Buffer id=%x, position=%d, limit=%d, capacity=%d>\n' % (id(self), self.position, self.limit, self._capacity))
printable = set(string.printable)
whitespace = set(string.whitespace)
x = 0
s1 = []
s2 = []
while x < self._capacity:
v = self[x]
if x < self.position:
s1.append('%s%02x%s' % (highlight1, v, default))
elif x < self.limit:
s1.append('%s%02x%s' % (highlight2, v, default))
else:
s1.append('%02x' % v)
c = chr(v)
if c in printable and not c in whitespace:
s2.append(c)
else:
s2.append('.')
x += 1
if x % 16 == 0:
out.write('%04x' % (x - 16) + ' ' + ' '.join(s1[:8]) + ' ' + ' '.join(s1[8:]) + ' ' + ''.join(s2[:8]) + ' ' + (''.join(s2[8:]) + '\n'))
s1 = []
s2 = []
out.flush()
def __repr__(self):
import cStringIO
sio = cStringIO.StringIO()
self.hex_dump(sio)
return sio.getvalue()
def __str__(self):
return repr(self)
class PacketReadError(Exception):
pass
MAX_PACKET_SIZE = 4 * 1024 * 1024 #4mb
cdef class PacketReader:
cdef int oversize
cdef readonly int number
cdef readonly int length #length in bytes of the current packet in the buffer
cdef readonly int command
cdef readonly int start #position of start of packet in buffer
cdef readonly int end
cdef public object encoding
cdef public object use_unicode
cdef readonly Buffer buffer #the current read buffer
cdef readonly Buffer packet #the current packet (could be normal or oversize packet):
cdef Buffer normal_packet #the normal packet
cdef Buffer oversize_packet #if we are reading an oversize packet, this is where we keep the data
def __init__(self, Buffer buffer):
self.oversize = 0
self.encoding = None
self.use_unicode = False
self.buffer = buffer
self.normal_packet = buffer.duplicate()
self.oversize_packet = buffer.duplicate()
self.packet = self.normal_packet
cdef int _read(self) except PACKET_READ_ERROR:
"""this method scans the buffer for packets, reporting the start, end of packet
or whether the packet in the buffer is incomplete and more data is needed"""
cdef int r
cdef Buffer buffer
buffer = self.buffer
self.command = 0
self.start = 0
self.end = 0
r = buffer._remaining()
if self.oversize == 0: #normal packet reading mode
#print 'normal mode', r
if r < 4:
#print 'rem < 4 return'
return PACKET_READ_NONE #incomplete header
#these four reads will always succeed because r >= 4
self.length = (buffer._read_byte()) + (buffer._read_byte() << 8) + (buffer._read_byte() << 16) + 4
self.number = buffer._read_byte()
if self.length <= r:
#a complete packet sitting in buffer
self.start = buffer._position - 4
self.end = self.start + self.length
self.command = buffer._buff[buffer._position]
buffer._skip(self.length - 4) #skip rest of packet
#print 'single packet recvd', self.length, self.command
if self.length < r:
return PACKET_READ_TRUE | PACKET_READ_START | PACKET_READ_END | PACKET_READ_MORE
else:
return PACKET_READ_TRUE | PACKET_READ_START | PACKET_READ_END
#return self.length < r #if l was smaller, there is more, otherwise l == r and buffer is empty
else:
#print 'incomplete packet in buffer', buffer._position, self.length
if self.length > buffer._capacity:
#print 'start of oversize packet', self.length
self.start = buffer._position - 4
self.end = buffer._limit
self.command = buffer._buff[buffer._position]
buffer._position = buffer._limit #skip rest of buffer
self.oversize = self.length - r #left to do
return PACKET_READ_TRUE | PACKET_READ_START
else:
#print 'small incomplete packet', self.length, buffer._position
buffer._skip(-4) #rewind to start of incomplete packet
return PACKET_READ_NONE #incomplete packet
else: #busy reading an oversized packet
#print 'oversize mode', r, self.oversize, buffer.position, buffer.limit
self.start = buffer._position
if self.oversize < r:
buffer._skip(self.oversize) #skip rest of buffer
self.oversize = 0
else:
buffer._skip(r) #skip rest of buffer or remaining oversize
self.oversize = self.oversize - r
self.end = buffer._position
if self.oversize == 0:
#print 'oversize packet recvd'
return PACKET_READ_TRUE | PACKET_READ_END | PACKET_READ_MORE
else:
#print 'some data of oversize packet recvd'
return PACKET_READ_TRUE
def read(self):
return self._read()
cdef int _read_packet(self) except PACKET_READ_ERROR:
cdef int r, size, max_packet_size
r = self._read()
if r & PACKET_READ_TRUE:
if (r & PACKET_READ_START) and (r & PACKET_READ_END):
#normal sized packet, read entirely
self.packet = self.normal_packet
self.packet._position, self.packet._limit = self.start + 4, self.end
elif (r & PACKET_READ_START) and not (r & PACKET_READ_END):
#print 'start of oversize', self.end - self.start, self.length
#first create oversize_packet if necessary:
if self.oversize_packet._capacity < self.length:
#find the first doubling of the buffer size that fits the oversize packet
size = self.buffer._capacity
while size < self.length:
size = size * 2
if size >= MAX_PACKET_SIZE:
raise PacketReadError("oversized packet will not fit in MAX_PACKET_SIZE, length: %d, MAX_PACKET_SIZE: %d" % (self.length, MAX_PACKET_SIZE))
#print 'creating oversize packet', size
self.oversize_packet = Buffer(size)
self.oversize_packet.copy(self.buffer, self.start, 0, self.end - self.start)
self.packet = self.oversize_packet
self.packet._position, self.packet._limit = 4, self.end - self.start
else:
#end or middle part of oversized packet
self.oversize_packet.copy(self.buffer, self.start, self.oversize_packet._limit, self.end - self.start)
self.oversize_packet._limit = self.oversize_packet._limit + (self.end - self.start)
return r
def read_packet(self):
return self._read_packet()
cdef _read_length_coded_binary(self):
cdef unsigned int n, v
cdef unsigned long long vw
cdef Buffer packet
packet = self.packet
if packet._position + 1 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position]
if n < 251:
packet._position = packet._position + 1
return n
elif n == 251:
assert False, 'unexpected, only valid for row data packet'
elif n == 252:
#16 bit word
if packet._position + 3 > packet._limit: raise BufferUnderflowError()
v = packet._buff[packet._position + 1] | ((packet._buff[packet._position + 2]) << 8)
packet._position = packet._position + 3
return v
elif n == 253:
#24 bit word
if packet._position + 4 > packet._limit: raise BufferUnderflowError()
v = packet._buff[packet._position + 1] | ((packet._buff[packet._position + 2]) << 8) | ((packet._buff[packet._position + 3]) << 16)
packet._position = packet._position + 4
return v
else:
#64 bit word
if packet._position + 9 > packet._limit: raise BufferUnderflowError()
vw = 0
vw |= (<unsigned long long>packet._buff[packet._position + 1]) << 0
vw |= (<unsigned long long>packet._buff[packet._position + 2]) << 8
vw |= (<unsigned long long>packet._buff[packet._position + 3]) << 16
vw |= (<unsigned long long>packet._buff[packet._position + 4]) << 24
vw |= (<unsigned long long>packet._buff[packet._position + 5]) << 32
vw |= (<unsigned long long>packet._buff[packet._position + 6]) << 40
vw |= (<unsigned long long>packet._buff[packet._position + 7]) << 48
vw |= (<unsigned long long>packet._buff[packet._position + 8]) << 56
packet._position = packet._position + 9
return vw
def read_length_coded_binary(self):
return self._read_length_coded_binary()
cdef _read_bytes_length_coded(self):
cdef unsigned int n, w
cdef Buffer packet
packet = self.packet
if packet._position + 1 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position]
w = 1
if n >= 251:
if n == 251:
packet._position = packet._position + 1
return None
elif n == 252:
if packet._position + 2 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position + 1] | ((packet._buff[packet._position + 2]) << 8)
w = 3
elif n == 253:
#24 bit word
if packet._position + 4 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position + 1] | ((packet._buff[packet._position + 2]) << 8) | ((packet._buff[packet._position + 3]) << 16)
w = 4
elif n == 254:
#64 bit word
if packet._position + 9 > packet._limit: raise BufferUnderflowError()
n = 0
n |= (<unsigned long long>packet._buff[packet._position + 1]) << 0
n |= (<unsigned long long>packet._buff[packet._position + 2]) << 8
n |= (<unsigned long long>packet._buff[packet._position + 3]) << 16
n |= (<unsigned long long>packet._buff[packet._position + 4]) << 24
n |= (<unsigned long long>packet._buff[packet._position + 5]) << 32
n |= (<unsigned long long>packet._buff[packet._position + 6]) << 40
n |= (<unsigned long long>packet._buff[packet._position + 7]) << 48
n |= (<unsigned long long>packet._buff[packet._position + 8]) << 56
w = 9
else:
assert False, 'not implemented yet, n: %02x' % n
if (n + w) > (packet._limit - packet._position):
raise BufferUnderflowError()
packet._position = packet._position + w
s = PyString_FromStringAndSize(<char *>(packet._buff + packet._position), n)
packet._position = packet._position + n
return s
def read_bytes_length_coded(self):
return self._read_bytes_length_coded()
def read_field_type(self):
cdef int n
cdef Buffer packet
packet = self.packet
n = packet._read_byte()
packet._skip(n) #catalog
n = packet._read_byte()
packet._skip(n) #db
n = packet._read_byte()
packet._skip(n) #table
n = packet._read_byte()
packet._skip(n) #org_table
n = packet._read_byte()
name = packet._read_bytes(n)
n = packet._read_byte()
packet._skip(n) #org_name
packet._skip(1)
charsetnr = packet._read_bytes(2)
n = packet._skip(4)
n = packet.read_byte() #type
return (name, n, charsetnr)
cdef _string_to_int(self, object s):
if s == None:
return None
else:
return int(s)
cdef _string_to_float(self, object s):
if s == None:
return None
else:
return float(s)
cdef _read_datestring(self):
cdef unsigned int n
cdef Buffer packet
packet = self.packet
if packet._position + 1 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position]
if n == 251:
packet._position = packet._position + 1
return None
packet._position = packet._position + 1
s = PyString_FromStringAndSize(<char *>(packet._buff + packet._position), n)
packet._position = packet._position + n
return s
cdef _datestring_to_date(self, object s):
if not s or s == "0000-00-00":
return None
parts = s.split("-")
try:
assert len(parts) == 3
d = datetime.date(*map(int, parts))
except (AssertionError, ValueError):
raise ValueError("Unhandled date format: %r" % (s, ))
return d
cdef _datestring_to_datetime(self, object s):
if not s:
return None
datestring, timestring = s.split(" ")
_date = self._datestring_to_date(datestring)
if _date is None:
return None
parts = timestring.split(":")
try:
assert len(parts) == 3
d = datetime.datetime(_date.year, _date.month, _date.day, *map(int, parts))
except (AssertionError, ValueError):
raise ValueError("Unhandled datetime format: %r" % (s, ))
return d
cdef int _read_row(self, object row, object fields, int field_count) except PACKET_READ_ERROR:
cdef int i, r
cdef int decode
if self.encoding:
decode = 1
encoding = self.encoding
else:
decode = 0
r = self._read_packet()
if r & PACKET_READ_END: #whole packet recv
if self.packet._buff[self.packet._position] == 0xFE:
return r | PACKET_READ_EOF
else:
i = 0
int_types = INT_TYPES
float_types = FLOAT_TYPES
string_types = STRING_TYPES
date_type = FIELD_TYPE.DATE
datetime_type = FIELD_TYPE.DATETIME
while i < field_count:
t = fields[i][1] #type_code
if t in int_types:
row[i] = self._string_to_int(self._read_bytes_length_coded())
elif t in string_types:
row[i] = self._read_bytes_length_coded()
if row[i] is not None and (self.encoding or self.use_unicode):
bytes = fields[i][2]
nr = ord(bytes[1]) << 8 | ord(bytes[0])
- row[i] = row[i].decode(charset_nr[nr])
+ if charset_nr[nr] != 'binary':
+ row[i] = row[i].decode(charset_nr[nr])
if not self.use_unicode:
row[i] = row[i].encode(self.encoding)
elif t in float_types:
row[i] = self._string_to_float(self._read_bytes_length_coded())
elif t == date_type:
row[i] = self._datestring_to_date(self._read_datestring())
elif t == datetime_type:
row[i] = self._datestring_to_datetime(self._read_datestring())
else:
row[i] = self._read_bytes_length_coded()
i = i + 1
return r
def read_rows(self, object fields, int row_count):
cdef int r, i, field_count
field_count = len(fields)
i = 0
r = 0
rows = []
row = [None] * field_count
add = rows.append
#print "Reading fields", len(fields)
while i < row_count:
r = self._read_row(row, fields, field_count)
if r & PACKET_READ_END:
if r & PACKET_READ_EOF:
break
else:
add(tuple(row))
if not (r & PACKET_READ_MORE):
break
i = i + 1
return r, rows
cdef enum:
PROXY_STATE_UNDEFINED = -2
PROXY_STATE_ERROR = -1
PROXY_STATE_INIT = 0
PROXY_STATE_READ_AUTH = 1
PROXY_STATE_READ_AUTH_RESULT = 2
PROXY_STATE_READ_AUTH_OLD_PASSWORD = 3
PROXY_STATE_READ_AUTH_OLD_PASSWORD_RESULT = 4
PROXY_STATE_READ_COMMAND = 5
PROXY_STATE_READ_RESULT = 6
PROXY_STATE_READ_RESULT_FIELDS = 7
PROXY_STATE_READ_RESULT_ROWS = 8
PROXY_STATE_READ_RESULT_FIELDS_ONLY = 9
PROXY_STATE_FINISHED = 10
class PROXY_STATE:
UNDEFINED = PROXY_STATE_UNDEFINED
ERROR = PROXY_STATE_ERROR
INIT = PROXY_STATE_INIT
FINISHED = PROXY_STATE_FINISHED
READ_AUTH = PROXY_STATE_READ_AUTH
READ_AUTH_RESULT = PROXY_STATE_READ_AUTH_RESULT
READ_AUTH_OLD_PASSWORD = PROXY_STATE_READ_AUTH_OLD_PASSWORD
READ_AUTH_OLD_PASSWORD_RESULT = PROXY_STATE_READ_AUTH_OLD_PASSWORD_RESULT
READ_COMMAND = PROXY_STATE_READ_COMMAND
READ_RESULT = PROXY_STATE_READ_RESULT
READ_RESULT_FIELDS = PROXY_STATE_READ_RESULT_FIELDS
READ_RESULT_ROWS = PROXY_STATE_READ_RESULT_ROWS
READ_RESULT_FIELDS_ONLY = PROXY_STATE_READ_RESULT_FIELDS_ONLY
SERVER_STATES = set([PROXY_STATE.INIT, PROXY_STATE.READ_AUTH_RESULT, PROXY_STATE.READ_AUTH_OLD_PASSWORD_RESULT,
PROXY_STATE.READ_RESULT, PROXY_STATE.READ_RESULT_FIELDS, PROXY_STATE.READ_RESULT_ROWS,
PROXY_STATE.READ_RESULT_FIELDS_ONLY, PROXY_STATE.FINISHED])
CLIENT_STATES = set([PROXY_STATE.READ_AUTH, PROXY_STATE.READ_AUTH_OLD_PASSWORD, PROXY_STATE.READ_COMMAND])
AUTH_RESULT_STATES = set([PROXY_STATE.READ_AUTH_OLD_PASSWORD_RESULT, PROXY_STATE.READ_AUTH_RESULT])
READ_RESULT_STATES = set([PROXY_STATE.READ_RESULT, PROXY_STATE.READ_RESULT_FIELDS, PROXY_STATE.READ_RESULT_ROWS, PROXY_STATE.READ_RESULT_FIELDS_ONLY])
class ProxyProtocolException(Exception):
pass
cdef class ProxyProtocol:
cdef readonly int state
cdef readonly int number
def __init__(self, initial_state = PROXY_STATE_INIT):
self.reset(initial_state)
def reset(self, int state):
self.state = state
self.number = 0
cdef int _check_number(self, PacketReader reader) except -1:
if self.state == PROXY_STATE_READ_COMMAND:
self.number = 0
if self.number != reader.number:
self.state = PROXY_STATE_ERROR
raise ProxyProtocolException('packet number out of sync')
self.number = self.number + 1
self.number = self.number % 256
def read_server(self, PacketReader reader):
cdef int read_result, prev_state
prev_state = self.state
while 1:
read_result = reader._read()
if read_result & PACKET_READ_START:
self._check_number(reader)
if read_result & PACKET_READ_END: #packet recvd
if self.state == PROXY_STATE_INIT:
#server handshake recvd
#server could have sent an error instead of the initial handshake
self.state = PROXY_STATE_READ_AUTH
elif self.state == PROXY_STATE_READ_AUTH_RESULT:
#server auth result recvd
if reader.command == 0xFE:
self.state = PROXY_STATE_READ_AUTH_OLD_PASSWORD
elif reader.command == 0x00: #OK
self.state = PROXY_STATE_READ_COMMAND
elif self.state == PROXY_STATE_READ_AUTH_OLD_PASSWORD_RESULT:
#server auth old password result recvd
self.state = PROXY_STATE_READ_COMMAND
elif self.state == PROXY_STATE_READ_RESULT:
if reader.command == 0x00: #no result set but ok
#server result recvd OK
self.state = PROXY_STATE_READ_COMMAND
elif reader.command == 0xFF:
#no result set error
self.state = PROXY_STATE_READ_COMMAND
else:
#server result recv result set header
self.state = PROXY_STATE_READ_RESULT_FIELDS
elif self.state == PROXY_STATE_READ_RESULT_FIELDS:
if reader.command == 0xFE: #EOF for fields
#server result fields recvd
self.state = PROXY_STATE_READ_RESULT_ROWS
elif self.state == PROXY_STATE_READ_RESULT_ROWS:
if reader.command == 0xFE: #EOF for rows
#server result rows recvd
self.state = PROXY_STATE_READ_COMMAND
elif self.state == PROXY_STATE_READ_RESULT_FIELDS_ONLY:
if reader.command == 0xFE: #EOF for fields
#server result fields only recvd
self.state = PROXY_STATE_READ_COMMAND
else:
self.state = PROXY_STATE_ERROR
raise ProxyProtocolException('unexpected packet')
if self.state != prev_state:
break
if not (read_result & PACKET_READ_MORE):
break
return read_result, self.state, prev_state
def read_client(self, PacketReader reader):
cdef int read_result, prev_state
prev_state = self.state
while 1:
read_result = reader._read()
if read_result & PACKET_READ_START:
self._check_number(reader)
if read_result & PACKET_READ_END: #packet recvd
if self.state == PROXY_STATE_READ_AUTH:
#client auth recvd
self.state = PROXY_STATE_READ_AUTH_RESULT
elif self.state == PROXY_STATE_READ_AUTH_OLD_PASSWORD:
#client auth old pwd recvd
self.state = PROXY_STATE_READ_AUTH_OLD_PASSWORD_RESULT
elif self.state == PROXY_STATE_READ_COMMAND:
#client cmd recvd
if reader.command == COMMAND_LIST: #list cmd
self.state = PROXY_STATE_READ_RESULT_FIELDS_ONLY
elif reader.command == COMMAND_QUIT: #COM_QUIT
self.state = PROXY_STATE_FINISHED
else:
self.state = PROXY_STATE_READ_RESULT
else:
self.state = PROXY_STATE_ERROR
raise ProxyProtocolException('unexpected packet')
if self.state != prev_state:
break
if not (read_result & PACKET_READ_MORE):
break
return read_result, self.state, prev_state
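The effect of the one-line guard added above, illustrated with a hedged sketch; charset_nr here stands in for the driver's collation-number-to-charset table, with 63 (binary) and 33 (utf8) taken from the charset map in client.py.

# Illustrative sketch (Python 2): binary collations skip decoding.
charset_nr = {63: 'binary', 33: 'utf8'}

def maybe_decode(raw, nr):
    if charset_nr[nr] != 'binary':        # the new check from this commit
        return raw.decode(charset_nr[nr]) # text columns become unicode
    return raw                            # BLOB/VARBINARY stay raw bytes

assert isinstance(maybe_decode('\xe2\x98\xae', 63), str)      # untouched
assert isinstance(maybe_decode('\xe2\x98\xae', 33), unicode)  # decoded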
|
mthurlin/gevent-MySQL
|
8380290de33764ca2376b20b55327444c5a32094
|
added binary/blob field's tests.
|
diff --git a/test/testmysql.py b/test/testmysql.py
index 62c2480..b6c0e44 100644
--- a/test/testmysql.py
+++ b/test/testmysql.py
@@ -93,521 +93,561 @@ class TestMySQL(unittest.TestCase):
def query(s):
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB)
cur = cnn.cursor()
cur.execute("select sleep(%d)" % s)
cur.close()
cnn.close()
start = time.time()
ch1 = gevent.spawn(query, 1)
ch2 = gevent.spawn(query, 2)
ch3 = gevent.spawn(query, 3)
gevent.joinall([ch1, ch2, ch3])
end = time.time()
self.assertAlmostEqual(3.0, end - start, places = 1)
def testMySQLDBAPI(self):
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB)
cur = cnn.cursor()
cur.execute("truncate tbltest")
for i in range(10):
cur.execute("insert into tbltest (test_id, test_string) values (%d, 'test%d')" % (i, i))
cur.close()
cur = cnn.cursor()
cur.execute("select test_id, test_string from tbltest")
self.assertEquals((0, 'test0'), cur.fetchone())
#check that fetchall gets the remainder
self.assertEquals([(1, 'test1'), (2, 'test2'), (3, 'test3'), (4, 'test4'), (5, 'test5'), (6, 'test6'), (7, 'test7'), (8, 'test8'), (9, 'test9')], cur.fetchall())
#another query on the same cursor should work
cur.execute("select test_id, test_string from tbltest")
#fetch some but not all
self.assertEquals((0, 'test0'), cur.fetchone())
self.assertEquals((1, 'test1'), cur.fetchone())
self.assertEquals((2, 'test2'), cur.fetchone())
#close should work even with a half-read resultset
cur.close()
#this should not work, cursor was closed
try:
cur.execute("select * from tbltest")
self.fail("expected exception")
except dbapi.ProgrammingError:
pass
def testLargePackets(self):
cnn = client.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB)
cnn.query("truncate tbltest")
c = cnn.buffer.capacity
blob = '0123456789'
while 1:
cnn.query("insert into tbltest (test_id, test_blob) values (%d, '%s')" % (len(blob), blob))
if len(blob) > (c * 2): break
blob = blob * 2
rs = cnn.query("select test_id, test_blob from tbltest")
for row in rs:
self.assertEquals(row[0], len(row[1]))
self.assertEquals(blob[:row[0]], row[1])
rs.close()
#re-read; the second time, the oversize packet buffer is already present
rs = cnn.query("select test_id, test_blob from tbltest")
for row in rs:
self.assertEquals(row[0], len(row[1]))
self.assertEquals(blob[:row[0]], row[1])
rs.close()
cnn.close()
#have a very low max packet size for oversize packets
#and check that exception is thrown when trying to read larger packets
from geventmysql import _mysql
_mysql.MAX_PACKET_SIZE = 1024 * 4
cnn = client.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB)
try:
rs = cnn.query("select test_id, test_blob from tbltest")
for row in rs:
self.assertEquals(row[0], len(row[1]))
self.assertEquals(blob[:row[0]], row[1])
self.fail()
except PacketReadError:
pass
finally:
try:
rs.close()
except:
pass
cnn.close()
def testEscapeArgs(self):
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB)
cur = cnn.cursor()
cur.execute("truncate tbltest")
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (1, 'piet'))
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (2, 'klaas'))
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (3, "pi'et"))
#classic sql injection, would return all rows if no proper escaping is done
cur.execute("select test_id, test_string from tbltest where test_string = %s", ("piet' OR 'a' = 'a",))
self.assertEquals([], cur.fetchall()) #assert no rows are found
#but we should still be able to find the piet with the apostrophe in its name
cur.execute("select test_id, test_string from tbltest where test_string = %s", ("pi'et",))
self.assertEquals([(3, "pi'et")], cur.fetchall())
#also we should be able to insert and retrieve blob/string with all possible bytes transparently
chars = ''.join([chr(i) for i in range(256)])
cur.execute("insert into tbltest (test_id, test_string, test_blob) values (%s, %s, %s)", (4, chars, chars))
cur.execute("select test_string, test_blob from tbltest where test_id = %s", (4,))
#self.assertEquals([(chars, chars)], cur.fetchall())
s, b = cur.fetchall()[0]
#test blob
self.assertEquals(256, len(b))
self.assertEquals(chars, b)
#test string
self.assertEquals(256, len(s))
self.assertEquals(chars, s)
cur.close()
cnn.close()
def testSelectUnicode(self):
s = u'r\xc3\xa4ksm\xc3\xb6rg\xc3\xa5s'
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("truncate tbltest")
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (1, 'piet'))
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (2, s))
cur.execute(u"insert into tbltest (test_id, test_string) values (%s, %s)", (3, s))
cur.execute("select test_id, test_string from tbltest")
result = cur.fetchall()
self.assertEquals([(1, u'piet'), (2, s), (3, s)], result)
#test that we can still cleanly round-trip a blob (it should not be encoded if we pass
#it as a 'str' argument), even though we pass the query itself as unicode
blob = ''.join([chr(i) for i in range(256)])
cur.execute(u"insert into tbltest (test_id, test_blob) values (%s, %s)", (4, blob))
cur.execute("select test_blob from tbltest where test_id = %s", (4,))
b2 = cur.fetchall()[0][0]
self.assertEquals(str, type(b2))
self.assertEquals(256, len(b2))
self.assertEquals(blob, b2)
def testAutoInc(self):
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB)
cur = cnn.cursor()
cur.execute("truncate tblautoincint")
cur.execute("ALTER TABLE tblautoincint AUTO_INCREMENT = 100")
cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(100, cur.lastrowid)
cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(101, cur.lastrowid)
cur.execute("ALTER TABLE tblautoincint AUTO_INCREMENT = 4294967294")
cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(4294967294, cur.lastrowid)
cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(4294967295, cur.lastrowid)
cur.execute("truncate tblautoincbigint")
cur.execute("ALTER TABLE tblautoincbigint AUTO_INCREMENT = 100")
cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(100, cur.lastrowid)
cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(101, cur.lastrowid)
cur.execute("ALTER TABLE tblautoincbigint AUTO_INCREMENT = 18446744073709551614")
cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(18446744073709551614, cur.lastrowid)
#this fails on mysql, but that is a mysql problem
#cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
#self.assertEqual(1, cur.rowcount)
#self.assertEqual(18446744073709551615, cur.lastrowid)
cur.close()
cnn.close()
def testLengthCodedBinary(self):
from geventmysql._mysql import Buffer, BufferUnderflowError
from geventmysql.mysql import PacketReader
def create_reader(bytes):
b = Buffer(1024)
for byte in bytes:
b.write_byte(byte)
b.flip()
p = PacketReader(b)
p.packet.position = b.position
p.packet.limit = b.limit
return p
p = create_reader([100])
self.assertEquals(100, p.read_length_coded_binary())
self.assertEquals(p.packet.position, p.packet.limit)
try:
p.read_length_coded_binary()
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([252])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([252, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
p = create_reader([252, 0xff, 0xff])
self.assertEquals(0xFFFF, p.read_length_coded_binary())
self.assertEquals(3, p.packet.limit)
self.assertEquals(3, p.packet.position)
try:
p = create_reader([253])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([253, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([253, 0xff, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
p = create_reader([253, 0xff, 0xff, 0xff])
self.assertEquals(0xFFFFFF, p.read_length_coded_binary())
self.assertEquals(4, p.packet.limit)
self.assertEquals(4, p.packet.position)
try:
p = create_reader([254])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([254, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([254, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
p = create_reader([254, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff])
self.assertEquals(9, p.packet.limit)
self.assertEquals(0, p.packet.position)
self.assertEquals(0xFFFFFFFFFFFFFFFFL, p.read_length_coded_binary())
self.assertEquals(9, p.packet.limit)
self.assertEquals(9, p.packet.position)
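#For reference, the wire format exercised by this test can be sketched in a few
#lines of plain Python: values below 251 are a single byte, and prefixes 252,
#253 and 254 introduce 2-, 3- and 8-byte little-endian integers. This is a
#hedged re-implementation of the same rules PacketReader applies, not the
#driver's own API.
def encode_lcb(n):
    """Encode an integer as a MySQL length-coded binary (sketch)."""
    if n < 251:
        return chr(n)
    elif n < (1 << 16):
        return '\xfc' + ''.join(chr((n >> s) & 0xFF) for s in (0, 8))
    elif n < (1 << 24):
        return '\xfd' + ''.join(chr((n >> s) & 0xFF) for s in (0, 8, 16))
    else:
        return '\xfe' + ''.join(chr((n >> s) & 0xFF) for s in range(0, 64, 8))
def decode_lcb(s):
    """Decode a length-coded binary; returns (value, bytes consumed)."""
    first = ord(s[0])
    if first < 251:
        return first, 1
    width = {252: 2, 253: 3, 254: 8}[first]
    if len(s) < 1 + width:
        raise ValueError('underflow') #the reader raises BufferUnderflowError here
    value = sum(ord(s[1 + i]) << (8 * i) for i in range(width))
    return value, 1 + width
assert decode_lcb(encode_lcb(0xFFFF)) == (0xFFFF, 3)
assert decode_lcb(encode_lcb(0xFFFFFF)) == (0xFFFFFF, 4)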
def testBigInt(self):
"""Tests the behaviour of insert/select with bigint/long."""
BIGNUM = 112233445566778899
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tblbigint")
cur.execute("""create table tblbigint (
test_id int(11) DEFAULT NULL,
test_bigint bigint DEFAULT NULL,
test_bigint2 bigint DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1""")
cur.execute("insert into tblbigint (test_id, test_bigint, test_bigint2) values (%s, " + str(BIGNUM) + ", %s)", (1, BIGNUM))
cur.execute(u"insert into tblbigint (test_id, test_bigint, test_bigint2) values (%s, " + str(BIGNUM) + ", %s)", (2, BIGNUM))
# Make sure both our inserts were correct (i.e., the big number was not truncated/modified on insert)
cur.execute("select test_id from tblbigint where test_bigint = test_bigint2")
result = cur.fetchall()
self.assertEquals([(1, ), (2, )], result)
# Make sure select gets the right values (ie, the big number was not truncated/modified when retrieved)
cur.execute("select test_id, test_bigint, test_bigint2 from tblbigint where test_bigint = test_bigint2")
result = cur.fetchall()
self.assertEquals([(1, BIGNUM, BIGNUM), (2, BIGNUM, BIGNUM)], result)
def testDate(self):
"""Tests the behaviour of insert/select with mysql/DATE <-> python/datetime.date"""
d_date = datetime.date(2010, 02, 11)
d_string = "2010-02-11"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tbldate")
cur.execute("create table tbldate (test_id int(11) DEFAULT NULL, test_date date DEFAULT NULL, test_date2 date DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1")
cur.execute("insert into tbldate (test_id, test_date, test_date2) values (%s, '" + d_string + "', %s)", (1, d_date))
# Make sure our insert was correct
cur.execute("select test_id from tbldate where test_date = test_date2")
result = cur.fetchall()
self.assertEquals([(1, )], result)
# Make sure select gets the right value back
cur.execute("select test_id, test_date, test_date2 from tbldate where test_date = test_date2")
result = cur.fetchall()
self.assertEquals([(1, d_date, d_date)], result)
def testDateTime(self):
"""Tests the behaviour of insert/select with mysql/DATETIME <-> python/datetime.datetime"""
d_date = datetime.datetime(2010, 02, 11, 13, 37, 42)
d_string = "2010-02-11 13:37:42"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tbldate")
cur.execute("create table tbldate (test_id int(11) DEFAULT NULL, test_date datetime DEFAULT NULL, test_date2 datetime DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1")
cur.execute("insert into tbldate (test_id, test_date, test_date2) values (%s, '" + d_string + "', %s)", (1, d_date))
# Make sure our insert was correct
cur.execute("select test_id from tbldate where test_date = test_date2")
result = cur.fetchall()
self.assertEquals([(1, )], result)
# Make sure select gets the right value back
cur.execute("select test_id, test_date, test_date2 from tbldate where test_date = test_date2")
result = cur.fetchall()
self.assertEquals([(1, d_date, d_date)], result)
def testZeroDates(self):
"""Tests the behaviour of zero dates"""
zero_datetime = "0000-00-00 00:00:00"
zero_date = "0000-00-00"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tbldate")
cur.execute("create table tbldate (test_id int(11) DEFAULT NULL, test_date date DEFAULT NULL, test_datetime datetime DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1")
cur.execute("insert into tbldate (test_id, test_date, test_datetime) values (%s, %s, %s)", (1, zero_date, zero_datetime))
# Make sure we get None-values back
cur.execute("select test_id, test_date, test_datetime from tbldate where test_id = 1")
result = cur.fetchall()
self.assertEquals([(1, None, None)], result)
def testUnicodeUTF8(self):
peacesign_unicode = u"\u262e"
peacesign_utf8 = "\xe2\x98\xae"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'utf-8', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tblutf")
cur.execute("create table tblutf (test_id int(11) DEFAULT NULL, test_string VARCHAR(32) DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=utf8")
cur.execute("insert into tblutf (test_id, test_string) values (%s, %s)", (1, peacesign_unicode)) # This should be encoded in utf8
cur.execute("insert into tblutf (test_id, test_string) values (%s, %s)", (2, peacesign_utf8))
cur.execute("select test_id, test_string from tblutf")
result = cur.fetchall()
# We expect unicode strings back
self.assertEquals([(1, peacesign_unicode), (2, peacesign_unicode)], result)
def testCharsets(self):
aumlaut_unicode = u"\u00e4"
aumlaut_utf8 = "\xc3\xa4"
aumlaut_latin1 = "\xe4"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
password = DB_PASSWD, db = DB_DB,
charset = 'utf8', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tblutf")
cur.execute("create table tblutf (test_mode VARCHAR(32) DEFAULT NULL, test_utf VARCHAR(32) DEFAULT NULL, test_latin1 VARCHAR(32)) ENGINE=MyISAM DEFAULT CHARSET=utf8")
# We insert the same character using two different encodings
cur.execute("set names utf8")
cur.execute("insert into tblutf (test_mode, test_utf, test_latin1) values ('utf8', _utf8'" + aumlaut_utf8 + "', _latin1'" + aumlaut_latin1 + "')")
cur.execute("set names latin1")
cur.execute("insert into tblutf (test_mode, test_utf, test_latin1) values ('latin1', _utf8'" + aumlaut_utf8 + "', _latin1'" + aumlaut_latin1 + "')")
# We expect the driver to always give us unicode strings back
expected = [(u"utf8", aumlaut_unicode, aumlaut_unicode), (u"latin1", aumlaut_unicode, aumlaut_unicode)]
# Fetch and test with different charsets
for charset in ("latin1", "utf8", "cp1250"):
cur.execute("set names " + charset)
cur.execute("select test_mode, test_utf, test_latin1 from tblutf")
result = cur.fetchall()
self.assertEquals(result, expected)
+ def testBinary(self):
+ peacesign_binary = "\xe2\x98\xae"
+ peacesign_binary2 = "\xe2\x98\xae" * 10
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ password = DB_PASSWD, db = DB_DB,
+ charset = 'latin-1', use_unicode = True)
+
+ cur = cnn.cursor()
+ cur.execute("drop table if exists tblbin")
+ cur.execute("create table tblbin (test_id int(11) DEFAULT NULL, test_binary VARBINARY(30) DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=utf8")
+
+ cur.execute("insert into tblbin (test_id, test_binary) values (%s, %s)", (1, peacesign_binary))
+ cur.execute("insert into tblbin (test_id, test_binary) values (%s, %s)", (2, peacesign_binary2))
+
+ cur.execute("select test_id, test_binary from tblbin")
+ result = cur.fetchall()
+
+ # We expect binary strings back
+ self.assertEquals([(1, peacesign_binary),(2, peacesign_binary2)], result)
+
+ def testBlob(self):
+ peacesign_binary = "\xe2\x98\xae"
+ peacesign_binary2 = "\xe2\x98\xae" * 1024
+
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ password = DB_PASSWD, db = DB_DB,
+ charset = 'latin-1', use_unicode = True)
+
+ cur = cnn.cursor()
+ cur.execute("drop table if exists tblblob")
+ cur.execute("create table tblblob (test_id int(11) DEFAULT NULL, test_blob BLOB DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=utf8")
+
+ cur.execute("insert into tblblob (test_id, test_blob) values (%s, %s)", (1, peacesign_binary))
+ cur.execute("insert into tblblob (test_id, test_blob) values (%s, %s)", (2, peacesign_binary2))
+
+ cur.execute("select test_id, test_blob from tblblob")
+ result = cur.fetchall()
+
+ # We expect binary strings back
+ self.assertEquals([(1, peacesign_binary),(2, peacesign_binary2)], result)
if __name__ == '__main__':
unittest.main()
|
mthurlin/gevent-MySQL
|
162d098e3d4f8d97d27d3d6676f3c0afb38d9f16
|
Avoid decoding binary strings.
|
diff --git a/lib/geventmysql/geventmysql._mysql.pyx b/lib/geventmysql/geventmysql._mysql.pyx
index 6c2465d..0ea1385 100644
--- a/lib/geventmysql/geventmysql._mysql.pyx
+++ b/lib/geventmysql/geventmysql._mysql.pyx
@@ -486,712 +486,713 @@ cdef class Buffer:
s = self._buff[self._position] + (self._buff[self._position + 1] << 8)
self._position = self._position + 2
return s
cdef object _read_bytes(self, int n):
"""reads n bytes from buffer, updates position, and returns bytes as a python string"""
if n > (self._limit - self._position):
raise BufferUnderflowError()
else:
s = PyString_FromStringAndSize(<char *>(self._buff + self._position), n)
self._position = self._position + n
return s
def read_bytes(self, int n = -1):
"""Reads n bytes from buffer, updates position, and returns bytes as a python string,
if there are no n bytes available, a :exc:`BufferUnderflowError` is raised."""
if n == -1:
return self._read_bytes(self._limit - self._position)
else:
return self._read_bytes(n)
def read_bytes_until(self, int b):
"""Reads bytes until character b is found, or end of buffer is reached in which case it will raise a :exc:`BufferUnderflowError`."""
cdef int n, maxlen
cdef char *zpos, *start
if b < 0 or b > 255:
raise BufferInvalidArgumentError("b must be in range [0..255]")
maxlen = self._limit - self._position
start = <char *>(self._buff + self._position)
zpos = <char *>(memchr(start, b, maxlen))
if zpos == NULL:
raise BufferUnderflowError()
else:
n = zpos - start
s = PyString_FromStringAndSize(start, n)
self._position = self._position + n + 1
return s
def read_line(self, int include_separator = 0):
"""Reads a single line of bytes from the buffer where the end of the line is indicated by either 'LF' or 'CRLF'.
The line will be returned as a string not including the line-separator. Optionally *include_separator* can be specified
to make the method also return the line-separator."""
cdef int n, maxlen
cdef char *zpos, *start
maxlen = self._limit - self._position
start = <char *>(self._buff + self._position)
zpos = <char *>(memchr(start, 10, maxlen))
if maxlen == 0:
raise BufferUnderflowError()
if zpos == NULL:
raise BufferUnderflowError()
n = zpos - start
if self._buff[self._position + n - 1] == 13: #\r\n
if include_separator:
s = PyString_FromStringAndSize(start, n + 1)
self._position = self._position + n + 1
else:
s = PyString_FromStringAndSize(start, n - 1)
self._position = self._position + n + 1
else: #\n
if include_separator:
s = PyString_FromStringAndSize(start, n + 1)
self._position = self._position + n + 1
else:
s = PyString_FromStringAndSize(start, n)
self._position = self._position + n + 1
return s
def write_bytes(self, s):
"""Writes a number of bytes given by the python string s to the buffer and updates position. Raises
:exc:`BufferOverflowError` if you try to write beyond the current :attr:`limit`."""
cdef char *b
cdef Py_ssize_t n
PyString_AsStringAndSize(s, &b, &n)
if n > (self._limit - self._position):
raise BufferOverflowError()
else:
memcpy(self._buff + self._position, b, n)
self._position = self._position + n
return n
def write_buffer(self, Buffer other):
"""writes available bytes from other buffer to this buffer"""
self.write_bytes(other.read_bytes(-1)) #TODO use copy
cdef int _write_byte(self, unsigned int b) except -1:
"""writes a single byte to the buffer and updates position"""
if self._position + 1 <= self._limit:
self._buff[self._position] = b
self._position = self._position + 1
return 1
else:
raise BufferOverflowError()
def write_byte(self, unsigned int b):
"""writes a single byte to the buffer and updates position"""
return self._write_byte(b)
def write_int(self, unsigned int i):
"""writes a 32 bit integer to the buffer and updates position (little-endian)"""
if self._position + 4 <= self._limit:
self._buff[self._position + 0] = (i >> 0) & 0xFF
self._buff[self._position + 1] = (i >> 8) & 0xFF
self._buff[self._position + 2] = (i >> 16) & 0xFF
self._buff[self._position + 3] = (i >> 24) & 0xFF
self._position = self._position + 4
return 4
else:
raise BufferOverflowError()
def write_short(self, unsigned int i):
"""writes a 16 bit integer to the buffer and updates position (little-endian)"""
if self._position + 2 <= self._limit:
self._buff[self._position + 0] = (i >> 0) & 0xFF
self._buff[self._position + 1] = (i >> 8) & 0xFF
self._position = self._position + 2
return 2
else:
raise BufferOverflowError()
def hex_dump(self, out = None):
highlight1 = "\033[34m"
highlight2 = "\033[32m"
default = "\033[0m"
if out is None: out = sys.stdout
import string
out.write('<concurrence.io.Buffer id=%x, position=%d, limit=%d, capacity=%d>\n' % (id(self), self.position, self.limit, self._capacity))
printable = set(string.printable)
whitespace = set(string.whitespace)
x = 0
s1 = []
s2 = []
while x < self._capacity:
v = self[x]
if x < self.position:
s1.append('%s%02x%s' % (highlight1, v, default))
elif x < self.limit:
s1.append('%s%02x%s' % (highlight2, v, default))
else:
s1.append('%02x' % v)
c = chr(v)
if c in printable and not c in whitespace:
s2.append(c)
else:
s2.append('.')
x += 1
if x % 16 == 0:
out.write('%04x' % (x - 16) + ' ' + ' '.join(s1[:8]) + ' ' + ' '.join(s1[8:]) + ' ' + ''.join(s2[:8]) + ' ' + (''.join(s2[8:]) + '\n'))
s1 = []
s2 = []
out.flush()
def __repr__(self):
import cStringIO
sio = cStringIO.StringIO()
self.hex_dump(sio)
return sio.getvalue()
def __str__(self):
return repr(self)
class PacketReadError(Exception):
pass
MAX_PACKET_SIZE = 4 * 1024 * 1024 #4mb
cdef class PacketReader:
cdef int oversize
cdef readonly int number
cdef readonly int length #length in bytes of the current packet in the buffer
cdef readonly int command
cdef readonly int start #position of start of packet in buffer
cdef readonly int end
cdef public object encoding
cdef public object use_unicode
cdef readonly Buffer buffer #the current read buffer
cdef readonly Buffer packet #the current packet (could be normal or oversize packet):
cdef Buffer normal_packet #the normal packet
cdef Buffer oversize_packet #if we are reading an oversize packet, this is where we keep the data
def __init__(self, Buffer buffer):
self.oversize = 0
self.encoding = None
self.use_unicode = False
self.buffer = buffer
self.normal_packet = buffer.duplicate()
self.oversize_packet = buffer.duplicate()
self.packet = self.normal_packet
cdef int _read(self) except PACKET_READ_ERROR:
"""this method scans the buffer for packets, reporting the start, end of packet
or whether the packet in the buffer is incomplete and more data is needed"""
cdef int r
cdef Buffer buffer
buffer = self.buffer
self.command = 0
self.start = 0
self.end = 0
r = buffer._remaining()
if self.oversize == 0: #normal packet reading mode
#print 'normal mode', r
if r < 4:
#print 'rem < 4 return'
return PACKET_READ_NONE #incomplete header
#these four reads will always succeed because r >= 4
self.length = (buffer._read_byte()) + (buffer._read_byte() << 8) + (buffer._read_byte() << 16) + 4
self.number = buffer._read_byte()
if self.length <= r:
#a complete packet sitting in buffer
self.start = buffer._position - 4
self.end = self.start + self.length
self.command = buffer._buff[buffer._position]
buffer._skip(self.length - 4) #skip rest of packet
#print 'single packet recvd', self.length, self.command
if self.length < r:
return PACKET_READ_TRUE | PACKET_READ_START | PACKET_READ_END | PACKET_READ_MORE
else:
return PACKET_READ_TRUE | PACKET_READ_START | PACKET_READ_END
#return self.length < r #if the length was smaller, there is more; otherwise length == r and the buffer is empty
else:
#print 'incomplete packet in buffer', buffer._position, self.length
if self.length > buffer._capacity:
#print 'start of oversize packet', self.length
self.start = buffer._position - 4
self.end = buffer._limit
self.command = buffer._buff[buffer._position]
buffer._position = buffer._limit #skip rest of buffer
self.oversize = self.length - r #bytes of the oversize packet left to read
return PACKET_READ_TRUE | PACKET_READ_START
else:
#print 'small incomplete packet', self.length, buffer._position
buffer._skip(-4) #rewind to start of incomplete packet
return PACKET_READ_NONE #incomplete packet
else: #busy reading an oversized packet
#print 'oversize mode', r, self.oversize, buffer.position, buffer.limit
self.start = buffer._position
if self.oversize < r:
buffer._skip(self.oversize) #skip rest of buffer
self.oversize = 0
else:
buffer._skip(r) #skip rest of buffer or remaining oversize
self.oversize = self.oversize - r
self.end = buffer._position
if self.oversize == 0:
#print 'oversize packet recvd'
return PACKET_READ_TRUE | PACKET_READ_END | PACKET_READ_MORE
else:
#print 'some data of oversize packet recvd'
return PACKET_READ_TRUE
def read(self):
return self._read()
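#The framing handled by _read above is the standard MySQL packet header: a
#3-byte little-endian payload length followed by a 1-byte sequence number.
#A self-contained sketch of just the header arithmetic (the +4 mirrors _read,
#which counts the header itself in self.length):
def parse_packet_header(buf):
    """Return (total_length, number) for the packet at the start of buf,
    or None if fewer than 4 header bytes are available (sketch)."""
    if len(buf) < 4:
        return None #incomplete header; caller must read more (PACKET_READ_NONE)
    length = ord(buf[0]) + (ord(buf[1]) << 8) + (ord(buf[2]) << 16) + 4
    return length, ord(buf[3])
assert parse_packet_header('\x01\x00\x00\x00\x03') == (5, 0)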
cdef int _read_packet(self) except PACKET_READ_ERROR:
cdef int r, size, max_packet_size
r = self._read()
if r & PACKET_READ_TRUE:
if (r & PACKET_READ_START) and (r & PACKET_READ_END):
#normal sized packet, read entirely
self.packet = self.normal_packet
self.packet._position, self.packet._limit = self.start + 4, self.end
elif (r & PACKET_READ_START) and not (r & PACKET_READ_END):
#print 'start of oversize', self.end - self.start, self.length
#first create oversize_packet if necessary:
if self.oversize_packet._capacity < self.length:
#find first size multiple of 2 that will fit the oversize packet
size = self.buffer._capacity
while size < self.length:
size = size * 2
if size >= MAX_PACKET_SIZE:
raise PacketReadError("oversized packet will not fit in MAX_PACKET_SIZE, length: %d, MAX_PACKET_SIZE: %d" % (self.length, MAX_PACKET_SIZE))
#print 'creating oversize packet', size
self.oversize_packet = Buffer(size)
self.oversize_packet.copy(self.buffer, self.start, 0, self.end - self.start)
self.packet = self.oversize_packet
self.packet._position, self.packet._limit = 4, self.end - self.start
else:
#end or middle part of oversized packet
self.oversize_packet.copy(self.buffer, self.start, self.oversize_packet._limit, self.end - self.start)
self.oversize_packet._limit = self.oversize_packet._limit + (self.end - self.start)
return r
def read_packet(self):
return self._read_packet()
cdef _read_length_coded_binary(self):
cdef unsigned int n, v
cdef unsigned long long vw
cdef Buffer packet
packet = self.packet
if packet._position + 1 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position]
if n < 251:
packet._position = packet._position + 1
return n
elif n == 251:
assert False, 'unexpected, only valid for row data packet'
elif n == 252:
#16 bit word
if packet._position + 3 > packet._limit: raise BufferUnderflowError()
v = packet._buff[packet._position + 1] | ((packet._buff[packet._position + 2]) << 8)
packet._position = packet._position + 3
return v
elif n == 253:
#24 bit word
if packet._position + 4 > packet._limit: raise BufferUnderflowError()
v = packet._buff[packet._position + 1] | ((packet._buff[packet._position + 2]) << 8) | ((packet._buff[packet._position + 3]) << 16)
packet._position = packet._position + 4
return v
else:
#64 bit word
if packet._position + 9 > packet._limit: raise BufferUnderflowError()
vw = 0
vw |= (<unsigned long long>packet._buff[packet._position + 1]) << 0
vw |= (<unsigned long long>packet._buff[packet._position + 2]) << 8
vw |= (<unsigned long long>packet._buff[packet._position + 3]) << 16
vw |= (<unsigned long long>packet._buff[packet._position + 4]) << 24
vw |= (<unsigned long long>packet._buff[packet._position + 5]) << 32
vw |= (<unsigned long long>packet._buff[packet._position + 6]) << 40
vw |= (<unsigned long long>packet._buff[packet._position + 7]) << 48
vw |= (<unsigned long long>packet._buff[packet._position + 8]) << 56
packet._position = packet._position + 9
return vw
def read_length_coded_binary(self):
return self._read_length_coded_binary()
cdef _read_bytes_length_coded(self):
cdef unsigned int n, w
cdef Buffer packet
packet = self.packet
if packet._position + 1 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position]
w = 1
if n >= 251:
if n == 251:
packet._position = packet._position + 1
return None
elif n == 252:
if packet._position + 2 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position + 1] | ((packet._buff[packet._position + 2]) << 8)
w = 3
elif n == 253:
#24 bit word
if packet._position + 4 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position + 1] | ((packet._buff[packet._position + 2]) << 8) | ((packet._buff[packet._position + 3]) << 16)
w = 4
elif n == 254:
#64 bit word
if packet._position + 9 > packet._limit: raise BufferUnderflowError()
n = 0
n |= (<unsigned long long>packet._buff[packet._position + 1]) << 0
n |= (<unsigned long long>packet._buff[packet._position + 2]) << 8
n |= (<unsigned long long>packet._buff[packet._position + 3]) << 16
n |= (<unsigned long long>packet._buff[packet._position + 4]) << 24
n |= (<unsigned long long>packet._buff[packet._position + 5]) << 32
n |= (<unsigned long long>packet._buff[packet._position + 6]) << 40
n |= (<unsigned long long>packet._buff[packet._position + 7]) << 48
n |= (<unsigned long long>packet._buff[packet._position + 8]) << 56
w = 9
else:
assert False, 'not implemented yet, n: %02x' % n
if (n + w) > (packet._limit - packet._position):
raise BufferUnderflowError()
packet._position = packet._position + w
s = PyString_FromStringAndSize(<char *>(packet._buff + packet._position), n)
packet._position = packet._position + n
return s
def read_bytes_length_coded(self):
return self._read_bytes_length_coded()
def read_field_type(self):
cdef int n
cdef Buffer packet
packet = self.packet
n = packet._read_byte()
packet._skip(n) #catalog
n = packet._read_byte()
packet._skip(n) #db
n = packet._read_byte()
packet._skip(n) #table
n = packet._read_byte()
packet._skip(n) #org_table
n = packet._read_byte()
name = packet._read_bytes(n)
n = packet._read_byte()
packet._skip(n) #org_name
packet._skip(1)
charsetnr = packet._read_bytes(2)
n = packet._skip(4)
n = packet.read_byte() #type
return (name, n, charsetnr)
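#read_field_type above walks the column-definition packet: six length-prefixed
#strings (catalog, db, table, org_table, name, org_name), a filler byte, two
#charset-number bytes, four skipped bytes, then the type byte. A plain-Python
#sketch of the same walk over a raw payload (assuming, as the code above does,
#one-byte length prefixes for the strings):
def parse_field_packet(payload):
    """Return (name, type_code, charsetnr) mirroring read_field_type (sketch)."""
    pos = 0
    strings = []
    for _ in range(6): #catalog, db, table, org_table, name, org_name
        n = ord(payload[pos]); pos += 1
        strings.append(payload[pos:pos + n]); pos += n
    name = strings[4]
    pos += 1 #filler
    charsetnr = payload[pos:pos + 2]; pos += 2 #little-endian collation id
    pos += 4 #length field, skipped
    return name, ord(payload[pos]), charsetnr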
cdef _string_to_int(self, object s):
if s == None:
return None
else:
return int(s)
cdef _string_to_float(self, object s):
if s == None:
return None
else:
return float(s)
cdef _read_datestring(self):
cdef unsigned int n
cdef Buffer packet
packet = self.packet
if packet._position + 1 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position]
if n == 251:
packet._position = packet._position + 1
return None
packet._position = packet._position + 1
s = PyString_FromStringAndSize(<char *>(packet._buff + packet._position), n)
packet._position = packet._position + n
return s
cdef _datestring_to_date(self, object s):
if not s or s == "0000-00-00":
return None
parts = s.split("-")
try:
assert len(parts) == 3
d = datetime.date(*map(int, parts))
except (AssertionError, ValueError):
raise ValueError("Unhandled date format: %r" % (s, ))
return d
cdef _datestring_to_datetime(self, object s):
if not s:
return None
datestring, timestring = s.split(" ")
_date = self._datestring_to_date(datestring)
if _date is None:
return None
parts = timestring.split(":")
try:
assert len(parts) == 3
d = datetime.datetime(_date.year, _date.month, _date.day, *map(int, parts))
except (AssertionError, ValueError):
raise ValueError("Unhandled datetime format: %r" % (s, ))
return d
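#A hedged, standalone restatement of the date converter above; note how the
#all-zero date MySQL uses for "no value" maps to None:
import datetime
def datestring_to_date(s):
    """Equivalent of _datestring_to_date for well-formed input (sketch)."""
    if not s or s == "0000-00-00":
        return None
    return datetime.date(*map(int, s.split("-")))
assert datestring_to_date("2010-02-11") == datetime.date(2010, 2, 11)
assert datestring_to_date("0000-00-00") is None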
cdef int _read_row(self, object row, object fields, int field_count) except PACKET_READ_ERROR:
cdef int i, r
cdef int decode
if self.encoding:
decode = 1
encoding = self.encoding
else:
decode = 0
r = self._read_packet()
if r & PACKET_READ_END: #whole packet recv
if self.packet._buff[self.packet._position] == 0xFE:
return r | PACKET_READ_EOF
else:
i = 0
int_types = INT_TYPES
float_types = FLOAT_TYPES
string_types = STRING_TYPES
date_type = FIELD_TYPE.DATE
datetime_type = FIELD_TYPE.DATETIME
while i < field_count:
t = fields[i][1] #type_code
if t in int_types:
row[i] = self._string_to_int(self._read_bytes_length_coded())
elif t in string_types:
row[i] = self._read_bytes_length_coded()
if row[i] is not None and (self.encoding or self.use_unicode):
bytes = fields[i][2]
nr = ord(bytes[1]) << 8 | ord(bytes[0])
- row[i] = row[i].decode(charset_nr[nr])
+ if charset_nr[nr] != 'binary':
+ row[i] = row[i].decode(charset_nr[nr])
if not self.use_unicode:
row[i] = row[i].encode(self.encoding)
elif t in float_types:
row[i] = self._string_to_float(self._read_bytes_length_coded())
elif t == date_type:
row[i] = self._datestring_to_date(self._read_datestring())
elif t == datetime_type:
row[i] = self._datestring_to_datetime(self._read_datestring())
else:
row[i] = self._read_bytes_length_coded()
i = i + 1
return r
def read_rows(self, object fields, int row_count):
cdef int r, i, field_count
field_count = len(fields)
i = 0
r = 0
rows = []
row = [None] * field_count
add = rows.append
#print "Reading fields", len(fields)
while i < row_count:
r = self._read_row(row, fields, field_count)
if r & PACKET_READ_END:
if r & PACKET_READ_EOF:
break
else:
add(tuple(row))
if not (r & PACKET_READ_MORE):
break
i = i + 1
return r, rows
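#The change in this commit is easiest to see in isolation: the two charset
#bytes stored with each field decide whether a string column is decoded at
#all. A hedged sketch of that decision (charset_nr stands in for the module's
#number-to-name collation mapping; 63 is the 'binary' collation id, as in the
#charset_map in client.py; the re-encode is folded into the non-binary branch):
def decode_field(raw, charsetnr_bytes, charset_nr, use_unicode, encoding=None):
    """Mirror of the decode branch of _read_row (sketch)."""
    nr = ord(charsetnr_bytes[1]) << 8 | ord(charsetnr_bytes[0]) #little-endian
    if charset_nr[nr] != 'binary':
        raw = raw.decode(charset_nr[nr])
        if not use_unicode:
            raw = raw.encode(encoding)
    return raw
assert decode_field('\xe2\x98\xae', '\x3f\x00', {63: 'binary'}, True) == '\xe2\x98\xae'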
cdef enum:
PROXY_STATE_UNDEFINED = -2
PROXY_STATE_ERROR = -1
PROXY_STATE_INIT = 0
PROXY_STATE_READ_AUTH = 1
PROXY_STATE_READ_AUTH_RESULT = 2
PROXY_STATE_READ_AUTH_OLD_PASSWORD = 3
PROXY_STATE_READ_AUTH_OLD_PASSWORD_RESULT = 4
PROXY_STATE_READ_COMMAND = 5
PROXY_STATE_READ_RESULT = 6
PROXY_STATE_READ_RESULT_FIELDS = 7
PROXY_STATE_READ_RESULT_ROWS = 8
PROXY_STATE_READ_RESULT_FIELDS_ONLY = 9
PROXY_STATE_FINISHED = 10
class PROXY_STATE:
UNDEFINED = PROXY_STATE_UNDEFINED
ERROR = PROXY_STATE_ERROR
INIT = PROXY_STATE_INIT
FINISHED = PROXY_STATE_FINISHED
READ_AUTH = PROXY_STATE_READ_AUTH
READ_AUTH_RESULT = PROXY_STATE_READ_AUTH_RESULT
READ_AUTH_OLD_PASSWORD = PROXY_STATE_READ_AUTH_OLD_PASSWORD
READ_AUTH_OLD_PASSWORD_RESULT = PROXY_STATE_READ_AUTH_OLD_PASSWORD_RESULT
READ_COMMAND = PROXY_STATE_READ_COMMAND
READ_RESULT = PROXY_STATE_READ_RESULT
READ_RESULT_FIELDS = PROXY_STATE_READ_RESULT_FIELDS
READ_RESULT_ROWS = PROXY_STATE_READ_RESULT_ROWS
READ_RESULT_FIELDS_ONLY = PROXY_STATE_READ_RESULT_FIELDS_ONLY
SERVER_STATES = set([PROXY_STATE.INIT, PROXY_STATE.READ_AUTH_RESULT, PROXY_STATE.READ_AUTH_OLD_PASSWORD_RESULT,
PROXY_STATE.READ_RESULT, PROXY_STATE.READ_RESULT_FIELDS, PROXY_STATE.READ_RESULT_ROWS,
PROXY_STATE.READ_RESULT_FIELDS_ONLY, PROXY_STATE.FINISHED])
CLIENT_STATES = set([PROXY_STATE.READ_AUTH, PROXY_STATE.READ_AUTH_OLD_PASSWORD, PROXY_STATE.READ_COMMAND])
AUTH_RESULT_STATES = set([PROXY_STATE.READ_AUTH_OLD_PASSWORD_RESULT, PROXY_STATE.READ_AUTH_RESULT])
READ_RESULT_STATES = set([PROXY_STATE.READ_RESULT, PROXY_STATE.READ_RESULT_FIELDS, PROXY_STATE.READ_RESULT_ROWS, PROXY_STATE.READ_RESULT_FIELDS_ONLY])
class ProxyProtocolException(Exception):
pass
cdef class ProxyProtocol:
cdef readonly int state
cdef readonly int number
def __init__(self, initial_state = PROXY_STATE_INIT):
self.reset(initial_state)
def reset(self, int state):
self.state = state
self.number = 0
cdef int _check_number(self, PacketReader reader) except -1:
if self.state == PROXY_STATE_READ_COMMAND:
self.number = 0
if self.number != reader.number:
self.state = PROXY_STATE_ERROR
raise ProxyProtocolException('packet number out of sync')
self.number = self.number + 1
self.number = self.number % 256
def read_server(self, PacketReader reader):
cdef int read_result, prev_state
prev_state = self.state
while 1:
read_result = reader._read()
if read_result & PACKET_READ_START:
self._check_number(reader)
if read_result & PACKET_READ_END: #packet recvd
if self.state == PROXY_STATE_INIT:
#server handshake recvd
#server could have sent an error instead of the initial handshake
self.state = PROXY_STATE_READ_AUTH
elif self.state == PROXY_STATE_READ_AUTH_RESULT:
#server auth result recvd
if reader.command == 0xFE:
self.state = PROXY_STATE_READ_AUTH_OLD_PASSWORD
elif reader.command == 0x00: #OK
self.state = PROXY_STATE_READ_COMMAND
elif self.state == PROXY_STATE_READ_AUTH_OLD_PASSWORD_RESULT:
#server auth old password result recvd
self.state = PROXY_STATE_READ_COMMAND
elif self.state == PROXY_STATE_READ_RESULT:
if reader.command == 0x00: #no result set but ok
#server result recvd OK
self.state = PROXY_STATE_READ_COMMAND
elif reader.command == 0xFF:
#no result set error
self.state = PROXY_STATE_READ_COMMAND
else:
#server result recv result set header
self.state = PROXY_STATE_READ_RESULT_FIELDS
elif self.state == PROXY_STATE_READ_RESULT_FIELDS:
if reader.command == 0xFE: #EOF for fields
#server result fields recvd
self.state = PROXY_STATE_READ_RESULT_ROWS
elif self.state == PROXY_STATE_READ_RESULT_ROWS:
if reader.command == 0xFE: #EOF for rows
#server result rows recvd
self.state = PROXY_STATE_READ_COMMAND
elif self.state == PROXY_STATE_READ_RESULT_FIELDS_ONLY:
if reader.command == 0xFE: #EOF for fields
#server result fields only recvd
self.state = PROXY_STATE_READ_COMMAND
else:
self.state = PROXY_STATE_ERROR
raise ProxyProtocolException('unexpected packet')
if self.state != prev_state:
break
if not (read_result & PACKET_READ_MORE):
break
return read_result, self.state, prev_state
def read_client(self, PacketReader reader):
cdef int read_result, prev_state
prev_state = self.state
while 1:
read_result = reader._read()
if read_result & PACKET_READ_START:
self._check_number(reader)
if read_result & PACKET_READ_END: #packet recvd
if self.state == PROXY_STATE_READ_AUTH:
#client auth recvd
self.state = PROXY_STATE_READ_AUTH_RESULT
elif self.state == PROXY_STATE_READ_AUTH_OLD_PASSWORD:
#client auth old pwd recvd
self.state = PROXY_STATE_READ_AUTH_OLD_PASSWORD_RESULT
elif self.state == PROXY_STATE_READ_COMMAND:
#client cmd recvd
if reader.command == COMMAND_LIST: #list cmd
self.state = PROXY_STATE_READ_RESULT_FIELDS_ONLY
elif reader.command == COMMAND_QUIT: #COM_QUIT
self.state = PROXY_STATE_FINISHED
else:
self.state = PROXY_STATE_READ_RESULT
else:
self.state = PROXY_STATE_ERROR
raise ProxyProtocolException('unexpected packet')
if self.state != prev_state:
break
if not (read_result & PACKET_READ_MORE):
break
return read_result, self.state, prev_state
|
mthurlin/gevent-MySQL
|
4380fc7961711f4f8a1588a99374ed92a49daa2b
|
Windows-specific errors are only checked when running on win32.
|
diff --git a/lib/geventmysql/client.py b/lib/geventmysql/client.py
index 2702603..1dab43c 100644
--- a/lib/geventmysql/client.py
+++ b/lib/geventmysql/client.py
@@ -1,397 +1,404 @@
# Copyright (C) 2009, Hyves (Startphone Ltd.)
#
# This module is part of the Concurrence Framework and is released under
# the New BSD License: http://www.opensource.org/licenses/bsd-license.php
#TODO supporting closing a halfread resultset (e.g. automatically read and discard rest)
import errno
from geventmysql._mysql import Buffer
from geventmysql.mysql import BufferedPacketReader, BufferedPacketWriter, PACKET_READ_RESULT, CAPS, COMMAND
import logging
import time
from gevent import socket
import gevent
+import sys
# From query: SHOW COLLATION;
charset_map = {}
charset_map["big5"] = 1
charset_map["dec8"] = 3
charset_map["cp850"] = 4
charset_map["hp8"] = 6
charset_map["koi8r"] = 7
charset_map["latin1"] = 8
charset_map["latin1"] = 8
charset_map["latin2"] = 9
charset_map["swe7"] = 10
charset_map["ascii"] = 11
charset_map["ujis"] = 12
charset_map["sjis"] = 13
charset_map["hebrew"] = 16
charset_map["tis620"] = 18
charset_map["euckr"] = 19
charset_map["koi8u"] = 22
charset_map["gb2312"] = 24
charset_map["greek"] = 25
charset_map["cp1250"] = 26
charset_map["gbk"] = 28
charset_map["latin5"] = 30
charset_map["armscii8"] = 32
charset_map["utf8"] = 33
charset_map["utf8"] = 33
charset_map["ucs2"] = 35
charset_map["cp866"] = 36
charset_map["keybcs2"] = 37
charset_map["macce"] = 38
charset_map["macroman"] = 39
charset_map["cp852"] = 40
charset_map["latin7"] = 41
charset_map["cp1251"] = 51
charset_map["cp1256"] = 57
charset_map["cp1257"] = 59
charset_map["binary"] = 63
charset_map["geostd8"] = 92
charset_map["cp932"] = 95
charset_map["eucjpms"] = 97
try:
#python 2.6
import hashlib
SHA = hashlib.sha1
except ImportError:
#python 2.5
import sha
SHA = sha.new
#import time
class ClientError(Exception):
@classmethod
def from_error_packet(cls, packet, skip = 8):
packet.skip(skip)
return cls(packet.read_bytes(packet.remaining))
class ClientLoginError(ClientError): pass
class ClientCommandError(ClientError): pass
class ClientProgrammingError(ClientError): pass
class ResultSet(object):
"""Represents the current resultset being read from a Connection.
The resultset implements an iterator over rows. A Resultset must
be iterated entirely and closed explicitly."""
STATE_INIT = 0
STATE_OPEN = 1
STATE_EOF = 2
STATE_CLOSED = 3
def __init__(self, connection, field_count):
self.state = self.STATE_INIT
self.connection = connection
self.fields = connection.reader.read_fields(field_count)
self.state = self.STATE_OPEN
def __iter__(self):
assert self.state == self.STATE_OPEN, "cannot iterate a resultset when it is not open"
for row in self.connection.reader.read_rows(self.fields):
yield row
self.state = self.STATE_EOF
def close(self, connection_close = False):
"""Closes the current resultset. Make sure you have iterated over all rows before closing it!"""
#print 'close on ResultSet', id(self.connection)
if self.state != self.STATE_EOF and not connection_close:
raise ClientProgrammingError("you can only close a resultset when it was read entirely!")
connection = self.connection
del self.connection
del self.fields
connection._close_current_resultset(self)
self.state = self.STATE_CLOSED
class Connection(object):
"""Represents a single connection to a MySQL Database host."""
STATE_ERROR = -1
STATE_INIT = 0
STATE_CONNECTING = 1
STATE_CONNECTED = 2
STATE_CLOSING = 3
STATE_CLOSED = 4
def __init__(self):
self.state = self.STATE_INIT
self.buffer = Buffer(1024 * 16)
self.socket = None
self.reader = None
self.writer = None
self._time_command = False #whether to keep timing stats on a cmd
self._command_time = -1
self._incommand = False
self.current_resultset = None
def _scramble(self, password, seed):
"""taken from java jdbc driver, scrambles the password using the given seed
according to the mysql login protocol"""
stage1 = SHA(password).digest()
stage2 = SHA(stage1).digest()
md = SHA()
md.update(seed)
md.update(stage2)
#i love python :-):
return ''.join(map(chr, [x ^ ord(stage1[i]) for i, x in enumerate(map(ord, md.digest()))]))
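#The scramble above is the standard MySQL 4.1 authentication token:
#SHA1(seed + SHA1(SHA1(password))) XORed byte-wise with SHA1(password).
#A standalone Python 2 equivalent for clarity:
import hashlib
def scramble_41(password, seed):
    """Equivalent of _scramble above (sketch)."""
    stage1 = hashlib.sha1(password).digest()
    stage2 = hashlib.sha1(stage1).digest()
    token = hashlib.sha1(seed + stage2).digest()
    return ''.join(chr(ord(t) ^ ord(s)) for t, s in zip(token, stage1))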
def _handshake(self, user, password, database, charset):
"""performs the mysql login handshake"""
#init buffer for reading (both pos and lim = 0)
self.buffer.clear()
self.buffer.flip()
#read server welcome
packet = self.reader.read_packet()
self.protocol_version = packet.read_byte() #normally this would be 10 (0xa)
if self.protocol_version == 0xff:
#error on initial greeting, possibly a 'too many connections' error
raise ClientLoginError.from_error_packet(packet, skip = 2)
elif self.protocol_version == 0xa:
pass #expected
else:
assert False, "Unexpected protocol version %02x" % self.protocol_version
self.server_version = packet.read_bytes_until(0)
packet.skip(4) #thread_id
scramble_buff = packet.read_bytes(8)
packet.skip(1) #filler
server_caps = packet.read_short()
#CAPS.dbg(server_caps)
if not server_caps & CAPS.PROTOCOL_41:
assert False, "<4.1 auth not supported"
server_language = packet.read_byte()
server_status = packet.read_short()
packet.skip(13) #filler
if packet.remaining:
scramble_buff += packet.read_bytes_until(0)
else:
assert False, "<4.1 auth not supported"
client_caps = server_caps
#always turn off compression
client_caps &= ~CAPS.COMPRESS
client_caps &= ~CAPS.NO_SCHEMA
if not server_caps & CAPS.CONNECT_WITH_DB and database:
assert False, "initial db given but not supported by server"
if server_caps & CAPS.CONNECT_WITH_DB and not database:
client_caps &= ~CAPS.CONNECT_WITH_DB
#build and write our answer to the initial handshake packet
self.writer.clear()
self.writer.start()
self.writer.write_int(client_caps)
self.writer.write_int(1024 * 1024 * 32) #32mb max packet
if charset:
self.writer.write_byte(charset_map[charset.replace("-", "")])
else:
self.writer.write_byte(server_language)
self.writer.write_bytes('\0' * 23) #filler
self.writer.write_bytes(user + '\0')
if password:
self.writer.write_byte(20)
self.writer.write_bytes(self._scramble(password, scramble_buff))
else:
self.writer.write_byte(0)
if database:
self.writer.write_bytes(database + '\0')
self.writer.finish(1)
self.writer.flush()
#read final answer from server
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
if result == 0xff:
raise ClientLoginError.from_error_packet(packet)
elif result == 0xfe:
assert False, "old password handshake not implemented"
def _close_current_resultset(self, resultset):
assert resultset == self.current_resultset
self.current_resultset = None
def _send_command(self, cmd, cmd_text):
"""sends a command with the given text"""
#self.log.debug('cmd %s %s', cmd, cmd_text)
#note: we are not using the normal writer.start/finish here, because the cmd
#might not fit in the buffer, causing flushes in write_string; in that case
#'finish' would not be able to go back to the packet header to write the length
self.writer.clear()
self.writer.write_header(len(cmd_text) + 1 + 4, 0) #1 is len of cmd, 4 is len of header, 0 is packet number
self.writer.write_byte(cmd)
self.writer.write_bytes(cmd_text)
self.writer.flush()
def _close(self):
#self.log.debug("close mysql client %s", id(self))
try:
self.state = self.STATE_CLOSING
if self.current_resultset:
self.current_resultset.close(True)
self.socket.close()
self.state = self.STATE_CLOSED
except:
self.state = self.STATE_ERROR
raise
def connect(self, host = "localhost", port = 3306, user = "", password = "", db = "", autocommit = None, charset = None, use_unicode=False):
"""connects to the given host and port with user and password"""
#self.log.debug("connect mysql client %s %s %s %s %s", id(self), host, port, user, password)
try:
#parse addresses of form str <host:port>
assert type(host) == str, "make sure host is a string"
if host[0] == '/': #assume unix domain socket
addr = host
elif ':' in host:
host, port = host.split(':')
port = int(port)
addr = (host, port)
else:
addr = (host, port)
assert self.state == self.STATE_INIT, "make sure connection is not already connected or closed"
self.state = self.STATE_CONNECTING
self.socket = socket.create_connection(addr)
self.reader = BufferedPacketReader(self.socket, self.buffer)
self.writer = BufferedPacketWriter(self.socket, self.buffer)
self._handshake(user, password, db, charset)
#handshake complete client can now send commands
self.state = self.STATE_CONNECTED
if autocommit == False:
self.set_autocommit(False)
elif autocommit == True:
self.set_autocommit(True)
else:
pass #whatever is the default of the db (ON in the case of mysql)
if charset is not None:
self.set_charset(charset)
self.set_use_unicode(use_unicode)
return self
except gevent.Timeout:
self.state = self.STATE_INIT
raise
except ClientLoginError:
self.state = self.STATE_INIT
raise
except:
self.state = self.STATE_ERROR
raise
def close(self):
"""close this connection"""
assert self.is_connected(), "make sure connection is connected before closing"
if self._incommand != False: assert False, "cannot close while still in a command"
self._close()
def command(self, cmd, cmd_text):
"""sends a COM_XXX command with the given text and possibly return a resultset (select)"""
#print 'command', cmd, repr(cmd_text), type(cmd_text)
assert type(cmd_text) == str #as opposed to unicode
assert self.is_connected(), "make sure connection is connected before query"
if self._incommand != False: assert False, "overlapped commands not supported"
if self.current_resultset: assert False, "overlapped commands not supported, please read the previous resultset and close it first"
try:
self._incommand = True
if self._time_command:
start_time = time.time()
self._send_command(cmd, cmd_text)
#read result, expect 1 of OK, ERROR or result set header
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
#print 'res', result
if self._time_command:
end_time = time.time()
self._command_time = end_time - start_time
if result == 0x00:
#OK, return (affected rows, last row id)
rowcount = self.reader.read_length_coded_binary()
lastrowid = self.reader.read_length_coded_binary()
return (rowcount, lastrowid)
elif result == 0xff:
raise ClientCommandError.from_error_packet(packet)
else: #result set
self.current_resultset = ResultSet(self, result)
return self.current_resultset
except socket.error, e:
(errorcode, errorstring) = e
- if errorcode in [errno.ECONNABORTED, errno.ECONNREFUSED, errno.ECONNRESET, errno.EPIPE, errno.WSAECONNABORTED]:
+ if errorcode in [errno.ECONNABORTED, errno.ECONNREFUSED, errno.ECONNRESET, errno.EPIPE]:
self._incommand = False
self.close()
+
+ if sys.platform == "win32":
+ if errorcode in [errno.WSAECONNABORTED]:
+ self._incommand = False
+ self.close()
+
raise
finally:
self._incommand = False
def is_connected(self):
return self.state == self.STATE_CONNECTED
def query(self, cmd_text):
"""Sends a COM_QUERY command with the given text and return a resultset (select)"""
return self.command(COMMAND.QUERY, cmd_text)
def init_db(self, cmd_text):
"""Sends a COM_INIT command with the given text"""
return self.command(COMMAND.INITDB, cmd_text)
def set_autocommit(self, commit):
"""Sets autocommit setting for this connection. True = on, False = off"""
self.command(COMMAND.QUERY, "SET AUTOCOMMIT = %s" % ('1' if commit else '0'))
def commit(self):
"""Commits this connection"""
self.command(COMMAND.QUERY, "COMMIT")
def rollback(self):
"""Issues a rollback on this connection"""
self.command(COMMAND.QUERY, "ROLLBACK")
def set_charset(self, charset):
"""Sets the charset for this connections (used to decode string fields into unicode strings)"""
self.reader.reader.encoding = charset
def set_use_unicode(self, use_unicode):
self.reader.reader.use_unicode = use_unicode
def set_time_command(self, time_command):
self._time_command = time_command
def get_command_time(self):
return self._command_time
Connection.log = logging.getLogger(Connection.__name__)
def connect(*args, **kwargs):
return Connection().connect(*args, **kwargs)
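Putting the pieces together, typical use of this client module looks as follows. This is a hedged usage sketch: the host, credentials and table are placeholders, and the contract comes from ResultSet above (a select must be iterated entirely and closed explicitly before the next command).
from geventmysql import client
cnn = client.connect(host='127.0.0.1:3306', user='test', password='secret', db='test')
rs = cnn.query("select test_id from tbltest") #selects return a ResultSet
for row in rs: #iterate it entirely...
    print row
rs.close() #...then close it explicitly
rowcount, lastrowid = cnn.query("insert into tbltest (test_id) values (1)") #OK packets return (rowcount, lastrowid)
cnn.close()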
|
mthurlin/gevent-MySQL
|
c68dfa247f5b7c9f9a673b26a8e93e02033cb03d
|
Make sure host in Connection.connect is a str; otherwise addr isn't set.
|
diff --git a/lib/geventmysql/client.py b/lib/geventmysql/client.py
index af94979..2702603 100644
--- a/lib/geventmysql/client.py
+++ b/lib/geventmysql/client.py
@@ -1,397 +1,397 @@
# Copyright (C) 2009, Hyves (Startphone Ltd.)
#
# This module is part of the Concurrence Framework and is released under
# the New BSD License: http://www.opensource.org/licenses/bsd-license.php
#TODO supporting closing a halfread resultset (e.g. automatically read and discard rest)
import errno
from geventmysql._mysql import Buffer
from geventmysql.mysql import BufferedPacketReader, BufferedPacketWriter, PACKET_READ_RESULT, CAPS, COMMAND
import logging
import time
from gevent import socket
import gevent
# From query: SHOW COLLATION;
charset_map = {}
charset_map["big5"] = 1
charset_map["dec8"] = 3
charset_map["cp850"] = 4
charset_map["hp8"] = 6
charset_map["koi8r"] = 7
charset_map["latin1"] = 8
charset_map["latin1"] = 8
charset_map["latin2"] = 9
charset_map["swe7"] = 10
charset_map["ascii"] = 11
charset_map["ujis"] = 12
charset_map["sjis"] = 13
charset_map["hebrew"] = 16
charset_map["tis620"] = 18
charset_map["euckr"] = 19
charset_map["koi8u"] = 22
charset_map["gb2312"] = 24
charset_map["greek"] = 25
charset_map["cp1250"] = 26
charset_map["gbk"] = 28
charset_map["latin5"] = 30
charset_map["armscii8"] = 32
charset_map["utf8"] = 33
charset_map["utf8"] = 33
charset_map["ucs2"] = 35
charset_map["cp866"] = 36
charset_map["keybcs2"] = 37
charset_map["macce"] = 38
charset_map["macroman"] = 39
charset_map["cp852"] = 40
charset_map["latin7"] = 41
charset_map["cp1251"] = 51
charset_map["cp1256"] = 57
charset_map["cp1257"] = 59
charset_map["binary"] = 63
charset_map["geostd8"] = 92
charset_map["cp932"] = 95
charset_map["eucjpms"] = 97
try:
#python 2.6
import hashlib
SHA = hashlib.sha1
except ImportError:
#python 2.5
import sha
SHA = sha.new
#import time
class ClientError(Exception):
@classmethod
def from_error_packet(cls, packet, skip = 8):
packet.skip(skip)
return cls(packet.read_bytes(packet.remaining))
class ClientLoginError(ClientError): pass
class ClientCommandError(ClientError): pass
class ClientProgrammingError(ClientError): pass
class ResultSet(object):
"""Represents the current resultset being read from a Connection.
The resultset implements an iterator over rows. A Resultset must
be iterated entirely and closed explicitly."""
STATE_INIT = 0
STATE_OPEN = 1
STATE_EOF = 2
STATE_CLOSED = 3
def __init__(self, connection, field_count):
self.state = self.STATE_INIT
self.connection = connection
self.fields = connection.reader.read_fields(field_count)
self.state = self.STATE_OPEN
def __iter__(self):
assert self.state == self.STATE_OPEN, "cannot iterate a resultset when it is not open"
for row in self.connection.reader.read_rows(self.fields):
yield row
self.state = self.STATE_EOF
def close(self, connection_close = False):
"""Closes the current resultset. Make sure you have iterated over all rows before closing it!"""
#print 'close on ResultSet', id(self.connection)
if self.state != self.STATE_EOF and not connection_close:
raise ClientProgrammingError("you can only close a resultset when it was read entirely!")
connection = self.connection
del self.connection
del self.fields
connection._close_current_resultset(self)
self.state = self.STATE_CLOSED
class Connection(object):
"""Represents a single connection to a MySQL Database host."""
STATE_ERROR = -1
STATE_INIT = 0
STATE_CONNECTING = 1
STATE_CONNECTED = 2
STATE_CLOSING = 3
STATE_CLOSED = 4
def __init__(self):
self.state = self.STATE_INIT
self.buffer = Buffer(1024 * 16)
self.socket = None
self.reader = None
self.writer = None
self._time_command = False #whether to keep timing stats on a cmd
self._command_time = -1
self._incommand = False
self.current_resultset = None
def _scramble(self, password, seed):
"""taken from java jdbc driver, scrambles the password using the given seed
according to the mysql login protocol"""
stage1 = SHA(password).digest()
stage2 = SHA(stage1).digest()
md = SHA()
md.update(seed)
md.update(stage2)
#i love python :-):
return ''.join(map(chr, [x ^ ord(stage1[i]) for i, x in enumerate(map(ord, md.digest()))]))
def _handshake(self, user, password, database, charset):
"""performs the mysql login handshake"""
#init buffer for reading (both pos and lim = 0)
self.buffer.clear()
self.buffer.flip()
#read server welcome
packet = self.reader.read_packet()
self.protocol_version = packet.read_byte() #normally this would be 10 (0xa)
if self.protocol_version == 0xff:
#error on initial greeting, possibly a 'too many connections' error
raise ClientLoginError.from_error_packet(packet, skip = 2)
elif self.protocol_version == 0xa:
pass #expected
else:
assert False, "Unexpected protocol version %02x" % self.protocol_version
self.server_version = packet.read_bytes_until(0)
packet.skip(4) #thread_id
scramble_buff = packet.read_bytes(8)
packet.skip(1) #filler
server_caps = packet.read_short()
#CAPS.dbg(server_caps)
if not server_caps & CAPS.PROTOCOL_41:
assert False, "<4.1 auth not supported"
server_language = packet.read_byte()
server_status = packet.read_short()
packet.skip(13) #filler
if packet.remaining:
scramble_buff += packet.read_bytes_until(0)
else:
assert False, "<4.1 auth not supported"
client_caps = server_caps
#always turn off compression
client_caps &= ~CAPS.COMPRESS
client_caps &= ~CAPS.NO_SCHEMA
if not server_caps & CAPS.CONNECT_WITH_DB and database:
assert False, "initial db given but not supported by server"
if server_caps & CAPS.CONNECT_WITH_DB and not database:
client_caps &= ~CAPS.CONNECT_WITH_DB
#build and write our answer to the initial handshake packet
self.writer.clear()
self.writer.start()
self.writer.write_int(client_caps)
self.writer.write_int(1024 * 1024 * 32) #32mb max packet
if charset:
self.writer.write_byte(charset_map[charset.replace("-", "")])
else:
self.writer.write_byte(server_language)
self.writer.write_bytes('\0' * 23) #filler
self.writer.write_bytes(user + '\0')
if password:
self.writer.write_byte(20)
self.writer.write_bytes(self._scramble(password, scramble_buff))
else:
self.writer.write_byte(0)
if database:
self.writer.write_bytes(database + '\0')
self.writer.finish(1)
self.writer.flush()
#read final answer from server
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
if result == 0xff:
raise ClientLoginError.from_error_packet(packet)
elif result == 0xfe:
assert False, "old password handshake not implemented"
def _close_current_resultset(self, resultset):
assert resultset == self.current_resultset
self.current_resultset = None
def _send_command(self, cmd, cmd_text):
"""sends a command with the given text"""
#self.log.debug('cmd %s %s', cmd, cmd_text)
#note: we are not using the normal writer.start/finish here, because the cmd
#might not fit in the buffer, causing flushes in write_string; in that case 'finish'
#would not be able to go back to the packet header to write the length
self.writer.clear()
self.writer.write_header(len(cmd_text) + 1 + 4, 0) #1 is len of cmd, 4 is len of header, 0 is packet number
self.writer.write_byte(cmd)
self.writer.write_bytes(cmd_text)
self.writer.flush()
def _close(self):
#self.log.debug("close mysql client %s", id(self))
try:
self.state = self.STATE_CLOSING
if self.current_resultset:
self.current_resultset.close(True)
self.socket.close()
self.state = self.STATE_CLOSED
except:
self.state = self.STATE_ERROR
raise
def connect(self, host = "localhost", port = 3306, user = "", password = "", db = "", autocommit = None, charset = None, use_unicode=False):
"""connects to the given host and port with user and password"""
#self.log.debug("connect mysql client %s %s %s %s %s", id(self), host, port, user, password)
try:
- #print 'connect', host, user, password, db
#parse addresses of form str <host:port>
- if type(host) == str:
- if host[0] == '/': #assume unix domain socket
- addr = host
- elif ':' in host:
- host, port = host.split(':')
- port = int(port)
- addr = (host, port)
- else:
- addr = (host, port)
+ assert type(host) == str, "make sure host is a string"
+
+ if host[0] == '/': #assume unix domain socket
+ addr = host
+ elif ':' in host:
+ host, port = host.split(':')
+ port = int(port)
+ addr = (host, port)
+ else:
+ addr = (host, port)
assert self.state == self.STATE_INIT, "make sure connection is not already connected or closed"
self.state = self.STATE_CONNECTING
self.socket = socket.create_connection(addr)
self.reader = BufferedPacketReader(self.socket, self.buffer)
self.writer = BufferedPacketWriter(self.socket, self.buffer)
self._handshake(user, password, db, charset)
#handshake complete, the client can now send commands
self.state = self.STATE_CONNECTED
if autocommit == False:
self.set_autocommit(False)
elif autocommit == True:
self.set_autocommit(True)
else:
pass #whatever is the default of the db (ON in the case of mysql)
if charset is not None:
self.set_charset(charset)
self.set_use_unicode(use_unicode)
return self
except gevent.Timeout:
self.state = self.STATE_INIT
raise
except ClientLoginError:
self.state = self.STATE_INIT
raise
except:
self.state = self.STATE_ERROR
raise
def close(self):
"""close this connection"""
assert self.is_connected(), "make sure connection is connected before closing"
if self._incommand != False: assert False, "cannot close while still in a command"
self._close()
def command(self, cmd, cmd_text):
"""sends a COM_XXX command with the given text and possibly return a resultset (select)"""
#print 'command', cmd, repr(cmd_text), type(cmd_text)
assert type(cmd_text) == str #as opposed to unicode
assert self.is_connected(), "make sure connection is connected before query"
if self._incommand != False: assert False, "overlapped commands not supported"
if self.current_resultset: assert False, "overlapped commands not supported, please read the previous resultset and close it"
try:
self._incommand = True
if self._time_command:
start_time = time.time()
self._send_command(cmd, cmd_text)
#read result, expect 1 of OK, ERROR or result set header
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
#print 'res', result
if self._time_command:
end_time = time.time()
self._command_time = end_time - start_time
if result == 0x00:
#OK, return (affected rows, last row id)
rowcount = self.reader.read_length_coded_binary()
lastrowid = self.reader.read_length_coded_binary()
return (rowcount, lastrowid)
elif result == 0xff:
raise ClientCommandError.from_error_packet(packet)
else: #result set
self.current_resultset = ResultSet(self, result)
return self.current_resultset
except socket.error, e:
(errorcode, errorstring) = e
if errorcode in [errno.ECONNABORTED, errno.ECONNREFUSED, errno.ECONNRESET, errno.EPIPE, errno.WSAECONNABORTED]:
self._incommand = False
self.close()
raise
finally:
self._incommand = False
def is_connected(self):
return self.state == self.STATE_CONNECTED
def query(self, cmd_text):
"""Sends a COM_QUERY command with the given text and return a resultset (select)"""
return self.command(COMMAND.QUERY, cmd_text)
def init_db(self, cmd_text):
"""Sends a COM_INIT command with the given text"""
return self.command(COMMAND.INITDB, cmd_text)
def set_autocommit(self, commit):
"""Sets autocommit setting for this connection. True = on, False = off"""
self.command(COMMAND.QUERY, "SET AUTOCOMMIT = %s" % ('1' if commit else '0'))
def commit(self):
"""Commits this connection"""
self.command(COMMAND.QUERY, "COMMIT")
def rollback(self):
"""Issues a rollback on this connection"""
self.command(COMMAND.QUERY, "ROLLBACK")
def set_charset(self, charset):
"""Sets the charset for this connections (used to decode string fields into unicode strings)"""
self.reader.reader.encoding = charset
def set_use_unicode(self, use_unicode):
self.reader.reader.use_unicode = use_unicode
def set_time_command(self, time_command):
self._time_command = time_command
def get_command_time(self):
return self._command_time
Connection.log = logging.getLogger(Connection.__name__)
def connect(*args, **kwargs):
return Connection().connect(*args, **kwargs)
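For orientation, a minimal usage sketch of the low-level client defined above; the host and credentials are made up, and a resultset must be read entirely before it is closed:

#minimal sketch, not part of the library; assumes a reachable local server
from geventmysql import client

cnn = client.connect(host = "127.0.0.1:3306", user = "root", password = "", db = "test")
rs = cnn.query("select 1")
print list(rs)  #iterate the resultset entirely...
rs.close()      #...before closing, otherwise ClientProgrammingError is raised
cnn.close()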
|
mthurlin/gevent-MySQL
|
6607598a1157eed970ff98cd2c7c4e265e980145
|
Make sure host in Connection.connect is a str. Otherwise addr isn't set
|
diff --git a/lib/geventmysql/client.py b/lib/geventmysql/client.py
index af94979..2702603 100644
--- a/lib/geventmysql/client.py
+++ b/lib/geventmysql/client.py
@@ -1,397 +1,397 @@
# Copyright (C) 2009, Hyves (Startphone Ltd.)
#
# This module is part of the Concurrence Framework and is released under
# the New BSD License: http://www.opensource.org/licenses/bsd-license.php
#TODO supporting closing a halfread resultset (e.g. automatically read and discard rest)
import errno
from geventmysql._mysql import Buffer
from geventmysql.mysql import BufferedPacketReader, BufferedPacketWriter, PACKET_READ_RESULT, CAPS, COMMAND
import logging
import time
from gevent import socket
import gevent
# From query: SHOW COLLATION;
charset_map = {}
charset_map["big5"] = 1
charset_map["dec8"] = 3
charset_map["cp850"] = 4
charset_map["hp8"] = 6
charset_map["koi8r"] = 7
charset_map["latin1"] = 8
charset_map["latin1"] = 8
charset_map["latin2"] = 9
charset_map["swe7"] = 10
charset_map["ascii"] = 11
charset_map["ujis"] = 12
charset_map["sjis"] = 13
charset_map["hebrew"] = 16
charset_map["tis620"] = 18
charset_map["euckr"] = 19
charset_map["koi8u"] = 22
charset_map["gb2312"] = 24
charset_map["greek"] = 25
charset_map["cp1250"] = 26
charset_map["gbk"] = 28
charset_map["latin5"] = 30
charset_map["armscii8"] = 32
charset_map["utf8"] = 33
charset_map["utf8"] = 33
charset_map["ucs2"] = 35
charset_map["cp866"] = 36
charset_map["keybcs2"] = 37
charset_map["macce"] = 38
charset_map["macroman"] = 39
charset_map["cp852"] = 40
charset_map["latin7"] = 41
charset_map["cp1251"] = 51
charset_map["cp1256"] = 57
charset_map["cp1257"] = 59
charset_map["binary"] = 63
charset_map["geostd8"] = 92
charset_map["cp932"] = 95
charset_map["eucjpms"] = 97
try:
#python 2.6
import hashlib
SHA = hashlib.sha1
except ImportError:
#python 2.5
import sha
SHA = sha.new
#import time
class ClientError(Exception):
@classmethod
def from_error_packet(cls, packet, skip = 8):
packet.skip(skip)
return cls(packet.read_bytes(packet.remaining))
class ClientLoginError(ClientError): pass
class ClientCommandError(ClientError): pass
class ClientProgrammingError(ClientError): pass
class ResultSet(object):
"""Represents the current resultset being read from a Connection.
The resultset implements an iterator over rows. A Resultset must
be iterated entirely and closed explicitly."""
STATE_INIT = 0
STATE_OPEN = 1
STATE_EOF = 2
STATE_CLOSED = 3
def __init__(self, connection, field_count):
self.state = self.STATE_INIT
self.connection = connection
self.fields = connection.reader.read_fields(field_count)
self.state = self.STATE_OPEN
def __iter__(self):
assert self.state == self.STATE_OPEN, "cannot iterate a resultset when it is not open"
for row in self.connection.reader.read_rows(self.fields):
yield row
self.state = self.STATE_EOF
def close(self, connection_close = False):
"""Closes the current resultset. Make sure you have iterated over all rows before closing it!"""
#print 'close on ResultSet', id(self.connection)
if self.state != self.STATE_EOF and not connection_close:
raise ClientProgrammingError("a resultset can only be closed after it has been read entirely!")
connection = self.connection
del self.connection
del self.fields
connection._close_current_resultset(self)
self.state = self.STATE_CLOSED
class Connection(object):
"""Represents a single connection to a MySQL Database host."""
STATE_ERROR = -1
STATE_INIT = 0
STATE_CONNECTING = 1
STATE_CONNECTED = 2
STATE_CLOSING = 3
STATE_CLOSED = 4
def __init__(self):
self.state = self.STATE_INIT
self.buffer = Buffer(1024 * 16)
self.socket = None
self.reader = None
self.writer = None
self._time_command = False #whether to keep timing stats on a cmd
self._command_time = -1
self._incommand = False
self.current_resultset = None
def _scramble(self, password, seed):
"""taken from java jdbc driver, scrambles the password using the given seed
according to the mysql login protocol"""
stage1 = SHA(password).digest()
stage2 = SHA(stage1).digest()
md = SHA()
md.update(seed)
md.update(stage2)
#i love python :-):
return ''.join(map(chr, [x ^ ord(stage1[i]) for i, x in enumerate(map(ord, md.digest()))]))
def _handshake(self, user, password, database, charset):
"""performs the mysql login handshake"""
#init buffer for reading (both pos and lim = 0)
self.buffer.clear()
self.buffer.flip()
#read server welcome
packet = self.reader.read_packet()
self.protocol_version = packet.read_byte() #normally this would be 10 (0xa)
if self.protocol_version == 0xff:
#error on initial greeting, possibly a 'too many connections' error
raise ClientLoginError.from_error_packet(packet, skip = 2)
elif self.protocol_version == 0xa:
pass #expected
else:
assert False, "Unexpected protocol version %02x" % self.protocol_version
self.server_version = packet.read_bytes_until(0)
packet.skip(4) #thread_id
scramble_buff = packet.read_bytes(8)
packet.skip(1) #filler
server_caps = packet.read_short()
#CAPS.dbg(server_caps)
if not server_caps & CAPS.PROTOCOL_41:
assert False, "<4.1 auth not supported"
server_language = packet.read_byte()
server_status = packet.read_short()
packet.skip(13) #filler
if packet.remaining:
scramble_buff += packet.read_bytes_until(0)
else:
assert False, "<4.1 auth not supported"
client_caps = server_caps
#always turn off compression
client_caps &= ~CAPS.COMPRESS
client_caps &= ~CAPS.NO_SCHEMA
if not server_caps & CAPS.CONNECT_WITH_DB and database:
assert False, "initial db given but not supported by server"
if server_caps & CAPS.CONNECT_WITH_DB and not database:
client_caps &= ~CAPS.CONNECT_WITH_DB
#build and write our answer to the initial handshake packet
self.writer.clear()
self.writer.start()
self.writer.write_int(client_caps)
self.writer.write_int(1024 * 1024 * 32) #32mb max packet
if charset:
self.writer.write_byte(charset_map[charset.replace("-", "")])
else:
self.writer.write_byte(server_language)
self.writer.write_bytes('\0' * 23) #filler
self.writer.write_bytes(user + '\0')
if password:
self.writer.write_byte(20)
self.writer.write_bytes(self._scramble(password, scramble_buff))
else:
self.writer.write_byte(0)
if database:
self.writer.write_bytes(database + '\0')
self.writer.finish(1)
self.writer.flush()
#read final answer from server
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
if result == 0xff:
raise ClientLoginError.from_error_packet(packet)
elif result == 0xfe:
assert False, "old password handshake not implemented"
def _close_current_resultset(self, resultset):
assert resultset == self.current_resultset
self.current_resultset = None
def _send_command(self, cmd, cmd_text):
"""sends a command with the given text"""
#self.log.debug('cmd %s %s', cmd, cmd_text)
#note: we are not using the normal writer.start/finish here, because the cmd
#might not fit in the buffer, causing flushes in write_string; in that case 'finish'
#would not be able to go back to the packet header to write the length
self.writer.clear()
self.writer.write_header(len(cmd_text) + 1 + 4, 0) #1 is len of cmd, 4 is len of header, 0 is packet number
self.writer.write_byte(cmd)
self.writer.write_bytes(cmd_text)
self.writer.flush()
def _close(self):
#self.log.debug("close mysql client %s", id(self))
try:
self.state = self.STATE_CLOSING
if self.current_resultset:
self.current_resultset.close(True)
self.socket.close()
self.state = self.STATE_CLOSED
except:
self.state = self.STATE_ERROR
raise
def connect(self, host = "localhost", port = 3306, user = "", password = "", db = "", autocommit = None, charset = None, use_unicode=False):
"""connects to the given host and port with user and password"""
#self.log.debug("connect mysql client %s %s %s %s %s", id(self), host, port, user, password)
try:
- #print 'connect', host, user, password, db
#parse addresses of form str <host:port>
- if type(host) == str:
- if host[0] == '/': #assume unix domain socket
- addr = host
- elif ':' in host:
- host, port = host.split(':')
- port = int(port)
- addr = (host, port)
- else:
- addr = (host, port)
+ assert type(host) == str, "make sure host is a string"
+
+ if host[0] == '/': #assume unix domain socket
+ addr = host
+ elif ':' in host:
+ host, port = host.split(':')
+ port = int(port)
+ addr = (host, port)
+ else:
+ addr = (host, port)
assert self.state == self.STATE_INIT, "make sure connection is not already connected or closed"
self.state = self.STATE_CONNECTING
self.socket = socket.create_connection(addr)
self.reader = BufferedPacketReader(self.socket, self.buffer)
self.writer = BufferedPacketWriter(self.socket, self.buffer)
self._handshake(user, password, db, charset)
#handshake complete, the client can now send commands
self.state = self.STATE_CONNECTED
if autocommit == False:
self.set_autocommit(False)
elif autocommit == True:
self.set_autocommit(True)
else:
pass #whatever is the default of the db (ON in the case of mysql)
if charset is not None:
self.set_charset(charset)
self.set_use_unicode(use_unicode)
return self
except gevent.Timeout:
self.state = self.STATE_INIT
raise
except ClientLoginError:
self.state = self.STATE_INIT
raise
except:
self.state = self.STATE_ERROR
raise
def close(self):
"""close this connection"""
assert self.is_connected(), "make sure connection is connected before closing"
if self._incommand != False: assert False, "cannot close while still in a command"
self._close()
def command(self, cmd, cmd_text):
"""sends a COM_XXX command with the given text and possibly return a resultset (select)"""
#print 'command', cmd, repr(cmd_text), type(cmd_text)
assert type(cmd_text) == str #as opposed to unicode
assert self.is_connected(), "make sure connection is connected before query"
if self._incommand != False: assert False, "overlapped commands not supported"
if self.current_resultset: assert False, "overlapped commands not supported, please read the previous resultset and close it"
try:
self._incommand = True
if self._time_command:
start_time = time.time()
self._send_command(cmd, cmd_text)
#read result, expect 1 of OK, ERROR or result set header
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
#print 'res', result
if self._time_command:
end_time = time.time()
self._command_time = end_time - start_time
if result == 0x00:
#OK, return (affected rows, last row id)
rowcount = self.reader.read_length_coded_binary()
lastrowid = self.reader.read_length_coded_binary()
return (rowcount, lastrowid)
elif result == 0xff:
raise ClientCommandError.from_error_packet(packet)
else: #result set
self.current_resultset = ResultSet(self, result)
return self.current_resultset
except socket.error, e:
(errorcode, errorstring) = e
if errorcode in [errno.ECONNABORTED, errno.ECONNREFUSED, errno.ECONNRESET, errno.EPIPE, errno.WSAECONNABORTED]:
self._incommand = False
self.close()
raise
finally:
self._incommand = False
def is_connected(self):
return self.state == self.STATE_CONNECTED
def query(self, cmd_text):
"""Sends a COM_QUERY command with the given text and return a resultset (select)"""
return self.command(COMMAND.QUERY, cmd_text)
def init_db(self, cmd_text):
"""Sends a COM_INIT command with the given text"""
return self.command(COMMAND.INITDB, cmd_text)
def set_autocommit(self, commit):
"""Sets autocommit setting for this connection. True = on, False = off"""
self.command(COMMAND.QUERY, "SET AUTOCOMMIT = %s" % ('1' if commit else '0'))
def commit(self):
"""Commits this connection"""
self.command(COMMAND.QUERY, "COMMIT")
def rollback(self):
"""Issues a rollback on this connection"""
self.command(COMMAND.QUERY, "ROLLBACK")
def set_charset(self, charset):
"""Sets the charset for this connections (used to decode string fields into unicode strings)"""
self.reader.reader.encoding = charset
def set_use_unicode(self, use_unicode):
self.reader.reader.use_unicode = use_unicode
def set_time_command(self, time_command):
self._time_command = time_command
def get_command_time(self):
return self._command_time
Connection.log = logging.getLogger(Connection.__name__)
def connect(*args, **kwargs):
return Connection().connect(*args, **kwargs)
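The commit above makes a non-str host fail fast with an assertion instead of silently leaving addr unbound. A standalone sketch of the parsing rules it enforces (not library code; the hostnames are made up):

def parse_addr(host, port = 3306):
    #mirrors Connection.connect: str only, then unix socket path, "host:port", or plain host
    assert type(host) == str, "make sure host is a string"
    if host[0] == '/':
        return host
    elif ':' in host:
        host, port = host.split(':')
        return (host, int(port))
    return (host, port)

assert parse_addr("/tmp/mysql.sock") == "/tmp/mysql.sock"
assert parse_addr("db.example.com:3307") == ("db.example.com", 3307)
assert parse_addr("localhost") == ("localhost", 3306)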
|
mthurlin/gevent-MySQL
|
e191afada185cf90e8904fc226352ee2565cb0da
|
Corrected problem when using %s in string arguments.
|
diff --git a/lib/geventmysql/__init__.py b/lib/geventmysql/__init__.py
index b847e12..cda7df7 100644
--- a/lib/geventmysql/__init__.py
+++ b/lib/geventmysql/__init__.py
@@ -1,228 +1,227 @@
# Copyright (C) 2009, Hyves (Startphone Ltd.)
#
# This module is part of the Concurrence Framework and is released under
# the New BSD License: http://www.opensource.org/licenses/bsd-license.php
#this is a dbapi/mysqldb compatible wrapper around the lowlevel
#client in client.py
#TODO weak ref on connection in cursor
import sys
import logging
import exceptions
import gevent
TaskletExit = gevent.GreenletExit
from datetime import datetime, date
from geventmysql import client
threadsafety = 1
apilevel = "2.0"
paramstyle = "format"
default_charset = sys.getdefaultencoding()
class Error(exceptions.StandardError): pass
class Warning(exceptions.StandardError): pass
class InterfaceError(Error): pass
class DatabaseError(Error): pass
class InternalError(DatabaseError): pass
class OperationalError(DatabaseError): pass
class ProgrammingError(DatabaseError): pass
class IntegrityError(DatabaseError): pass
class DataError(DatabaseError): pass
class NotSupportedError(DatabaseError): pass
class TimeoutError(DatabaseError): pass
class Cursor(object):
log = logging.getLogger('Cursor')
def __init__(self, connection):
self.connection = connection
self.result = None
self.closed = False
self._close_result()
def _close_result(self):
#make sure any previous resultset is closed correctly
if self.result is not None:
#make sure any left over resultset is read from the db, otherwise
#the connection would be in an inconsistent state
try:
while True:
self.result_iter.next()
except StopIteration:
pass #done
self.result.close()
self.description = None
self.result = None
self.result_iter = None
self.lastrowid = None
self.rowcount = -1
def _escape_string(self, s, replace = {'\0': '\\0', '\n': '\\n', '\r': '\\r', '\\': '\\\\', "'": "\\'", '"': '\\"', '\x1a': '\\Z'}):
"""take from mysql src code:"""
- #TODO how fast is this?, do this in C/pyrex?
+ #TODO how fast is this?, do this in C/pyrex?
get = replace.get
return "".join([get(ch, ch) for ch in s])
def _wrap_exception(self, e, msg):
self.log.exception(msg)
if isinstance(e, gevent.Timeout):
return TimeoutError(msg + ': ' + str(e))
else:
return Error(msg + ': ' + str(e))
def execute(self, qry, args = []):
#print repr(qry), repr(args), self.connection.charset
-
if self.closed:
raise ProgrammingError('this cursor is already closed')
if type(qry) == unicode:
#we will only communicate in 8-bits with mysql
qry = qry.encode(self.connection.charset)
-
+
try:
self._close_result() #close any previous result if needed
#substitute arguments
+ params = []
for arg in args:
if type(arg) == str:
- qry = qry.replace('%s', "'%s'" % self._escape_string(arg), 1)
+ params.append("'%s'" % self._escape_string(arg))
elif type(arg) == unicode:
- qry = qry.replace('%s', "'%s'" % self._escape_string(arg).encode(self.connection.charset), 1)
- elif type(arg) == int:
- qry = qry.replace('%s', str(arg), 1)
- elif type(arg) == long:
- qry = qry.replace('%s', str(arg), 1)
+ params.append("'%s'" % self._escape_string(arg).encode(self.connection.charset))
+ elif isinstance(arg, (int, long, float)):
+ params.append(str(arg))
elif arg is None:
- qry = qry.replace('%s', 'null', 1)
+ params.append('null')
elif isinstance(arg, datetime):
- qry = qry.replace('%s', "'%s'" % arg.strftime('%Y-%m-%d %H:%M:%S'), 1)
+ params.append("'%s'" % arg.strftime('%Y-%m-%d %H:%M:%S'))
elif isinstance(arg, date):
- qry = qry.replace('%s', "'%s'" % arg.strftime('%Y-%m-%d'), 1)
+ params.append("'%s'" % arg.strftime('%Y-%m-%d'))
else:
assert False, "unknown argument type: %s %s" % (type(arg), repr(arg))
-
+
+ qry = qry % tuple(params)
result = self.connection.client.query(qry)
#process result if necessary
if isinstance(result, client.ResultSet):
self.description = tuple(((name, type_code, None, None, None, None, None) for name, type_code, charsetnr in result.fields))
self.result = result
self.result_iter = iter(result)
self.lastrowid = None
self.rowcount = -1
else:
self.rowcount, self.lastrowid = result
self.description = None
self.result = None
except TaskletExit:
raise
except Exception, e:
raise self._wrap_exception(e, "an error occurred while executing qry %s" % (qry, ))
def fetchall(self):
try:
return list(self.result_iter)
except TaskletExit:
raise
except Exception, e:
raise self._wrap_exception(e, "an error occurred while fetching results")
def fetchone(self):
try:
return self.result_iter.next()
except StopIteration:
return None
except TaskletExit:
raise
except Exception, e:
raise self._wrap_exception(e, "an error occurred while fetching results")
def close(self):
if self.closed:
raise ProgrammingError("cannot cursor twice")
try:
self._close_result()
self.closed = True
except TaskletExit:
raise
except Exception, e:
raise self._wrap_exception(e, "an error occurred while closing cursor")
class Connection(object):
def __init__(self, *args, **kwargs):
self.kwargs = kwargs.copy()
if not 'autocommit' in self.kwargs:
#we set autocommit explicitly to OFF as required by python db api, because default of mysql would be ON
self.kwargs['autocommit'] = False
else:
pass #user specified explicitly what they wanted for autocommit
if 'charset' in self.kwargs:
self.charset = self.kwargs['charset']
if 'use_unicode' in self.kwargs and self.kwargs['use_unicode'] == True:
pass #charset stays in args, and triggers unicode output in low-level client
else:
del self.kwargs['charset']
else:
self.charset = default_charset
self.client = client.Connection() #low level mysql client
self.client.connect(*args, **self.kwargs)
self.closed = False
def close(self):
#print 'dbapi Connection close'
if self.closed:
raise ProgrammingError("cannot close connection twice")
try:
self.client.close()
del self.client
self.closed = True
except TaskletExit:
raise
except Exception, e:
msg = "an error occurred while closing connection: "
self.log.exception(msg)
raise Error(msg + str(e))
def cursor(self):
if self.closed:
raise ProgrammingError("this connection is already closed")
return Cursor(self)
def get_server_info(self):
return self.client.server_version
def rollback(self):
self.client.rollback()
def commit(self):
self.client.commit()
@property
def socket(self):
return self.client.socket
def connect(*args, **kwargs):
return Connection(*args, **kwargs)
Connect = connect
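The fix above replaces sequential '%s' substitution with a single formatting pass over pre-quoted parameters; the old replace-based loop corrupted the query whenever an argument itself contained '%s'. A standalone demonstration with made-up values (quoting shown without the escaping step for brevity):

qry = "insert into t (a, b) values (%s, %s)"
args = ["100%s discount", "x"]

#old approach: the '%s' inside the first argument swallows the second placeholder
old = qry
for a in args:
    old = old.replace('%s', "'%s'" % a, 1)
print old           #insert into t (a, b) values ('100'x' discount', %s)

#new approach: quote everything first, then substitute in one pass
params = tuple("'%s'" % a for a in args)
print qry % params  #insert into t (a, b) values ('100%s discount', 'x')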
|
mthurlin/gevent-MySQL
|
2a911f0f118a34d94a9365d5c25dd47f6ba14962
|
Changed "passwd" to "password" to conform to DBAPI-spec
|
diff --git a/examples/benchmark.py b/examples/benchmark.py
index f3e9b0a..331f88d 100644
--- a/examples/benchmark.py
+++ b/examples/benchmark.py
@@ -1,28 +1,28 @@
import geventmysql
import time
import os
import gevent
curtime = time.time if os.name == "posix" else time.clock
C = 50
N = 1000
def task():
- conn = geventmysql.connect(host="127.0.0.1", user="root", passwd="")
+ conn = geventmysql.connect(host="127.0.0.1", user="root", password="")
cur = conn.cursor()
for i in range(N):
cur.execute("SELECT 1")
res = cur.fetchall()
start = curtime()
gevent.joinall([gevent.spawn(task) for i in range(C)])
elapsed = curtime() - start
num = C * N
print "Performed %d queries in %.2f seconds : %.1f queries/sec" % (num, elapsed, num / elapsed)
diff --git a/examples/simple_query.py b/examples/simple_query.py
index 01f073c..f88ac58 100644
--- a/examples/simple_query.py
+++ b/examples/simple_query.py
@@ -1,13 +1,13 @@
import geventmysql
-conn = geventmysql.connect(host="127.0.0.1", user="root", passwd="")
+conn = geventmysql.connect(host="127.0.0.1", user="root", password="")
cursor = conn.cursor()
cursor.execute("SELECT 1")
print cursor.fetchall()
cursor.close()
conn.close()
\ No newline at end of file
diff --git a/lib/geventmysql/client.py b/lib/geventmysql/client.py
index 09d572f..4c0e7ad 100644
--- a/lib/geventmysql/client.py
+++ b/lib/geventmysql/client.py
@@ -1,390 +1,390 @@
# Copyright (C) 2009, Hyves (Startphone Ltd.)
#
# This module is part of the Concurrence Framework and is released under
# the New BSD License: http://www.opensource.org/licenses/bsd-license.php
#TODO supporting closing a halfread resultset (e.g. automatically read and discard rest)
from geventmysql._mysql import Buffer
from geventmysql.mysql import BufferedPacketReader, BufferedPacketWriter, PACKET_READ_RESULT, CAPS, COMMAND
import logging
import time
from gevent import socket
import gevent
# From query: SHOW COLLATION;
charset_map = {}
charset_map["big5"] = 1
charset_map["dec8"] = 3
charset_map["cp850"] = 4
charset_map["hp8"] = 6
charset_map["koi8r"] = 7
charset_map["latin1"] = 8
charset_map["latin1"] = 8
charset_map["latin2"] = 9
charset_map["swe7"] = 10
charset_map["ascii"] = 11
charset_map["ujis"] = 12
charset_map["sjis"] = 13
charset_map["hebrew"] = 16
charset_map["tis620"] = 18
charset_map["euckr"] = 19
charset_map["koi8u"] = 22
charset_map["gb2312"] = 24
charset_map["greek"] = 25
charset_map["cp1250"] = 26
charset_map["gbk"] = 28
charset_map["latin5"] = 30
charset_map["armscii8"] = 32
charset_map["utf8"] = 33
charset_map["utf8"] = 33
charset_map["ucs2"] = 35
charset_map["cp866"] = 36
charset_map["keybcs2"] = 37
charset_map["macce"] = 38
charset_map["macroman"] = 39
charset_map["cp852"] = 40
charset_map["latin7"] = 41
charset_map["cp1251"] = 51
charset_map["cp1256"] = 57
charset_map["cp1257"] = 59
charset_map["binary"] = 63
charset_map["geostd8"] = 92
charset_map["cp932"] = 95
charset_map["eucjpms"] = 97
try:
#python 2.6
import hashlib
SHA = hashlib.sha1
except ImportError:
#python 2.5
import sha
SHA = sha.new
#import time
class ClientError(Exception):
@classmethod
def from_error_packet(cls, packet, skip = 8):
packet.skip(skip)
return cls(packet.read_bytes(packet.remaining))
class ClientLoginError(ClientError): pass
class ClientCommandError(ClientError): pass
class ClientProgrammingError(ClientError): pass
class ResultSet(object):
"""Represents the current resultset being read from a Connection.
The resultset implements an iterator over rows. A Resultset must
be iterated entirely and closed explicitly."""
STATE_INIT = 0
STATE_OPEN = 1
STATE_EOF = 2
STATE_CLOSED = 3
def __init__(self, connection, field_count):
self.state = self.STATE_INIT
self.connection = connection
self.fields = connection.reader.read_fields(field_count)
self.state = self.STATE_OPEN
def __iter__(self):
assert self.state == self.STATE_OPEN, "cannot iterate a resultset when it is not open"
for row in self.connection.reader.read_rows(self.fields):
yield row
self.state = self.STATE_EOF
def close(self, connection_close = False):
"""Closes the current resultset. Make sure you have iterated over all rows before closing it!"""
#print 'close on ResultSet', id(self.connection)
if self.state != self.STATE_EOF and not connection_close:
raise ClientProgrammingError("a resultset can only be closed after it has been read entirely!")
connection = self.connection
del self.connection
del self.fields
connection._close_current_resultset(self)
self.state = self.STATE_CLOSED
class Connection(object):
"""Represents a single connection to a MySQL Database host."""
STATE_ERROR = -1
STATE_INIT = 0
STATE_CONNECTING = 1
STATE_CONNECTED = 2
STATE_CLOSING = 3
STATE_CLOSED = 4
def __init__(self):
self.state = self.STATE_INIT
self.buffer = Buffer(1024 * 16)
self.socket = None
self.reader = None
self.writer = None
self._time_command = False #whether to keep timing stats on a cmd
self._command_time = -1
self._incommand = False
self.current_resultset = None
def _scramble(self, password, seed):
"""taken from java jdbc driver, scrambles the password using the given seed
according to the mysql login protocol"""
stage1 = SHA(password).digest()
stage2 = SHA(stage1).digest()
md = SHA()
md.update(seed)
md.update(stage2)
#i love python :-):
return ''.join(map(chr, [x ^ ord(stage1[i]) for i, x in enumerate(map(ord, md.digest()))]))
def _handshake(self, user, password, database, charset):
"""performs the mysql login handshake"""
#init buffer for reading (both pos and lim = 0)
self.buffer.clear()
self.buffer.flip()
#read server welcome
packet = self.reader.read_packet()
self.protocol_version = packet.read_byte() #normally this would be 10 (0xa)
if self.protocol_version == 0xff:
#error on initial greeting, possibly a 'too many connections' error
raise ClientLoginError.from_error_packet(packet, skip = 2)
elif self.protocol_version == 0xa:
pass #expected
else:
assert False, "Unexpected protocol version %02x" % self.protocol_version
self.server_version = packet.read_bytes_until(0)
packet.skip(4) #thread_id
scramble_buff = packet.read_bytes(8)
packet.skip(1) #filler
server_caps = packet.read_short()
#CAPS.dbg(server_caps)
if not server_caps & CAPS.PROTOCOL_41:
assert False, "<4.1 auth not supported"
server_language = packet.read_byte()
server_status = packet.read_short()
packet.skip(13) #filler
if packet.remaining:
scramble_buff += packet.read_bytes_until(0)
else:
assert False, "<4.1 auth not supported"
client_caps = server_caps
#always turn off compression
client_caps &= ~CAPS.COMPRESS
client_caps &= ~CAPS.NO_SCHEMA
if not server_caps & CAPS.CONNECT_WITH_DB and database:
assert False, "initial db given but not supported by server"
if server_caps & CAPS.CONNECT_WITH_DB and not database:
client_caps &= ~CAPS.CONNECT_WITH_DB
#build and write our answer to the initial handshake packet
self.writer.clear()
self.writer.start()
self.writer.write_int(client_caps)
self.writer.write_int(1024 * 1024 * 32) #32mb max packet
if charset:
self.writer.write_byte(charset_map[charset.replace("-", "")])
else:
self.writer.write_byte(server_language)
self.writer.write_bytes('\0' * 23) #filler
self.writer.write_bytes(user + '\0')
if password:
self.writer.write_byte(20)
self.writer.write_bytes(self._scramble(password, scramble_buff))
else:
self.writer.write_byte(0)
if database:
self.writer.write_bytes(database + '\0')
self.writer.finish(1)
self.writer.flush()
#read final answer from server
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
if result == 0xff:
raise ClientLoginError.from_error_packet(packet)
elif result == 0xfe:
assert False, "old password handshake not implemented"
def _close_current_resultset(self, resultset):
assert resultset == self.current_resultset
self.current_resultset = None
def _send_command(self, cmd, cmd_text):
"""sends a command with the given text"""
#self.log.debug('cmd %s %s', cmd, cmd_text)
#note: we are not using the normal writer.start/finish here, because the cmd
#might not fit in the buffer, causing flushes in write_string; in that case 'finish'
#would not be able to go back to the packet header to write the length
self.writer.clear()
self.writer.write_header(len(cmd_text) + 1 + 4, 0) #1 is len of cmd, 4 is len of header, 0 is packet number
self.writer.write_byte(cmd)
self.writer.write_bytes(cmd_text)
self.writer.flush()
def _close(self):
#self.log.debug("close mysql client %s", id(self))
try:
self.state = self.STATE_CLOSING
if self.current_resultset:
self.current_resultset.close(True)
self.socket.close()
self.state = self.STATE_CLOSED
except:
self.state = self.STATE_ERROR
raise
- def connect(self, host = "localhost", port = 3306, user = "", passwd = "", db = "", autocommit = None, charset = None, use_unicode=False):
- """connects to the given host and port with user and passwd"""
- #self.log.debug("connect mysql client %s %s %s %s %s", id(self), host, port, user, passwd)
+ def connect(self, host = "localhost", port = 3306, user = "", password = "", db = "", autocommit = None, charset = None, use_unicode=False):
+ """connects to the given host and port with user and password"""
+ #self.log.debug("connect mysql client %s %s %s %s %s", id(self), host, port, user, password)
try:
- #print 'connect', host, user, passwd, db
+ #print 'connect', host, user, password, db
#parse addresses of form str <host:port>
if type(host) == str:
if host[0] == '/': #assume unix domain socket
addr = host
elif ':' in host:
host, port = host.split(':')
port = int(port)
addr = (host, port)
else:
addr = (host, port)
assert self.state == self.STATE_INIT, "make sure connection is not already connected or closed"
self.state = self.STATE_CONNECTING
self.socket = socket.create_connection(addr)
self.reader = BufferedPacketReader(self.socket, self.buffer)
self.writer = BufferedPacketWriter(self.socket, self.buffer)
- self._handshake(user, passwd, db, charset)
+ self._handshake(user, password, db, charset)
#handshake complete, the client can now send commands
self.state = self.STATE_CONNECTED
if autocommit == False:
self.set_autocommit(False)
elif autocommit == True:
self.set_autocommit(True)
else:
pass #whatever is the default of the db (ON in the case of mysql)
if charset is not None:
self.set_charset(charset)
self.set_use_unicode(use_unicode)
return self
except gevent.Timeout:
self.state = self.STATE_INIT
raise
except ClientLoginError:
self.state = self.STATE_INIT
raise
except:
self.state = self.STATE_ERROR
raise
def close(self):
"""close this connection"""
assert self.is_connected(), "make sure connection is connected before closing"
if self._incommand != False: assert False, "cannot close while still in a command"
self._close()
def command(self, cmd, cmd_text):
"""sends a COM_XXX command with the given text and possibly return a resultset (select)"""
#print 'command', cmd, repr(cmd_text), type(cmd_text)
assert type(cmd_text) == str #as opposed to unicode
assert self.is_connected(), "make sure connection is connected before query"
if self._incommand != False: assert False, "overlapped commands not supported"
if self.current_resultset: assert False, "overlapped commands not supported, please read the previous resultset and close it"
try:
self._incommand = True
if self._time_command:
start_time = time.time()
self._send_command(cmd, cmd_text)
#read result, expect 1 of OK, ERROR or result set header
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
#print 'res', result
if self._time_command:
end_time = time.time()
self._command_time = end_time - start_time
if result == 0x00:
#OK, return (affected rows, last row id)
rowcount = self.reader.read_length_coded_binary()
lastrowid = self.reader.read_length_coded_binary()
return (rowcount, lastrowid)
elif result == 0xff:
raise ClientCommandError.from_error_packet(packet)
else: #result set
self.current_resultset = ResultSet(self, result)
return self.current_resultset
finally:
self._incommand = False
def is_connected(self):
return self.state == self.STATE_CONNECTED
def query(self, cmd_text):
"""Sends a COM_QUERY command with the given text and return a resultset (select)"""
return self.command(COMMAND.QUERY, cmd_text)
def init_db(self, cmd_text):
"""Sends a COM_INIT command with the given text"""
return self.command(COMMAND.INITDB, cmd_text)
def set_autocommit(self, commit):
"""Sets autocommit setting for this connection. True = on, False = off"""
self.command(COMMAND.QUERY, "SET AUTOCOMMIT = %s" % ('1' if commit else '0'))
def commit(self):
"""Commits this connection"""
self.command(COMMAND.QUERY, "COMMIT")
def rollback(self):
"""Issues a rollback on this connection"""
self.command(COMMAND.QUERY, "ROLLBACK")
def set_charset(self, charset):
"""Sets the charset for this connections (used to decode string fields into unicode strings)"""
self.reader.reader.encoding = charset
def set_use_unicode(self, use_unicode):
self.reader.reader.use_unicode = use_unicode
def set_time_command(self, time_command):
self._time_command = time_command
def get_command_time(self):
return self._command_time
Connection.log = logging.getLogger(Connection.__name__)
def connect(*args, **kwargs):
return Connection().connect(*args, **kwargs)
diff --git a/test/testmysql.py b/test/testmysql.py
index a6a459f..62c2480 100644
--- a/test/testmysql.py
+++ b/test/testmysql.py
@@ -1,613 +1,613 @@
# -*- coding: latin1 -*-
from __future__ import with_statement
import time
import datetime
import logging
import unittest
import gevent
import geventmysql as dbapi
from geventmysql import client
from geventmysql._mysql import PacketReadError
DB_HOST = '127.0.0.1:3306'
DB_USER = 'gevent_test'
DB_PASSWD = 'gevent_test'
DB_DB = 'gevent_test'
class TestMySQL(unittest.TestCase):
log = logging.getLogger('TestMySQL')
def testMySQLClient(self):
cnn = client.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB)
+ password = DB_PASSWD, db = DB_DB)
rs = cnn.query("select 1")
self.assertEqual([(1,)], list(rs))
rs.close()
cnn.close()
def testConnectNoDb(self):
- cnn = client.connect(host = DB_HOST, user = DB_USER, passwd = DB_PASSWD)
+ cnn = client.connect(host = DB_HOST, user = DB_USER, password = DB_PASSWD)
rs = cnn.query("select 1")
self.assertEqual([(1,)], list(rs))
rs.close()
cnn.close()
def testMySQLClient2(self):
cnn = client.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB)
+ password = DB_PASSWD, db = DB_DB)
cnn.query("truncate tbltest")
for i in range(10):
self.assertEquals((1, 0), cnn.query("insert into tbltest (test_id, test_string) values (%d, 'test%d')" % (i, i)))
rs = cnn.query("select test_id, test_string from tbltest")
#trying to close it now would give an error, i.e. we always need to read
#the result from the database, otherwise the connection would be in the wrong state
try:
rs.close()
self.fail('expected exception')
except client.ClientProgrammingError:
pass
for i, row in enumerate(rs):
self.assertEquals((i, 'test%d' % i), row)
rs.close()
cnn.close()
def testMySQLTimeout(self):
cnn = client.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB)
+ password = DB_PASSWD, db = DB_DB)
rs = cnn.query("select sleep(2)")
list(rs)
rs.close()
from gevent import Timeout
start = time.time()
try:
def delay():
cnn.query("select sleep(4)")
self.fail('expected timeout')
gevent.with_timeout(2, delay)
except Timeout:
end = time.time()
self.assertAlmostEqual(2.0, end - start, places = 1)
cnn.close()
def testParallelQuery(self):
def query(s):
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB)
+ password = DB_PASSWD, db = DB_DB)
cur = cnn.cursor()
cur.execute("select sleep(%d)" % s)
cur.close()
cnn.close()
start = time.time()
ch1 = gevent.spawn(query, 1)
ch2 = gevent.spawn(query, 2)
ch3 = gevent.spawn(query, 3)
gevent.joinall([ch1, ch2, ch3])
end = time.time()
self.assertAlmostEqual(3.0, end - start, places = 1)
def testMySQLDBAPI(self):
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB)
+ password = DB_PASSWD, db = DB_DB)
cur = cnn.cursor()
cur.execute("truncate tbltest")
for i in range(10):
cur.execute("insert into tbltest (test_id, test_string) values (%d, 'test%d')" % (i, i))
cur.close()
cur = cnn.cursor()
cur.execute("select test_id, test_string from tbltest")
self.assertEquals((0, 'test0'), cur.fetchone())
#check that fetchall gets the remainder
self.assertEquals([(1, 'test1'), (2, 'test2'), (3, 'test3'), (4, 'test4'), (5, 'test5'), (6, 'test6'), (7, 'test7'), (8, 'test8'), (9, 'test9')], cur.fetchall())
#another query on the same cursor should work
cur.execute("select test_id, test_string from tbltest")
#fetch some but not all
self.assertEquals((0, 'test0'), cur.fetchone())
self.assertEquals((1, 'test1'), cur.fetchone())
self.assertEquals((2, 'test2'), cur.fetchone())
#close should work even with a half-read resultset
cur.close()
#this should not work, cursor was closed
try:
cur.execute("select * from tbltest")
self.fail("expected exception")
except dbapi.ProgrammingError:
pass
def testLargePackets(self):
cnn = client.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB)
+ password = DB_PASSWD, db = DB_DB)
cnn.query("truncate tbltest")
c = cnn.buffer.capacity
blob = '0123456789'
while 1:
cnn.query("insert into tbltest (test_id, test_blob) values (%d, '%s')" % (len(blob), blob))
if len(blob) > (c * 2): break
blob = blob * 2
rs = cnn.query("select test_id, test_blob from tbltest")
for row in rs:
self.assertEquals(row[0], len(row[1]))
self.assertEquals(blob[:row[0]], row[1])
rs.close()
#reread, second time, oversize packet is already present
rs = cnn.query("select test_id, test_blob from tbltest")
for row in rs:
self.assertEquals(row[0], len(row[1]))
self.assertEquals(blob[:row[0]], row[1])
rs.close()
cnn.close()
#have a very low max packet size for oversize packets
#and check that an exception is thrown when trying to read larger packets
from geventmysql import _mysql
_mysql.MAX_PACKET_SIZE = 1024 * 4
cnn = client.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB)
+ password = DB_PASSWD, db = DB_DB)
try:
rs = cnn.query("select test_id, test_blob from tbltest")
for row in rs:
self.assertEquals(row[0], len(row[1]))
self.assertEquals(blob[:row[0]], row[1])
self.fail()
except PacketReadError:
pass
finally:
try:
rs.close()
except:
pass
cnn.close()
def testEscapeArgs(self):
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB)
+ password = DB_PASSWD, db = DB_DB)
cur = cnn.cursor()
cur.execute("truncate tbltest")
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (1, 'piet'))
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (2, 'klaas'))
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (3, "pi'et"))
#classic sql injection, would return all rows if no proper escaping is done
cur.execute("select test_id, test_string from tbltest where test_string = %s", ("piet' OR 'a' = 'a",))
self.assertEquals([], cur.fetchall()) #assert no rows are found
#but we should still be able to find the piet with the apostrophe in its name
cur.execute("select test_id, test_string from tbltest where test_string = %s", ("pi'et",))
self.assertEquals([(3, "pi'et")], cur.fetchall())
#also we should be able to insert and retrieve blob/string with all possible bytes transparently
chars = ''.join([chr(i) for i in range(256)])
cur.execute("insert into tbltest (test_id, test_string, test_blob) values (%s, %s, %s)", (4, chars, chars))
cur.execute("select test_string, test_blob from tbltest where test_id = %s", (4,))
#self.assertEquals([(chars, chars)], cur.fetchall())
s, b = cur.fetchall()[0]
#test blob
self.assertEquals(256, len(b))
self.assertEquals(chars, b)
#test string
self.assertEquals(256, len(s))
self.assertEquals(chars, s)
cur.close()
cnn.close()
def testSelectUnicode(self):
s = u'r\xc3\xa4ksm\xc3\xb6rg\xc3\xa5s'
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB,
+ password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("truncate tbltest")
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (1, 'piet'))
cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (2, s))
cur.execute(u"insert into tbltest (test_id, test_string) values (%s, %s)", (3, s))
cur.execute("select test_id, test_string from tbltest")
result = cur.fetchall()
self.assertEquals([(1, u'piet'), (2, s), (3, s)], result)
#test that we can still cleanly roundtrip a blob (it should not be encoded if we pass
#it as a 'str' argument), even though we pass the qry itself as unicode
blob = ''.join([chr(i) for i in range(256)])
cur.execute(u"insert into tbltest (test_id, test_blob) values (%s, %s)", (4, blob))
cur.execute("select test_blob from tbltest where test_id = %s", (4,))
b2 = cur.fetchall()[0][0]
self.assertEquals(str, type(b2))
self.assertEquals(256, len(b2))
self.assertEquals(blob, b2)
def testAutoInc(self):
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB)
+ password = DB_PASSWD, db = DB_DB)
cur = cnn.cursor()
cur.execute("truncate tblautoincint")
cur.execute("ALTER TABLE tblautoincint AUTO_INCREMENT = 100")
cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(100, cur.lastrowid)
cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(101, cur.lastrowid)
cur.execute("ALTER TABLE tblautoincint AUTO_INCREMENT = 4294967294")
cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(4294967294, cur.lastrowid)
cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(4294967295, cur.lastrowid)
cur.execute("truncate tblautoincbigint")
cur.execute("ALTER TABLE tblautoincbigint AUTO_INCREMENT = 100")
cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(100, cur.lastrowid)
cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(101, cur.lastrowid)
cur.execute("ALTER TABLE tblautoincbigint AUTO_INCREMENT = 18446744073709551614")
cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
self.assertEqual(1, cur.rowcount)
self.assertEqual(18446744073709551614, cur.lastrowid)
#this fails on mysql, but that is a mysql problem
#cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
#self.assertEqual(1, cur.rowcount)
#self.assertEqual(18446744073709551615, cur.lastrowid)
cur.close()
cnn.close()
def testLengthCodedBinary(self):
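#mysql length-coded integers, as exercised below: values < 251 fit in a single byte;
#prefix 252 (0xfc) is followed by 2 bytes, 253 (0xfd) by 3 bytes, 254 (0xfe) by 8 bytes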
from geventmysql._mysql import Buffer, BufferUnderflowError
from geventmysql.mysql import PacketReader
def create_reader(bytes):
b = Buffer(1024)
for byte in bytes:
b.write_byte(byte)
b.flip()
p = PacketReader(b)
p.packet.position = b.position
p.packet.limit = b.limit
return p
p = create_reader([100])
self.assertEquals(100, p.read_length_coded_binary())
self.assertEquals(p.packet.position, p.packet.limit)
try:
p.read_length_coded_binary()
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([252])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([252, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
p = create_reader([252, 0xff, 0xff])
self.assertEquals(0xFFFF, p.read_length_coded_binary())
self.assertEquals(3, p.packet.limit)
self.assertEquals(3, p.packet.position)
try:
p = create_reader([253])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([253, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([253, 0xff, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
p = create_reader([253, 0xff, 0xff, 0xff])
self.assertEquals(0xFFFFFF, p.read_length_coded_binary())
self.assertEquals(4, p.packet.limit)
self.assertEquals(4, p.packet.position)
try:
p = create_reader([254])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([254, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
try:
p = create_reader([254, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff])
p.read_length_coded_binary()
self.fail('expected underflow')
except BufferUnderflowError:
pass
except:
self.fail('expected underflow')
p = create_reader([254, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff])
self.assertEquals(9, p.packet.limit)
self.assertEquals(0, p.packet.position)
self.assertEquals(0xFFFFFFFFFFFFFFFFL, p.read_length_coded_binary())
self.assertEquals(9, p.packet.limit)
self.assertEquals(9, p.packet.position)
def testBigInt(self):
"""Tests the behaviour of insert/select with bigint/long."""
BIGNUM = 112233445566778899
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB,
+ password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tblbigint")
cur.execute("""create table tblbigint (
test_id int(11) DEFAULT NULL,
test_bigint bigint DEFAULT NULL,
test_bigint2 bigint DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1""")
cur.execute("insert into tblbigint (test_id, test_bigint, test_bigint2) values (%s, " + str(BIGNUM) + ", %s)", (1, BIGNUM))
cur.execute(u"insert into tblbigint (test_id, test_bigint, test_bigint2) values (%s, " + str(BIGNUM) + ", %s)", (2, BIGNUM))
# Make sure both our inserts were correct (i.e., the big number was not truncated/modified on insert)
cur.execute("select test_id from tblbigint where test_bigint = test_bigint2")
result = cur.fetchall()
self.assertEquals([(1, ), (2, )], result)
# Make sure select gets the right values (i.e., the big number was not truncated/modified when retrieved)
cur.execute("select test_id, test_bigint, test_bigint2 from tblbigint where test_bigint = test_bigint2")
result = cur.fetchall()
self.assertEquals([(1, BIGNUM, BIGNUM), (2, BIGNUM, BIGNUM)], result)
def testDate(self):
"""Tests the behaviour of insert/select with mysql/DATE <-> python/datetime.date"""
d_date = datetime.date(2010, 02, 11)
d_string = "2010-02-11"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB,
+ password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tbldate")
cur.execute("create table tbldate (test_id int(11) DEFAULT NULL, test_date date DEFAULT NULL, test_date2 date DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1")
cur.execute("insert into tbldate (test_id, test_date, test_date2) values (%s, '" + d_string + "', %s)", (1, d_date))
# Make sure our insert was correct
cur.execute("select test_id from tbldate where test_date = test_date2")
result = cur.fetchall()
self.assertEquals([(1, )], result)
# Make sure select gets the right value back
cur.execute("select test_id, test_date, test_date2 from tbldate where test_date = test_date2")
result = cur.fetchall()
self.assertEquals([(1, d_date, d_date)], result)
def testDateTime(self):
"""Tests the behaviour of insert/select with mysql/DATETIME <-> python/datetime.datetime"""
d_date = datetime.datetime(2010, 02, 11, 13, 37, 42)
d_string = "2010-02-11 13:37:42"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB,
+ password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tbldate")
cur.execute("create table tbldate (test_id int(11) DEFAULT NULL, test_date datetime DEFAULT NULL, test_date2 datetime DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1")
cur.execute("insert into tbldate (test_id, test_date, test_date2) values (%s, '" + d_string + "', %s)", (1, d_date))
# Make sure our insert was correct
cur.execute("select test_id from tbldate where test_date = test_date2")
result = cur.fetchall()
self.assertEquals([(1, )], result)
# Make sure select gets the right value back
cur.execute("select test_id, test_date, test_date2 from tbldate where test_date = test_date2")
result = cur.fetchall()
self.assertEquals([(1, d_date, d_date)], result)
def testZeroDates(self):
"""Tests the behaviour of zero dates"""
zero_datetime = "0000-00-00 00:00:00"
zero_date = "0000-00-00"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB,
+ password = DB_PASSWD, db = DB_DB,
charset = 'latin-1', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tbldate")
cur.execute("create table tbldate (test_id int(11) DEFAULT NULL, test_date date DEFAULT NULL, test_datetime datetime DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1")
cur.execute("insert into tbldate (test_id, test_date, test_datetime) values (%s, %s, %s)", (1, zero_date, zero_datetime))
# Make sure we get None-values back
cur.execute("select test_id, test_date, test_datetime from tbldate where test_id = 1")
result = cur.fetchall()
self.assertEquals([(1, None, None)], result)
def testUnicodeUTF8(self):
peacesign_unicode = u"\u262e"
peacesign_utf8 = "\xe2\x98\xae"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB,
+ password = DB_PASSWD, db = DB_DB,
charset = 'utf-8', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tblutf")
cur.execute("create table tblutf (test_id int(11) DEFAULT NULL, test_string VARCHAR(32) DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=utf8")
cur.execute("insert into tblutf (test_id, test_string) values (%s, %s)", (1, peacesign_unicode)) # This should be encoded in utf8
cur.execute("insert into tblutf (test_id, test_string) values (%s, %s)", (2, peacesign_utf8))
cur.execute("select test_id, test_string from tblutf")
result = cur.fetchall()
# We expect unicode strings back
self.assertEquals([(1, peacesign_unicode), (2, peacesign_unicode)], result)
def testCharsets(self):
aumlaut_unicode = u"\u00e4"
aumlaut_utf8 = "\xc3\xa4"
aumlaut_latin1 = "\xe4"
cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
- passwd = DB_PASSWD, db = DB_DB,
+ password = DB_PASSWD, db = DB_DB,
charset = 'utf8', use_unicode = True)
cur = cnn.cursor()
cur.execute("drop table if exists tblutf")
cur.execute("create table tblutf (test_mode VARCHAR(32) DEFAULT NULL, test_utf VARCHAR(32) DEFAULT NULL, test_latin1 VARCHAR(32)) ENGINE=MyISAM DEFAULT CHARSET=utf8")
# We insert the same character using two different encodings
cur.execute("set names utf8")
cur.execute("insert into tblutf (test_mode, test_utf, test_latin1) values ('utf8', _utf8'" + aumlaut_utf8 + "', _latin1'" + aumlaut_latin1 + "')")
cur.execute("set names latin1")
cur.execute("insert into tblutf (test_mode, test_utf, test_latin1) values ('latin1', _utf8'" + aumlaut_utf8 + "', _latin1'" + aumlaut_latin1 + "')")
# We expect the driver to always give us unicode strings back
expected = [(u"utf8", aumlaut_unicode, aumlaut_unicode), (u"latin1", aumlaut_unicode, aumlaut_unicode)]
# Fetch and test with different charsets
for charset in ("latin1", "utf8", "cp1250"):
cur.execute("set names " + charset)
cur.execute("select test_mode, test_utf, test_latin1 from tblutf")
result = cur.fetchall()
self.assertEquals(result, expected)
if __name__ == '__main__':
unittest.main()
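With the rename in place, every entry point takes password= as shown in the updated examples and tests above. A minimal DB-API style session, assuming a local test server:

import geventmysql

conn = geventmysql.connect(host = "127.0.0.1", user = "root", password = "")
cur = conn.cursor()
cur.execute("SELECT 1")
print cur.fetchall()
cur.close()
conn.close()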
|
mthurlin/gevent-MySQL
|
0f8cadcc4369db1afe664d498c3ad5987d418f9e
|
Fixed missing gevent import
|
diff --git a/lib/geventmysql/client.py b/lib/geventmysql/client.py
index bd16e0b..09d572f 100644
--- a/lib/geventmysql/client.py
+++ b/lib/geventmysql/client.py
@@ -1,389 +1,390 @@
# Copyright (C) 2009, Hyves (Startphone Ltd.)
#
# This module is part of the Concurrence Framework and is released under
# the New BSD License: http://www.opensource.org/licenses/bsd-license.php
#TODO supporting closing a halfread resultset (e.g. automatically read and discard rest)
from geventmysql._mysql import Buffer
from geventmysql.mysql import BufferedPacketReader, BufferedPacketWriter, PACKET_READ_RESULT, CAPS, COMMAND
import logging
import time
from gevent import socket
+import gevent
# From query: SHOW COLLATION;
charset_map = {}
charset_map["big5"] = 1
charset_map["dec8"] = 3
charset_map["cp850"] = 4
charset_map["hp8"] = 6
charset_map["koi8r"] = 7
charset_map["latin1"] = 8
charset_map["latin1"] = 8
charset_map["latin2"] = 9
charset_map["swe7"] = 10
charset_map["ascii"] = 11
charset_map["ujis"] = 12
charset_map["sjis"] = 13
charset_map["hebrew"] = 16
charset_map["tis620"] = 18
charset_map["euckr"] = 19
charset_map["koi8u"] = 22
charset_map["gb2312"] = 24
charset_map["greek"] = 25
charset_map["cp1250"] = 26
charset_map["gbk"] = 28
charset_map["latin5"] = 30
charset_map["armscii8"] = 32
charset_map["utf8"] = 33
charset_map["utf8"] = 33
charset_map["ucs2"] = 35
charset_map["cp866"] = 36
charset_map["keybcs2"] = 37
charset_map["macce"] = 38
charset_map["macroman"] = 39
charset_map["cp852"] = 40
charset_map["latin7"] = 41
charset_map["cp1251"] = 51
charset_map["cp1256"] = 57
charset_map["cp1257"] = 59
charset_map["binary"] = 63
charset_map["geostd8"] = 92
charset_map["cp932"] = 95
charset_map["eucjpms"] = 97
try:
#python 2.6
import hashlib
SHA = hashlib.sha1
except ImportError:
#python 2.5
import sha
SHA = sha.new
#import time
class ClientError(Exception):
@classmethod
def from_error_packet(cls, packet, skip = 8):
packet.skip(skip)
return cls(packet.read_bytes(packet.remaining))
class ClientLoginError(ClientError): pass
class ClientCommandError(ClientError): pass
class ClientProgrammingError(ClientError): pass
class ResultSet(object):
"""Represents the current resultset being read from a Connection.
The resultset implements an iterator over rows. A Resultset must
be iterated entirely and closed explicitly."""
STATE_INIT = 0
STATE_OPEN = 1
STATE_EOF = 2
STATE_CLOSED = 3
def __init__(self, connection, field_count):
self.state = self.STATE_INIT
self.connection = connection
self.fields = connection.reader.read_fields(field_count)
self.state = self.STATE_OPEN
def __iter__(self):
assert self.state == self.STATE_OPEN, "cannot iterate a resultset when it is not open"
for row in self.connection.reader.read_rows(self.fields):
yield row
self.state = self.STATE_EOF
def close(self, connection_close = False):
"""Closes the current resultset. Make sure you have iterated over all rows before closing it!"""
#print 'close on ResultSet', id(self.connection)
if self.state != self.STATE_EOF and not connection_close:
raise ClientProgrammingError("you can only close a resultset when it was read entirely!")
connection = self.connection
del self.connection
del self.fields
connection._close_current_resultset(self)
self.state = self.STATE_CLOSED
class Connection(object):
"""Represents a single connection to a MySQL Database host."""
STATE_ERROR = -1
STATE_INIT = 0
STATE_CONNECTING = 1
STATE_CONNECTED = 2
STATE_CLOSING = 3
STATE_CLOSED = 4
def __init__(self):
self.state = self.STATE_INIT
self.buffer = Buffer(1024 * 16)
self.socket = None
self.reader = None
self.writer = None
self._time_command = False #whether to keep timing stats on a cmd
self._command_time = -1
self._incommand = False
self.current_resultset = None
def _scramble(self, password, seed):
"""taken from java jdbc driver, scrambles the password using the given seed
according to the mysql login protocol"""
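#(annotation, not in the original source) the scheme below computes
#  token = SHA1(seed + SHA1(SHA1(password))) XOR SHA1(password)
#the server stores SHA1(SHA1(password)), so it can XOR the token with
#SHA1(seed + stored_hash) to recover stage1 and check that SHA1(stage1)
#matches the stored hash, without the password ever crossing the wire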
stage1 = SHA(password).digest()
stage2 = SHA(stage1).digest()
md = SHA()
md.update(seed)
md.update(stage2)
#i love python :-):
return ''.join(map(chr, [x ^ ord(stage1[i]) for i, x in enumerate(map(ord, md.digest()))]))
def _handshake(self, user, password, database, charset):
"""performs the mysql login handshake"""
#init buffer for reading (both pos and lim = 0)
self.buffer.clear()
self.buffer.flip()
#read server welcome
packet = self.reader.read_packet()
self.protocol_version = packet.read_byte() #normally this would be 10 (0xa)
if self.protocol_version == 0xff:
#error on initial greeting, possibly a too-many-connections error
raise ClientLoginError.from_error_packet(packet, skip = 2)
elif self.protocol_version == 0xa:
pass #expected
else:
assert False, "Unexpected protocol version %02x" % self.protocol_version
self.server_version = packet.read_bytes_until(0)
packet.skip(4) #thread_id
scramble_buff = packet.read_bytes(8)
packet.skip(1) #filler
server_caps = packet.read_short()
#CAPS.dbg(server_caps)
if not server_caps & CAPS.PROTOCOL_41:
assert False, "<4.1 auth not supported"
server_language = packet.read_byte()
server_status = packet.read_short()
packet.skip(13) #filler
if packet.remaining:
scramble_buff += packet.read_bytes_until(0)
else:
assert False, "<4.1 auth not supported"
client_caps = server_caps
#always turn off compression
client_caps &= ~CAPS.COMPRESS
client_caps &= ~CAPS.NO_SCHEMA
if not server_caps & CAPS.CONNECT_WITH_DB and database:
assert False, "initial db given but not supported by server"
if server_caps & CAPS.CONNECT_WITH_DB and not database:
client_caps &= ~CAPS.CONNECT_WITH_DB
#build and write our answer to the initial handshake packet
self.writer.clear()
self.writer.start()
self.writer.write_int(client_caps)
self.writer.write_int(1024 * 1024 * 32) #32mb max packet
if charset:
self.writer.write_byte(charset_map[charset.replace("-", "")])
else:
self.writer.write_byte(server_language)
self.writer.write_bytes('\0' * 23) #filler
self.writer.write_bytes(user + '\0')
if password:
self.writer.write_byte(20)
self.writer.write_bytes(self._scramble(password, scramble_buff))
else:
self.writer.write_byte(0)
if database:
self.writer.write_bytes(database + '\0')
self.writer.finish(1)
self.writer.flush()
#read final answer from server
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
if result == 0xff:
raise ClientLoginError.from_error_packet(packet)
elif result == 0xfe:
assert False, "old password handshake not implemented"
def _close_current_resultset(self, resultset):
assert resultset == self.current_resultset
self.current_resultset = None
def _send_command(self, cmd, cmd_text):
"""sends a command with the given text"""
#self.log.debug('cmd %s %s', cmd, cmd_text)
#note: we are not using the normal writer.start/finish here, because the cmd
#might not fit in the buffer, causing flushes in write_string; in that case 'finish'
#would not be able to go back to the packet header to write the length
self.writer.clear()
self.writer.write_header(len(cmd_text) + 1 + 4, 0) #1 is len of cmd, 4 is len of header, 0 is packet number
self.writer.write_byte(cmd)
self.writer.write_bytes(cmd_text)
self.writer.flush()
def _close(self):
#self.log.debug("close mysql client %s", id(self))
try:
self.state = self.STATE_CLOSING
if self.current_resultset:
self.current_resultset.close(True)
self.socket.close()
self.state = self.STATE_CLOSED
except:
self.state = self.STATE_ERROR
raise
def connect(self, host = "localhost", port = 3306, user = "", passwd = "", db = "", autocommit = None, charset = None, use_unicode=False):
"""connects to the given host and port with user and passwd"""
#self.log.debug("connect mysql client %s %s %s %s %s", id(self), host, port, user, passwd)
try:
#print 'connect', host, user, passwd, db
#parse addresses of form str <host:port>
if type(host) == str:
if host[0] == '/': #assume unix domain socket
addr = host
elif ':' in host:
host, port = host.split(':')
port = int(port)
addr = (host, port)
else:
addr = (host, port)
assert self.state == self.STATE_INIT, "make sure connection is not already connected or closed"
self.state = self.STATE_CONNECTING
self.socket = socket.create_connection(addr)
self.reader = BufferedPacketReader(self.socket, self.buffer)
self.writer = BufferedPacketWriter(self.socket, self.buffer)
self._handshake(user, passwd, db, charset)
#handshake complete client can now send commands
self.state = self.STATE_CONNECTED
if autocommit == False:
self.set_autocommit(False)
elif autocommit == True:
self.set_autocommit(True)
else:
pass #whatever is the default of the db (ON in the case of mysql)
if charset is not None:
self.set_charset(charset)
self.set_use_unicode(use_unicode)
return self
except gevent.Timeout:
self.state = self.STATE_INIT
raise
except ClientLoginError:
self.state = self.STATE_INIT
raise
except:
self.state = self.STATE_ERROR
raise
def close(self):
"""close this connection"""
assert self.is_connected(), "make sure connection is connected before closing"
if self._incommand != False: assert False, "cannot close while still in a command"
self._close()
def command(self, cmd, cmd_text):
"""sends a COM_XXX command with the given text and possibly return a resultset (select)"""
#print 'command', cmd, repr(cmd_text), type(cmd_text)
assert type(cmd_text) == str #as opposed to unicode
assert self.is_connected(), "make sure connection is connected before query"
if self._incommand != False: assert False, "overlapped commands not supported"
if self.current_resultset: assert False, "overlapped commands not supported, pls read prev resultset and close it"
try:
self._incommand = True
if self._time_command:
start_time = time.time()
self._send_command(cmd, cmd_text)
#read result, expect 1 of OK, ERROR or result set header
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
#print 'res', result
if self._time_command:
end_time = time.time()
self._command_time = end_time - start_time
if result == 0x00:
#OK, return (affected rows, last row id)
rowcount = self.reader.read_length_coded_binary()
lastrowid = self.reader.read_length_coded_binary()
return (rowcount, lastrowid)
elif result == 0xff:
raise ClientCommandError.from_error_packet(packet)
else: #result set
self.current_resultset = ResultSet(self, result)
return self.current_resultset
finally:
self._incommand = False
def is_connected(self):
return self.state == self.STATE_CONNECTED
def query(self, cmd_text):
"""Sends a COM_QUERY command with the given text and return a resultset (select)"""
return self.command(COMMAND.QUERY, cmd_text)
def init_db(self, cmd_text):
"""Sends a COM_INIT command with the given text"""
return self.command(COMMAND.INITDB, cmd_text)
def set_autocommit(self, commit):
"""Sets autocommit setting for this connection. True = on, False = off"""
self.command(COMMAND.QUERY, "SET AUTOCOMMIT = %s" % ('1' if commit else '0'))
def commit(self):
"""Commits this connection"""
self.command(COMMAND.QUERY, "COMMIT")
def rollback(self):
"""Issues a rollback on this connection"""
self.command(COMMAND.QUERY, "ROLLBACK")
def set_charset(self, charset):
"""Sets the charset for this connections (used to decode string fields into unicode strings)"""
self.reader.reader.encoding = charset
def set_use_unicode(self, use_unicode):
self.reader.reader.use_unicode = use_unicode
def set_time_command(self, time_command):
self._time_command = time_command
def get_command_time(self):
return self._command_time
Connection.log = logging.getLogger(Connection.__name__)
def connect(*args, **kwargs):
return Connection().connect(*args, **kwargs)
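The fix above supplies a module the code already referenced: connect() catches gevent.Timeout and resets the connection to STATE_INIT, but without "import gevent" the except clause itself raised a NameError the first time a handshake timed out. A caller-side sketch of the behaviour this protects (host and credentials are placeholders; gevent.with_timeout is the same helper the test suite uses):

import gevent
from geventmysql import client

try:
    # Bound the whole connect/handshake to 2 seconds.
    cnn = gevent.with_timeout(2, client.connect,
                              host='127.0.0.1:3306', user='gevent_test',
                              passwd='gevent_test', db='gevent_test')
except gevent.Timeout:
    cnn = None  # state was reset to STATE_INIT, so retrying is safe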
|
mthurlin/gevent-MySQL
|
cf5ed52aa09abe8eaf0f04fcd99edc70af290355
|
Escaping speedup
|
diff --git a/lib/geventmysql/__init__.py b/lib/geventmysql/__init__.py
index 0031521..b847e12 100644
--- a/lib/geventmysql/__init__.py
+++ b/lib/geventmysql/__init__.py
@@ -1,244 +1,228 @@
# Copyright (C) 2009, Hyves (Startphone Ltd.)
#
# This module is part of the Concurrence Framework and is released under
# the New BSD License: http://www.opensource.org/licenses/bsd-license.php
#this is a dbapi/mysqldb compatible wrapper around the lowlevel
#client in client.py
#TODO weak ref on connection in cursor
import sys
import logging
import exceptions
import gevent
TaskletExit = gevent.GreenletExit
from datetime import datetime, date
from geventmysql import client
threadsafety = 1
apilevel = "2.0"
paramstyle = "format"
default_charset = sys.getdefaultencoding()
class Error(exceptions.StandardError): pass
class Warning(exceptions.StandardError): pass
class InterfaceError(Error): pass
class DatabaseError(Error): pass
class InternalError(DatabaseError): pass
class OperationalError(DatabaseError): pass
class ProgrammingError(DatabaseError): pass
class IntegrityError(DatabaseError): pass
class DataError(DatabaseError): pass
class NotSupportedError(DatabaseError): pass
class TimeoutError(DatabaseError): pass
class Cursor(object):
log = logging.getLogger('Cursor')
def __init__(self, connection):
self.connection = connection
self.result = None
self.closed = False
self._close_result()
def _close_result(self):
#make sure any previous resultset is closed correctly
if self.result is not None:
#make sure any left over resultset is read from the db, otherwise
#the connection would be in an inconsistent state
try:
while True:
self.result_iter.next()
except StopIteration:
pass #done
self.result.close()
self.description = None
self.result = None
self.result_iter = None
self.lastrowid = None
self.rowcount = -1
- def _escape_string(self, s):
+ def _escape_string(self, s, replace = {'\0': '\\0', '\n': '\\n', '\r': '\\r', '\\': '\\\\', "'": "\\'", '"': '\\"', '\x1a': '\\Z'}):
"""take from mysql src code:"""
- #TODO how fast is this?, do this in C/pyrex?
- escaped = []
- for ch in s:
- if ch == '\0':
- escaped.append('\\0')
- elif ch == '\n':
- escaped.append('\\n')
- elif ch == '\r':
- escaped.append('\\r')
- elif ch == '\\':
- escaped.append('\\\\')
- elif ch == "'": #single quote
- escaped.append("\\'")
- elif ch == '"': #double quote
- escaped.append('\\"')
- elif ch == '\x1a': #EOF on windows
- escaped.append('\\Z')
- else:
- escaped.append(ch)
- return ''.join(escaped)
+ #TODO how fast is this?, do this in C/pyrex?
+ get = replace.get
+ return "".join([get(ch, ch) for ch in s])
+
def _wrap_exception(self, e, msg):
self.log.exception(msg)
if isinstance(e, gevent.Timeout):
return TimeoutError(msg + ': ' + str(e))
else:
return Error(msg + ': ' + str(e))
def execute(self, qry, args = []):
#print repr(qry), repr(args), self.connection.charset
if self.closed:
raise ProgrammingError('this cursor is already closed')
if type(qry) == unicode:
#we will only communicate in 8-bits with mysql
qry = qry.encode(self.connection.charset)
try:
self._close_result() #close any previous result if needed
#substitute arguments
for arg in args:
if type(arg) == str:
qry = qry.replace('%s', "'%s'" % self._escape_string(arg), 1)
elif type(arg) == unicode:
qry = qry.replace('%s', "'%s'" % self._escape_string(arg).encode(self.connection.charset), 1)
elif type(arg) == int:
qry = qry.replace('%s', str(arg), 1)
elif type(arg) == long:
qry = qry.replace('%s', str(arg), 1)
elif arg is None:
qry = qry.replace('%s', 'null', 1)
elif isinstance(arg, datetime):
qry = qry.replace('%s', "'%s'" % arg.strftime('%Y-%m-%d %H:%M:%S'), 1)
elif isinstance(arg, date):
qry = qry.replace('%s', "'%s'" % arg.strftime('%Y-%m-%d'), 1)
else:
assert False, "unknown argument type: %s %s" % (type(arg), repr(arg))
result = self.connection.client.query(qry)
#process result if necessary
if isinstance(result, client.ResultSet):
self.description = tuple(((name, type_code, None, None, None, None, None) for name, type_code, charsetnr in result.fields))
self.result = result
self.result_iter = iter(result)
self.lastrowid = None
self.rowcount = -1
else:
self.rowcount, self.lastrowid = result
self.description = None
self.result = None
except TaskletExit:
raise
except Exception, e:
raise self._wrap_exception(e, "an error occurred while executing qry %s" % (qry, ))
def fetchall(self):
try:
return list(self.result_iter)
except TaskletExit:
raise
except Exception, e:
raise self._wrap_exception(e, "an error occurred while fetching results")
def fetchone(self):
try:
return self.result_iter.next()
except StopIteration:
return None
except TaskletExit:
raise
except Exception, e:
raise self._wrap_exception(e, "an error occurred while fetching results")
def close(self):
if self.closed:
raise ProgrammingError("cannot cursor twice")
try:
self._close_result()
self.closed = True
except TaskletExit:
raise
except Exception, e:
raise self._wrap_exception(e, "an error occurred while closing cursor")
class Connection(object):
def __init__(self, *args, **kwargs):
self.kwargs = kwargs.copy()
if not 'autocommit' in self.kwargs:
#we set autocommit explicitly to OFF as required by python db api, because default of mysql would be ON
self.kwargs['autocommit'] = False
else:
pass #user explicitly specified what they wanted for autocommit
if 'charset' in self.kwargs:
self.charset = self.kwargs['charset']
if 'use_unicode' in self.kwargs and self.kwargs['use_unicode'] == True:
pass #charset stays in args, and triggers unicode output in low-level client
else:
del self.kwargs['charset']
else:
self.charset = default_charset
self.client = client.Connection() #low level mysql client
self.client.connect(*args, **self.kwargs)
self.closed = False
def close(self):
#print 'dbapi Connection close'
if self.closed:
raise ProgrammingError("cannot close connection twice")
try:
self.client.close()
del self.client
self.closed = True
except TaskletExit:
raise
except Exception, e:
msg = "an error occurred while closing connection: "
self.log.exception(msg)
raise Error(msg + str(e))
def cursor(self):
if self.closed:
raise ProgrammingError("this connection is already closed")
return Cursor(self)
def get_server_info(self):
return self.client.server_version
def rollback(self):
self.client.rollback()
def commit(self):
self.client.commit()
@property
def socket(self):
return self.client.socket
def connect(*args, **kwargs):
return Connection(*args, **kwargs)
Connect = connect
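The rewrite above hoists the escape table into a default argument and binds replace.get once, replacing seven chained comparisons per character with a single dict probe. An equivalent regex-based variant (an alternative sketch, not what the driver ships) moves the scan over safe characters into the regex engine entirely:

import re

_ESCAPES = {'\0': '\\0', '\n': '\\n', '\r': '\\r', '\\': '\\\\',
            "'": "\\'", '"': '\\"', '\x1a': '\\Z'}
_ESCAPE_RE = re.compile("[%s]" % re.escape(''.join(_ESCAPES)))

def escape_string(s):
    # Only the seven special characters ever reach the Python callback;
    # runs of safe characters are skipped at C speed by the regex engine.
    return _ESCAPE_RE.sub(lambda m: _ESCAPES[m.group(0)], s)

assert escape_string("pi'et\n") == "pi\\'et\\n"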
|
mthurlin/gevent-MySQL
|
6d2a93e617d280eed122c0c53afe6e1f1110d375
|
More pythonic list append..
|
diff --git a/examples/benchmark.py b/examples/benchmark.py
index 45e09b0..f3e9b0a 100644
--- a/examples/benchmark.py
+++ b/examples/benchmark.py
@@ -1,31 +1,28 @@
import geventmysql
import time
import os
import gevent
curtime = time.time if os.name == "posix" else time.clock
C = 50
N = 1000
def task():
conn = geventmysql.connect(host="127.0.0.1", user="root", passwd="")
cur = conn.cursor()
for i in range(N):
cur.execute("SELECT 1")
res = cur.fetchall()
start = curtime()
-t = []
-for i in range(C):
- t.append(gevent.spawn(task))
-
-gevent.joinall(t)
+
+gevent.joinall([gevent.spawn(task) for i in range(C)])
elapsed = curtime() - start
num = C * N
print "Performed %d queries in %.2f seconds : %.1f queries/sec" % (num, elapsed, num / elapsed)
|
mthurlin/gevent-MySQL
|
27abaa20fc8f186815016de2e47e77fdc54bb705
|
Benchmark example
|
diff --git a/examples/benchmark.py b/examples/benchmark.py
new file mode 100644
index 0000000..45e09b0
--- /dev/null
+++ b/examples/benchmark.py
@@ -0,0 +1,31 @@
+import geventmysql
+import time
+import os
+import gevent
+
+curtime = time.time if os.name == "posix" else time.clock
+
+
+C = 50
+N = 1000
+
+
+def task():
+ conn = geventmysql.connect(host="127.0.0.1", user="root", passwd="")
+ cur = conn.cursor()
+ for i in range(N):
+ cur.execute("SELECT 1")
+ res = cur.fetchall()
+
+
+start = curtime()
+t = []
+for i in range(C):
+ t.append(gevent.spawn(task))
+
+gevent.joinall(t)
+
+elapsed = curtime() - start
+num = C * N
+
+print "Performed %d queries in %.2f seconds : %.1f queries/sec" % (num, elapsed, num / elapsed)
|
mthurlin/gevent-MySQL
|
962d7524018fd5940696508303b4d17d2fe484eb
|
Added tests
|
diff --git a/test/gevent_test.sql b/test/gevent_test.sql
new file mode 100644
index 0000000..f70e4b9
--- /dev/null
+++ b/test/gevent_test.sql
@@ -0,0 +1,21 @@
+use gevent_test;
+
+CREATE TABLE `tbltest` (
+ `test_id` int(11),
+ `test_string` varchar(1024),
+ `test_blob` longblob
+) ENGINE=MyISAM DEFAULT CHARSET=latin1;
+
+CREATE TABLE `tblautoincint` (
+ `test_id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
+ `test_string` varchar(1024),
+ PRIMARY KEY(test_id)
+) ENGINE=MyISAM DEFAULT CHARSET=latin1;
+
+CREATE TABLE `tblautoincbigint` (
+ `test_id` BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
+ `test_string` varchar(1024),
+ PRIMARY KEY(test_id)
+) ENGINE=MyISAM DEFAULT CHARSET=latin1;
+
+GRANT ALL on gevent_test.* to 'gevent_test'@'localhost' identified by 'gevent_test';
diff --git a/test/testmysql.py b/test/testmysql.py
new file mode 100644
index 0000000..a6a459f
--- /dev/null
+++ b/test/testmysql.py
@@ -0,0 +1,613 @@
+# -*- coding: latin1 -*-
+from __future__ import with_statement
+
+import time
+import datetime
+import logging
+import unittest
+import gevent
+
+import geventmysql as dbapi
+from geventmysql import client
+from geventmysql._mysql import PacketReadError
+
+DB_HOST = '127.0.0.1:3306'
+DB_USER = 'gevent_test'
+DB_PASSWD = 'gevent_test'
+DB_DB = 'gevent_test'
+
+class TestMySQL(unittest.TestCase):
+ log = logging.getLogger('TestMySQL')
+
+ def testMySQLClient(self):
+ cnn = client.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB)
+
+ rs = cnn.query("select 1")
+
+ self.assertEqual([(1,)], list(rs))
+
+ rs.close()
+ cnn.close()
+
+ def testConnectNoDb(self):
+ cnn = client.connect(host = DB_HOST, user = DB_USER, passwd = DB_PASSWD)
+
+ rs = cnn.query("select 1")
+
+ self.assertEqual([(1,)], list(rs))
+
+ rs.close()
+ cnn.close()
+
+
+ def testMySQLClient2(self):
+ cnn = client.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB)
+
+ cnn.query("truncate tbltest")
+
+ for i in range(10):
+ self.assertEquals((1, 0), cnn.query("insert into tbltest (test_id, test_string) values (%d, 'test%d')" % (i, i)))
+
+ rs = cnn.query("select test_id, test_string from tbltest")
+
+        #trying to close it now would give an error, i.e. we always need to read
+        #the result from the database, otherwise the connection would be in the wrong state
+ try:
+ rs.close()
+ self.fail('expected exception')
+ except client.ClientProgrammingError:
+ pass
+
+ for i, row in enumerate(rs):
+ self.assertEquals((i, 'test%d' % i), row)
+
+ rs.close()
+ cnn.close()
+
+ def testMySQLTimeout(self):
+ cnn = client.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB)
+
+ rs = cnn.query("select sleep(2)")
+ list(rs)
+ rs.close()
+
+ from gevent import Timeout
+
+ start = time.time()
+ try:
+ def delay():
+ cnn.query("select sleep(4)")
+ self.fail('expected timeout')
+ gevent.with_timeout(2, delay)
+ except Timeout:
+ end = time.time()
+ self.assertAlmostEqual(2.0, end - start, places = 1)
+
+ cnn.close()
+
+ def testParallelQuery(self):
+
+ def query(s):
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB)
+ cur = cnn.cursor()
+ cur.execute("select sleep(%d)" % s)
+ cur.close()
+ cnn.close()
+
+ start = time.time()
+ ch1 = gevent.spawn(query, 1)
+ ch2 = gevent.spawn(query, 2)
+ ch3 = gevent.spawn(query, 3)
+ gevent.joinall([ch1, ch2, ch3])
+
+ end = time.time()
+ self.assertAlmostEqual(3.0, end - start, places = 1)
+
+ def testMySQLDBAPI(self):
+
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB)
+
+ cur = cnn.cursor()
+
+ cur.execute("truncate tbltest")
+
+ for i in range(10):
+ cur.execute("insert into tbltest (test_id, test_string) values (%d, 'test%d')" % (i, i))
+
+ cur.close()
+
+ cur = cnn.cursor()
+
+ cur.execute("select test_id, test_string from tbltest")
+
+ self.assertEquals((0, 'test0'), cur.fetchone())
+
+ #check that fetchall gets the remainder
+ self.assertEquals([(1, 'test1'), (2, 'test2'), (3, 'test3'), (4, 'test4'), (5, 'test5'), (6, 'test6'), (7, 'test7'), (8, 'test8'), (9, 'test9')], cur.fetchall())
+
+ #another query on the same cursor should work
+ cur.execute("select test_id, test_string from tbltest")
+
+ #fetch some but not all
+ self.assertEquals((0, 'test0'), cur.fetchone())
+ self.assertEquals((1, 'test1'), cur.fetchone())
+ self.assertEquals((2, 'test2'), cur.fetchone())
+
+        #close should work even with a half-read resultset
+ cur.close()
+
+ #this should not work, cursor was closed
+ try:
+ cur.execute("select * from tbltest")
+ self.fail("expected exception")
+ except dbapi.ProgrammingError:
+ pass
+
+ def testLargePackets(self):
+ cnn = client.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB)
+
+
+ cnn.query("truncate tbltest")
+
+ c = cnn.buffer.capacity
+
+ blob = '0123456789'
+ while 1:
+ cnn.query("insert into tbltest (test_id, test_blob) values (%d, '%s')" % (len(blob), blob))
+ if len(blob) > (c * 2): break
+ blob = blob * 2
+
+ rs = cnn.query("select test_id, test_blob from tbltest")
+ for row in rs:
+ self.assertEquals(row[0], len(row[1]))
+ self.assertEquals(blob[:row[0]], row[1])
+ rs.close()
+
+ #reread, second time, oversize packet is already present
+ rs = cnn.query("select test_id, test_blob from tbltest")
+ for row in rs:
+ self.assertEquals(row[0], len(row[1]))
+ self.assertEquals(blob[:row[0]], row[1])
+ rs.close()
+ cnn.close()
+
+ #have a very low max packet size for oversize packets
+ #and check that exception is thrown when trying to read larger packets
+ from geventmysql import _mysql
+ _mysql.MAX_PACKET_SIZE = 1024 * 4
+
+ cnn = client.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB)
+
+ try:
+ rs = cnn.query("select test_id, test_blob from tbltest")
+ for row in rs:
+ self.assertEquals(row[0], len(row[1]))
+ self.assertEquals(blob[:row[0]], row[1])
+ self.fail()
+ except PacketReadError:
+ pass
+ finally:
+ try:
+ rs.close()
+ except:
+ pass
+ cnn.close()
+
+ def testEscapeArgs(self):
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB)
+
+ cur = cnn.cursor()
+
+ cur.execute("truncate tbltest")
+
+ cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (1, 'piet'))
+ cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (2, 'klaas'))
+ cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (3, "pi'et"))
+
+ #classic sql injection, would return all rows if no proper escaping is done
+ cur.execute("select test_id, test_string from tbltest where test_string = %s", ("piet' OR 'a' = 'a",))
+ self.assertEquals([], cur.fetchall()) #assert no rows are found
+
+ #but we should still be able to find the piet with the apostrophe in its name
+ cur.execute("select test_id, test_string from tbltest where test_string = %s", ("pi'et",))
+ self.assertEquals([(3, "pi'et")], cur.fetchall())
+
+ #also we should be able to insert and retrieve blob/string with all possible bytes transparently
+ chars = ''.join([chr(i) for i in range(256)])
+
+
+ cur.execute("insert into tbltest (test_id, test_string, test_blob) values (%s, %s, %s)", (4, chars, chars))
+
+ cur.execute("select test_string, test_blob from tbltest where test_id = %s", (4,))
+ #self.assertEquals([(chars, chars)], cur.fetchall())
+ s, b = cur.fetchall()[0]
+
+ #test blob
+ self.assertEquals(256, len(b))
+ self.assertEquals(chars, b)
+
+ #test string
+ self.assertEquals(256, len(s))
+ self.assertEquals(chars, s)
+
+ cur.close()
+
+ cnn.close()
+
+
+ def testSelectUnicode(self):
+ s = u'r\xc3\xa4ksm\xc3\xb6rg\xc3\xa5s'
+
+
+
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB,
+ charset = 'latin-1', use_unicode = True)
+
+ cur = cnn.cursor()
+
+ cur.execute("truncate tbltest")
+ cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (1, 'piet'))
+ cur.execute("insert into tbltest (test_id, test_string) values (%s, %s)", (2, s))
+ cur.execute(u"insert into tbltest (test_id, test_string) values (%s, %s)", (3, s))
+
+ cur.execute("select test_id, test_string from tbltest")
+
+ result = cur.fetchall()
+
+ self.assertEquals([(1, u'piet'), (2, s), (3, s)], result)
+
+        #test that we can still cleanly roundtrip a blob (it should not be encoded if we pass
+        #it as a 'str' argument), even though we pass the qry itself as unicode
+ blob = ''.join([chr(i) for i in range(256)])
+
+ cur.execute(u"insert into tbltest (test_id, test_blob) values (%s, %s)", (4, blob))
+ cur.execute("select test_blob from tbltest where test_id = %s", (4,))
+ b2 = cur.fetchall()[0][0]
+ self.assertEquals(str, type(b2))
+ self.assertEquals(256, len(b2))
+ self.assertEquals(blob, b2)
+
+ def testAutoInc(self):
+
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB)
+
+ cur = cnn.cursor()
+
+ cur.execute("truncate tblautoincint")
+
+ cur.execute("ALTER TABLE tblautoincint AUTO_INCREMENT = 100")
+ cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
+ self.assertEqual(1, cur.rowcount)
+ self.assertEqual(100, cur.lastrowid)
+ cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
+ self.assertEqual(1, cur.rowcount)
+ self.assertEqual(101, cur.lastrowid)
+
+ cur.execute("ALTER TABLE tblautoincint AUTO_INCREMENT = 4294967294")
+ cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
+ self.assertEqual(1, cur.rowcount)
+ self.assertEqual(4294967294, cur.lastrowid)
+ cur.execute("insert into tblautoincint (test_string) values (%s)", ('piet',))
+ self.assertEqual(1, cur.rowcount)
+ self.assertEqual(4294967295, cur.lastrowid)
+
+ cur.execute("truncate tblautoincbigint")
+
+ cur.execute("ALTER TABLE tblautoincbigint AUTO_INCREMENT = 100")
+ cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
+ self.assertEqual(1, cur.rowcount)
+ self.assertEqual(100, cur.lastrowid)
+ cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
+ self.assertEqual(1, cur.rowcount)
+ self.assertEqual(101, cur.lastrowid)
+
+ cur.execute("ALTER TABLE tblautoincbigint AUTO_INCREMENT = 18446744073709551614")
+ cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
+ self.assertEqual(1, cur.rowcount)
+ self.assertEqual(18446744073709551614, cur.lastrowid)
+ #this fails on mysql, but that is a mysql problem
+ #cur.execute("insert into tblautoincbigint (test_string) values (%s)", ('piet',))
+ #self.assertEqual(1, cur.rowcount)
+ #self.assertEqual(18446744073709551615, cur.lastrowid)
+
+ cur.close()
+ cnn.close()
+
+ def testLengthCodedBinary(self):
+
+ from geventmysql._mysql import Buffer, BufferUnderflowError
+ from geventmysql.mysql import PacketReader
+
+ def create_reader(bytes):
+ b = Buffer(1024)
+ for byte in bytes:
+ b.write_byte(byte)
+ b.flip()
+
+ p = PacketReader(b)
+ p.packet.position = b.position
+ p.packet.limit = b.limit
+ return p
+
+ p = create_reader([100])
+ self.assertEquals(100, p.read_length_coded_binary())
+ self.assertEquals(p.packet.position, p.packet.limit)
+ try:
+ p.read_length_coded_binary()
+ except BufferUnderflowError:
+ pass
+ except:
+ self.fail('expected underflow')
+
+ try:
+ p = create_reader([252])
+ p.read_length_coded_binary()
+ self.fail('expected underflow')
+ except BufferUnderflowError:
+ pass
+ except:
+ self.fail('expected underflow')
+
+ try:
+ p = create_reader([252, 0xff])
+ p.read_length_coded_binary()
+ self.fail('expected underflow')
+ except BufferUnderflowError:
+ pass
+ except:
+ self.fail('expected underflow')
+
+ p = create_reader([252, 0xff, 0xff])
+ self.assertEquals(0xFFFF, p.read_length_coded_binary())
+ self.assertEquals(3, p.packet.limit)
+ self.assertEquals(3, p.packet.position)
+
+
+ try:
+ p = create_reader([253])
+ p.read_length_coded_binary()
+ self.fail('expected underflow')
+ except BufferUnderflowError:
+ pass
+ except:
+ self.fail('expected underflow')
+
+ try:
+ p = create_reader([253, 0xff])
+ p.read_length_coded_binary()
+ self.fail('expected underflow')
+ except BufferUnderflowError:
+ pass
+ except:
+ self.fail('expected underflow')
+
+ try:
+ p = create_reader([253, 0xff, 0xff])
+ p.read_length_coded_binary()
+ self.fail('expected underflow')
+ except BufferUnderflowError:
+ pass
+ except:
+ self.fail('expected underflow')
+
+ p = create_reader([253, 0xff, 0xff, 0xff])
+ self.assertEquals(0xFFFFFF, p.read_length_coded_binary())
+ self.assertEquals(4, p.packet.limit)
+ self.assertEquals(4, p.packet.position)
+
+ try:
+ p = create_reader([254])
+ p.read_length_coded_binary()
+ self.fail('expected underflow')
+ except BufferUnderflowError:
+ pass
+ except:
+ self.fail('expected underflow')
+
+ try:
+ p = create_reader([254, 0xff])
+ p.read_length_coded_binary()
+ self.fail('expected underflow')
+ except BufferUnderflowError:
+ pass
+ except:
+ self.fail('expected underflow')
+
+ try:
+ p = create_reader([254, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff])
+ p.read_length_coded_binary()
+ self.fail('expected underflow')
+ except BufferUnderflowError:
+ pass
+ except:
+ self.fail('expected underflow')
+
+ p = create_reader([254, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff])
+
+ self.assertEquals(9, p.packet.limit)
+ self.assertEquals(0, p.packet.position)
+ self.assertEquals(0xFFFFFFFFFFFFFFFFL, p.read_length_coded_binary())
+ self.assertEquals(9, p.packet.limit)
+ self.assertEquals(9, p.packet.position)
+
+
+ def testBigInt(self):
+ """Tests the behaviour of insert/select with bigint/long."""
+
+ BIGNUM = 112233445566778899
+
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB,
+ charset = 'latin-1', use_unicode = True)
+
+ cur = cnn.cursor()
+
+ cur.execute("drop table if exists tblbigint")
+ cur.execute("""create table tblbigint (
+ test_id int(11) DEFAULT NULL,
+ test_bigint bigint DEFAULT NULL,
+ test_bigint2 bigint DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1""")
+ cur.execute("insert into tblbigint (test_id, test_bigint, test_bigint2) values (%s, " + str(BIGNUM) + ", %s)", (1, BIGNUM))
+ cur.execute(u"insert into tblbigint (test_id, test_bigint, test_bigint2) values (%s, " + str(BIGNUM) + ", %s)", (2, BIGNUM))
+
+
+        # Make sure both our inserts were correct (i.e., the big number was not truncated/modified on insert)
+ cur.execute("select test_id from tblbigint where test_bigint = test_bigint2")
+ result = cur.fetchall()
+ self.assertEquals([(1, ), (2, )], result)
+
+
+        # Make sure select gets the right values (i.e., the big number was not truncated/modified when retrieved)
+ cur.execute("select test_id, test_bigint, test_bigint2 from tblbigint where test_bigint = test_bigint2")
+ result = cur.fetchall()
+ self.assertEquals([(1, BIGNUM, BIGNUM), (2, BIGNUM, BIGNUM)], result)
+
+
+ def testDate(self):
+ """Tests the behaviour of insert/select with mysql/DATE <-> python/datetime.date"""
+
+ d_date = datetime.date(2010, 02, 11)
+ d_string = "2010-02-11"
+
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB,
+ charset = 'latin-1', use_unicode = True)
+
+ cur = cnn.cursor()
+
+ cur.execute("drop table if exists tbldate")
+ cur.execute("create table tbldate (test_id int(11) DEFAULT NULL, test_date date DEFAULT NULL, test_date2 date DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1")
+
+ cur.execute("insert into tbldate (test_id, test_date, test_date2) values (%s, '" + d_string + "', %s)", (1, d_date))
+
+ # Make sure our insert was correct
+ cur.execute("select test_id from tbldate where test_date = test_date2")
+ result = cur.fetchall()
+ self.assertEquals([(1, )], result)
+
+ # Make sure select gets the right value back
+ cur.execute("select test_id, test_date, test_date2 from tbldate where test_date = test_date2")
+ result = cur.fetchall()
+ self.assertEquals([(1, d_date, d_date)], result)
+
+ def testDateTime(self):
+ """Tests the behaviour of insert/select with mysql/DATETIME <-> python/datetime.datetime"""
+
+ d_date = datetime.datetime(2010, 02, 11, 13, 37, 42)
+ d_string = "2010-02-11 13:37:42"
+
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB,
+ charset = 'latin-1', use_unicode = True)
+
+ cur = cnn.cursor()
+
+ cur.execute("drop table if exists tbldate")
+ cur.execute("create table tbldate (test_id int(11) DEFAULT NULL, test_date datetime DEFAULT NULL, test_date2 datetime DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1")
+
+ cur.execute("insert into tbldate (test_id, test_date, test_date2) values (%s, '" + d_string + "', %s)", (1, d_date))
+
+ # Make sure our insert was correct
+ cur.execute("select test_id from tbldate where test_date = test_date2")
+ result = cur.fetchall()
+ self.assertEquals([(1, )], result)
+
+ # Make sure select gets the right value back
+ cur.execute("select test_id, test_date, test_date2 from tbldate where test_date = test_date2")
+ result = cur.fetchall()
+ self.assertEquals([(1, d_date, d_date)], result)
+
+ def testZeroDates(self):
+ """Tests the behaviour of zero dates"""
+
+ zero_datetime = "0000-00-00 00:00:00"
+ zero_date = "0000-00-00"
+
+
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB,
+ charset = 'latin-1', use_unicode = True)
+
+ cur = cnn.cursor()
+
+ cur.execute("drop table if exists tbldate")
+ cur.execute("create table tbldate (test_id int(11) DEFAULT NULL, test_date date DEFAULT NULL, test_datetime datetime DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1")
+
+ cur.execute("insert into tbldate (test_id, test_date, test_datetime) values (%s, %s, %s)", (1, zero_date, zero_datetime))
+
+ # Make sure we get None-values back
+ cur.execute("select test_id, test_date, test_datetime from tbldate where test_id = 1")
+ result = cur.fetchall()
+ self.assertEquals([(1, None, None)], result)
+
+ def testUnicodeUTF8(self):
+ peacesign_unicode = u"\u262e"
+ peacesign_utf8 = "\xe2\x98\xae"
+
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB,
+ charset = 'utf-8', use_unicode = True)
+
+ cur = cnn.cursor()
+ cur.execute("drop table if exists tblutf")
+ cur.execute("create table tblutf (test_id int(11) DEFAULT NULL, test_string VARCHAR(32) DEFAULT NULL) ENGINE=MyISAM DEFAULT CHARSET=utf8")
+
+ cur.execute("insert into tblutf (test_id, test_string) values (%s, %s)", (1, peacesign_unicode)) # This should be encoded in utf8
+ cur.execute("insert into tblutf (test_id, test_string) values (%s, %s)", (2, peacesign_utf8))
+
+ cur.execute("select test_id, test_string from tblutf")
+ result = cur.fetchall()
+
+ # We expect unicode strings back
+ self.assertEquals([(1, peacesign_unicode), (2, peacesign_unicode)], result)
+
+ def testCharsets(self):
+ aumlaut_unicode = u"\u00e4"
+ aumlaut_utf8 = "\xc3\xa4"
+ aumlaut_latin1 = "\xe4"
+
+
+ cnn = dbapi.connect(host = DB_HOST, user = DB_USER,
+ passwd = DB_PASSWD, db = DB_DB,
+ charset = 'utf8', use_unicode = True)
+
+ cur = cnn.cursor()
+ cur.execute("drop table if exists tblutf")
+ cur.execute("create table tblutf (test_mode VARCHAR(32) DEFAULT NULL, test_utf VARCHAR(32) DEFAULT NULL, test_latin1 VARCHAR(32)) ENGINE=MyISAM DEFAULT CHARSET=utf8")
+
+ # We insert the same character using two different encodings
+ cur.execute("set names utf8")
+ cur.execute("insert into tblutf (test_mode, test_utf, test_latin1) values ('utf8', _utf8'" + aumlaut_utf8 + "', _latin1'" + aumlaut_latin1 + "')")
+
+ cur.execute("set names latin1")
+ cur.execute("insert into tblutf (test_mode, test_utf, test_latin1) values ('latin1', _utf8'" + aumlaut_utf8 + "', _latin1'" + aumlaut_latin1 + "')")
+
+ # We expect the driver to always give us unicode strings back
+ expected = [(u"utf8", aumlaut_unicode, aumlaut_unicode), (u"latin1", aumlaut_unicode, aumlaut_unicode)]
+
+ # Fetch and test with different charsets
+ for charset in ("latin1", "utf8", "cp1250"):
+ cur.execute("set names " + charset)
+ cur.execute("select test_mode, test_utf, test_latin1 from tblutf")
+ result = cur.fetchall()
+ self.assertEquals(result, expected)
+
+
+
+
+
+if __name__ == '__main__':
+ unittest.main()
+
+
+
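testLengthCodedBinary above pins down MySQL's length-coded integer encoding: a first byte below 251 is the value itself, 252 prefixes a 2-byte little-endian value, 253 a 3-byte value, 254 an 8-byte value, and 251 marks SQL NULL. A standalone decoder sketch (not the driver's implementation, which reads from its Buffer and raises BufferUnderflowError on truncation):

import struct

def read_length_coded_binary(data, pos=0):
    # Returns (value, new_pos) for the length-coded integer at data[pos:].
    first = ord(data[pos])
    if first < 251:
        return first, pos + 1
    if first == 252:  # 2-byte little-endian
        return struct.unpack_from('<H', data, pos + 1)[0], pos + 3
    if first == 253:  # 3-byte little-endian
        lo, hi = struct.unpack_from('<HB', data, pos + 1)
        return lo | (hi << 16), pos + 4
    if first == 254:  # 8-byte little-endian
        return struct.unpack_from('<Q', data, pos + 1)[0], pos + 9
    raise ValueError('0xfb (251) marks a NULL column, not a length')

assert read_length_coded_binary('\x64') == (100, 1)
assert read_length_coded_binary('\xfc\xff\xff') == (0xFFFF, 3)
assert read_length_coded_binary('\xfd\xff\xff\xff') == (0xFFFFFF, 4)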
|
mthurlin/gevent-MySQL
|
41b50bcd8c086b0dd1b1e593e3cd0e9425fa69f1
|
Fixed a reference to TimeoutError
|
diff --git a/lib/geventmysql/client.py b/lib/geventmysql/client.py
index 2a1cd40..bd16e0b 100644
--- a/lib/geventmysql/client.py
+++ b/lib/geventmysql/client.py
@@ -1,389 +1,389 @@
# Copyright (C) 2009, Hyves (Startphone Ltd.)
#
# This module is part of the Concurrence Framework and is released under
# the New BSD License: http://www.opensource.org/licenses/bsd-license.php
#TODO supporting closing a halfread resultset (e.g. automatically read and discard rest)
from geventmysql._mysql import Buffer
from geventmysql.mysql import BufferedPacketReader, BufferedPacketWriter, PACKET_READ_RESULT, CAPS, COMMAND
import logging
import time
from gevent import socket
# From query: SHOW COLLATION;
charset_map = {}
charset_map["big5"] = 1
charset_map["dec8"] = 3
charset_map["cp850"] = 4
charset_map["hp8"] = 6
charset_map["koi8r"] = 7
charset_map["latin1"] = 8
charset_map["latin1"] = 8
charset_map["latin2"] = 9
charset_map["swe7"] = 10
charset_map["ascii"] = 11
charset_map["ujis"] = 12
charset_map["sjis"] = 13
charset_map["hebrew"] = 16
charset_map["tis620"] = 18
charset_map["euckr"] = 19
charset_map["koi8u"] = 22
charset_map["gb2312"] = 24
charset_map["greek"] = 25
charset_map["cp1250"] = 26
charset_map["gbk"] = 28
charset_map["latin5"] = 30
charset_map["armscii8"] = 32
charset_map["utf8"] = 33
charset_map["utf8"] = 33
charset_map["ucs2"] = 35
charset_map["cp866"] = 36
charset_map["keybcs2"] = 37
charset_map["macce"] = 38
charset_map["macroman"] = 39
charset_map["cp852"] = 40
charset_map["latin7"] = 41
charset_map["cp1251"] = 51
charset_map["cp1256"] = 57
charset_map["cp1257"] = 59
charset_map["binary"] = 63
charset_map["geostd8"] = 92
charset_map["cp932"] = 95
charset_map["eucjpms"] = 97
try:
#python 2.6
import hashlib
SHA = hashlib.sha1
except ImportError:
#python 2.5
import sha
SHA = sha.new
#import time
class ClientError(Exception):
@classmethod
def from_error_packet(cls, packet, skip = 8):
packet.skip(skip)
return cls(packet.read_bytes(packet.remaining))
class ClientLoginError(ClientError): pass
class ClientCommandError(ClientError): pass
class ClientProgrammingError(ClientError): pass
class ResultSet(object):
"""Represents the current resultset being read from a Connection.
The resultset implements an iterator over rows. A Resultset must
be iterated entirely and closed explicitly."""
STATE_INIT = 0
STATE_OPEN = 1
STATE_EOF = 2
STATE_CLOSED = 3
def __init__(self, connection, field_count):
self.state = self.STATE_INIT
self.connection = connection
self.fields = connection.reader.read_fields(field_count)
self.state = self.STATE_OPEN
def __iter__(self):
assert self.state == self.STATE_OPEN, "cannot iterate a resultset when it is not open"
for row in self.connection.reader.read_rows(self.fields):
yield row
self.state = self.STATE_EOF
def close(self, connection_close = False):
"""Closes the current resultset. Make sure you have iterated over all rows before closing it!"""
#print 'close on ResultSet', id(self.connection)
if self.state != self.STATE_EOF and not connection_close:
raise ClientProgrammingError("you can only close a resultset when it was read entirely!")
connection = self.connection
del self.connection
del self.fields
connection._close_current_resultset(self)
self.state = self.STATE_CLOSED
class Connection(object):
"""Represents a single connection to a MySQL Database host."""
STATE_ERROR = -1
STATE_INIT = 0
STATE_CONNECTING = 1
STATE_CONNECTED = 2
STATE_CLOSING = 3
STATE_CLOSED = 4
def __init__(self):
self.state = self.STATE_INIT
self.buffer = Buffer(1024 * 16)
self.socket = None
self.reader = None
self.writer = None
self._time_command = False #whether to keep timing stats on a cmd
self._command_time = -1
self._incommand = False
self.current_resultset = None
def _scramble(self, password, seed):
"""taken from java jdbc driver, scrambles the password using the given seed
according to the mysql login protocol"""
stage1 = SHA(password).digest()
stage2 = SHA(stage1).digest()
md = SHA()
md.update(seed)
md.update(stage2)
#i love python :-):
return ''.join(map(chr, [x ^ ord(stage1[i]) for i, x in enumerate(map(ord, md.digest()))]))
def _handshake(self, user, password, database, charset):
"""performs the mysql login handshake"""
#init buffer for reading (both pos and lim = 0)
self.buffer.clear()
self.buffer.flip()
#read server welcome
packet = self.reader.read_packet()
self.protocol_version = packet.read_byte() #normally this would be 10 (0xa)
if self.protocol_version == 0xff:
#error on initial greeting, possibly a too-many-connections error
raise ClientLoginError.from_error_packet(packet, skip = 2)
elif self.protocol_version == 0xa:
pass #expected
else:
assert False, "Unexpected protocol version %02x" % self.protocol_version
self.server_version = packet.read_bytes_until(0)
packet.skip(4) #thread_id
scramble_buff = packet.read_bytes(8)
packet.skip(1) #filler
server_caps = packet.read_short()
#CAPS.dbg(server_caps)
if not server_caps & CAPS.PROTOCOL_41:
assert False, "<4.1 auth not supported"
server_language = packet.read_byte()
server_status = packet.read_short()
packet.skip(13) #filler
if packet.remaining:
scramble_buff += packet.read_bytes_until(0)
else:
assert False, "<4.1 auth not supported"
client_caps = server_caps
#always turn off compression
client_caps &= ~CAPS.COMPRESS
client_caps &= ~CAPS.NO_SCHEMA
if not server_caps & CAPS.CONNECT_WITH_DB and database:
assert False, "initial db given but not supported by server"
if server_caps & CAPS.CONNECT_WITH_DB and not database:
client_caps &= ~CAPS.CONNECT_WITH_DB
#build and write our answer to the initial handshake packet
self.writer.clear()
self.writer.start()
self.writer.write_int(client_caps)
self.writer.write_int(1024 * 1024 * 32) #32mb max packet
if charset:
self.writer.write_byte(charset_map[charset.replace("-", "")])
else:
self.writer.write_byte(server_language)
self.writer.write_bytes('\0' * 23) #filler
self.writer.write_bytes(user + '\0')
if password:
self.writer.write_byte(20)
self.writer.write_bytes(self._scramble(password, scramble_buff))
else:
self.writer.write_byte(0)
if database:
self.writer.write_bytes(database + '\0')
self.writer.finish(1)
self.writer.flush()
#read final answer from server
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
if result == 0xff:
raise ClientLoginError.from_error_packet(packet)
elif result == 0xfe:
assert False, "old password handshake not implemented"
def _close_current_resultset(self, resultset):
assert resultset == self.current_resultset
self.current_resultset = None
def _send_command(self, cmd, cmd_text):
"""sends a command with the given text"""
#self.log.debug('cmd %s %s', cmd, cmd_text)
#note: we are not using the normal writer.start/finish here, because the cmd
#might not fit in the buffer, causing flushes in write_string; in that case 'finish'
#would not be able to go back to the packet header to write the length
self.writer.clear()
self.writer.write_header(len(cmd_text) + 1 + 4, 0) #1 is len of cmd, 4 is len of header, 0 is packet number
self.writer.write_byte(cmd)
self.writer.write_bytes(cmd_text)
self.writer.flush()
def _close(self):
#self.log.debug("close mysql client %s", id(self))
try:
self.state = self.STATE_CLOSING
if self.current_resultset:
self.current_resultset.close(True)
self.socket.close()
self.state = self.STATE_CLOSED
except:
self.state = self.STATE_ERROR
raise
def connect(self, host = "localhost", port = 3306, user = "", passwd = "", db = "", autocommit = None, charset = None, use_unicode=False):
"""connects to the given host and port with user and passwd"""
#self.log.debug("connect mysql client %s %s %s %s %s", id(self), host, port, user, passwd)
try:
#print 'connect', host, user, passwd, db
#parse addresses of form str <host:port>
if type(host) == str:
if host[0] == '/': #assume unix domain socket
addr = host
elif ':' in host:
host, port = host.split(':')
port = int(port)
addr = (host, port)
else:
addr = (host, port)
assert self.state == self.STATE_INIT, "make sure connection is not already connected or closed"
self.state = self.STATE_CONNECTING
self.socket = socket.create_connection(addr)
self.reader = BufferedPacketReader(self.socket, self.buffer)
self.writer = BufferedPacketWriter(self.socket, self.buffer)
self._handshake(user, passwd, db, charset)
#handshake complete client can now send commands
self.state = self.STATE_CONNECTED
if autocommit == False:
self.set_autocommit(False)
elif autocommit == True:
self.set_autocommit(True)
else:
pass #whatever is the default of the db (ON in the case of mysql)
if charset is not None:
self.set_charset(charset)
self.set_use_unicode(use_unicode)
return self
- #except TimeoutError: TODO
- # self.state = self.STATE_INIT
- # raise
+ except gevent.Timeout:
+ self.state = self.STATE_INIT
+ raise
except ClientLoginError:
self.state = self.STATE_INIT
raise
except:
self.state = self.STATE_ERROR
raise
def close(self):
"""close this connection"""
assert self.is_connected(), "make sure connection is connected before closing"
if self._incommand != False: assert False, "cannot close while still in a command"
self._close()
def command(self, cmd, cmd_text):
"""sends a COM_XXX command with the given text and possibly return a resultset (select)"""
#print 'command', cmd, repr(cmd_text), type(cmd_text)
assert type(cmd_text) == str #as opposed to unicode
assert self.is_connected(), "make sure connection is connected before query"
if self._incommand != False: assert False, "overlapped commands not supported"
if self.current_resultset: assert False, "overlapped commands not supported, pls read prev resultset and close it"
try:
self._incommand = True
if self._time_command:
start_time = time.time()
self._send_command(cmd, cmd_text)
#read result, expect 1 of OK, ERROR or result set header
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
#print 'res', result
if self._time_command:
end_time = time.time()
self._command_time = end_time - start_time
if result == 0x00:
#OK, return (affected rows, last row id)
rowcount = self.reader.read_length_coded_binary()
lastrowid = self.reader.read_length_coded_binary()
return (rowcount, lastrowid)
elif result == 0xff:
raise ClientCommandError.from_error_packet(packet)
else: #result set
self.current_resultset = ResultSet(self, result)
return self.current_resultset
finally:
self._incommand = False
def is_connected(self):
return self.state == self.STATE_CONNECTED
def query(self, cmd_text):
"""Sends a COM_QUERY command with the given text and return a resultset (select)"""
return self.command(COMMAND.QUERY, cmd_text)
def init_db(self, cmd_text):
"""Sends a COM_INIT command with the given text"""
return self.command(COMMAND.INITDB, cmd_text)
def set_autocommit(self, commit):
"""Sets autocommit setting for this connection. True = on, False = off"""
self.command(COMMAND.QUERY, "SET AUTOCOMMIT = %s" % ('1' if commit else '0'))
def commit(self):
"""Commits this connection"""
self.command(COMMAND.QUERY, "COMMIT")
def rollback(self):
"""Issues a rollback on this connection"""
self.command(COMMAND.QUERY, "ROLLBACK")
def set_charset(self, charset):
"""Sets the charset for this connections (used to decode string fields into unicode strings)"""
self.reader.reader.encoding = charset
def set_use_unicode(self, use_unicode):
self.reader.reader.use_unicode = use_unicode
def set_time_command(self, time_command):
self._time_command = time_command
def get_command_time(self):
return self._command_time
Connection.log = logging.getLogger(Connection.__name__)
def connect(*args, **kwargs):
return Connection().connect(*args, **kwargs)
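Note that the except clause introduced here names gevent.Timeout while the module, at this point in history, only does "from gevent import socket"; the "Fixed missing gevent import" commit earlier in this log supplies the import. Either way the contract is the same: a failed login or timeout leaves the object reusable. A sketch (credentials deliberately wrong placeholders):

from geventmysql import client

cnn = client.Connection()
try:
    cnn.connect(host='127.0.0.1:3306', user='wrong', passwd='wrong')
except client.ClientLoginError:
    # Login errors (and timeouts) reset state to STATE_INIT, so the
    # same Connection instance may simply call connect() again.
    assert cnn.state == cnn.STATE_INIT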
|
mthurlin/gevent-MySQL
|
693b00fd8a9c16e38f33c3626ac2d6f233307651
|
Added property for capacity
|
diff --git a/lib/geventmysql/geventmysql._mysql.pyx b/lib/geventmysql/geventmysql._mysql.pyx
index 68881b6..5e8f762 100644
--- a/lib/geventmysql/geventmysql._mysql.pyx
+++ b/lib/geventmysql/geventmysql._mysql.pyx
@@ -1,1165 +1,1171 @@
# Copyright (C) 2009, Hyves (Startphone Ltd.)
#
# This module is part of the Concurrence Framework and is released under
# the New BSD License: http://www.opensource.org/licenses/bsd-license.php
"""
base asynchronous mysql io library
"""
import datetime
import types
import sys
cdef extern from "string.h":
cdef void *memmove(void *, void *, int)
cdef void *memcpy(void *, void *, int)
cdef void *memchr(void *, int, int)
cdef extern from "stdlib.h":
cdef void *calloc(int, int)
cdef void free(void *)
cdef extern from "Python.h":
object PyString_FromStringAndSize(char *, int)
object PyString_FromString(char *)
int PyString_AsStringAndSize(object obj, char **s, Py_ssize_t *len) except -1
cdef enum:
COMMAND_SLEEP = 0
COMMAND_QUIT = 1
COMMAND_INIT_DB = 2
COMMAND_QUERY = 3
COMMAND_LIST = 4
class COMMAND:
SLEEP = COMMAND_SLEEP
QUIT = COMMAND_QUIT
INIT_DB = COMMAND_INIT_DB
QUERY = COMMAND_QUERY
LIST = COMMAND_LIST
cdef enum:
PACKET_READ_NONE = 0
PACKET_READ_MORE = 1
PACKET_READ_ERROR = 2
PACKET_READ_TRUE = 4
PACKET_READ_START = 8
PACKET_READ_END = 16
PACKET_READ_EOF = 32
class PACKET_READ_RESULT:
NONE = PACKET_READ_NONE
MORE = PACKET_READ_MORE
ERROR = PACKET_READ_ERROR
TRUE = PACKET_READ_TRUE
START = PACKET_READ_START
END = PACKET_READ_END
EOF = PACKET_READ_EOF
cdef enum:
FIELD_TYPE_DECIMAL = 0x00
FIELD_TYPE_TINY = 0x01
FIELD_TYPE_SHORT = 0x02
FIELD_TYPE_LONG = 0x03
FIELD_TYPE_FLOAT = 0x04
FIELD_TYPE_DOUBLE = 0x05
FIELD_TYPE_NULL = 0x06
FIELD_TYPE_TIMESTAMP = 0x07
FIELD_TYPE_LONGLONG = 0x08
FIELD_TYPE_INT24 = 0x09
FIELD_TYPE_DATE = 0x0a
FIELD_TYPE_TIME = 0x0b
FIELD_TYPE_DATETIME = 0x0c
FIELD_TYPE_YEAR = 0x0d
FIELD_TYPE_NEWDATE = 0x0e
FIELD_TYPE_VARCHAR = 0x0f
FIELD_TYPE_BIT = 0x10
FIELD_TYPE_NEWDECIMAL = 0xf6
FIELD_TYPE_ENUM = 0xf7
FIELD_TYPE_SET = 0xf8
FIELD_TYPE_TINY_BLOB = 0xf9
FIELD_TYPE_MEDIUM_BLOB = 0xfa
FIELD_TYPE_LONG_BLOB = 0xfb
FIELD_TYPE_BLOB = 0xfc
FIELD_TYPE_VAR_STRING = 0xfd
FIELD_TYPE_STRING = 0xfe
FIELD_TYPE_GEOMETRY = 0xff
class FIELD_TYPE:
DECIMAL = FIELD_TYPE_DECIMAL
TINY = FIELD_TYPE_TINY
SHORT = FIELD_TYPE_SHORT
LONG = FIELD_TYPE_LONG
FLOAT = FIELD_TYPE_FLOAT
DOUBLE = FIELD_TYPE_DOUBLE
_NULL = FIELD_TYPE_NULL
TIMESTAMP = FIELD_TYPE_TIMESTAMP
LONGLONG = FIELD_TYPE_LONGLONG
INT24 = FIELD_TYPE_INT24
DATE = FIELD_TYPE_DATE
TIME = FIELD_TYPE_TIME
DATETIME = FIELD_TYPE_DATETIME
YEAR = FIELD_TYPE_YEAR
NEWDATE = FIELD_TYPE_NEWDATE
VARCHAR = FIELD_TYPE_VARCHAR
BIT = FIELD_TYPE_BIT
NEWDECIMAL = FIELD_TYPE_NEWDECIMAL
ENUM = FIELD_TYPE_ENUM
SET = FIELD_TYPE_SET
TINY_BLOB = FIELD_TYPE_TINY_BLOB
MEDIUM_BLOB = FIELD_TYPE_MEDIUM_BLOB
LONG_BLOB = FIELD_TYPE_LONG_BLOB
BLOB = FIELD_TYPE_BLOB
VAR_STRING = FIELD_TYPE_VAR_STRING
STRING = FIELD_TYPE_STRING
GEOMETRY = FIELD_TYPE_GEOMETRY
INT_TYPES = set([FIELD_TYPE.TINY, FIELD_TYPE.SHORT, FIELD_TYPE.LONG, FIELD_TYPE.LONGLONG])
FLOAT_TYPES = set([FIELD_TYPE.FLOAT, FIELD_TYPE.DOUBLE])
BLOB_TYPES = set([FIELD_TYPE.TINY_BLOB, FIELD_TYPE.MEDIUM_BLOB, FIELD_TYPE.LONG_BLOB, FIELD_TYPE.BLOB])
STRING_TYPES = set([FIELD_TYPE.VARCHAR, FIELD_TYPE.VAR_STRING, FIELD_TYPE.STRING])
DATE_TYPES = set([FIELD_TYPE.TIMESTAMP, FIELD_TYPE.DATE, FIELD_TYPE.TIME, FIELD_TYPE.DATETIME, FIELD_TYPE.YEAR, FIELD_TYPE.NEWDATE])
# Not handled:
# 0x00 FIELD_TYPE_DECIMAL
# 0x06 FIELD_TYPE_NULL
# 0x09 FIELD_TYPE_INT24
# 0x10 FIELD_TYPE_BIT
# 0xf6 FIELD_TYPE_NEWDECIMAL
# 0xf7 FIELD_TYPE_ENUM
# 0xf8 FIELD_TYPE_SET
# 0xff FIELD_TYPE_GEOMETRY
charset_nr = {}
charset_nr[1] = 'big5'
charset_nr[2] = 'latin2'
charset_nr[3] = 'dec8'
charset_nr[4] = 'cp850'
charset_nr[5] = 'latin1'
charset_nr[6] = 'hp8'
charset_nr[7] = 'koi8r'
charset_nr[8] = 'latin1'
charset_nr[9] = 'latin2'
charset_nr[10] = 'swe7'
charset_nr[11] = 'ascii'
charset_nr[12] = 'ujis'
charset_nr[13] = 'sjis'
charset_nr[14] = 'cp1251'
charset_nr[15] = 'latin1'
charset_nr[16] = 'hebrew'
charset_nr[18] = 'tis620'
charset_nr[19] = 'euckr'
charset_nr[20] = 'latin7'
charset_nr[21] = 'latin2'
charset_nr[22] = 'koi8u'
charset_nr[23] = 'cp1251'
charset_nr[24] = 'gb2312'
charset_nr[25] = 'greek'
charset_nr[26] = 'cp1250'
charset_nr[27] = 'latin2'
charset_nr[28] = 'gbk'
charset_nr[29] = 'cp1257'
charset_nr[30] = 'latin5'
charset_nr[31] = 'latin1'
charset_nr[32] = 'armscii8'
charset_nr[33] = 'utf8'
charset_nr[34] = 'cp1250'
charset_nr[35] = 'ucs2'
charset_nr[36] = 'cp866'
charset_nr[37] = 'keybcs2'
charset_nr[38] = 'macce'
charset_nr[39] = 'macroman'
charset_nr[40] = 'cp852'
charset_nr[41] = 'latin7'
charset_nr[42] = 'latin7'
charset_nr[43] = 'macce'
charset_nr[44] = 'cp1250'
charset_nr[47] = 'latin1'
charset_nr[48] = 'latin1'
charset_nr[49] = 'latin1'
charset_nr[50] = 'cp1251'
charset_nr[51] = 'cp1251'
charset_nr[52] = 'cp1251'
charset_nr[53] = 'macroman'
charset_nr[57] = 'cp1256'
charset_nr[58] = 'cp1257'
charset_nr[59] = 'cp1257'
charset_nr[63] = 'binary'
charset_nr[64] = 'armscii8'
charset_nr[65] = 'ascii'
charset_nr[66] = 'cp1250'
charset_nr[67] = 'cp1256'
charset_nr[68] = 'cp866'
charset_nr[69] = 'dec8'
charset_nr[70] = 'greek'
charset_nr[71] = 'hebrew'
charset_nr[72] = 'hp8'
charset_nr[73] = 'keybcs2'
charset_nr[74] = 'koi8r'
charset_nr[75] = 'koi8u'
charset_nr[77] = 'latin2'
charset_nr[78] = 'latin5'
charset_nr[79] = 'latin7'
charset_nr[80] = 'cp850'
charset_nr[81] = 'cp852'
charset_nr[82] = 'swe7'
charset_nr[83] = 'utf8'
charset_nr[84] = 'big5'
charset_nr[85] = 'euckr'
charset_nr[86] = 'gb2312'
charset_nr[87] = 'gbk'
charset_nr[88] = 'sjis'
charset_nr[89] = 'tis620'
charset_nr[90] = 'ucs2'
charset_nr[91] = 'ujis'
charset_nr[92] = 'geostd8'
charset_nr[93] = 'geostd8'
charset_nr[94] = 'latin1'
charset_nr[95] = 'cp932'
charset_nr[96] = 'cp932'
charset_nr[97] = 'eucjpms'
charset_nr[98] = 'eucjpms'
charset_nr[99] = 'cp1250'
for i in range(128, 192):
charset_nr[i] = 'ucs2'
for i in range(192, 211):
charset_nr[i] = 'utf8'
class BufferError(Exception):
pass
class BufferOverflowError(BufferError):
pass
class BufferUnderflowError(BufferError):
pass
class BufferInvalidArgumentError(BufferError):
pass
cdef class Buffer:
"""Creates a :class:`Buffer` object. The buffer class forms the basis for IO in the Concurrence Framework.
The buffer class represents a mutable array of bytes that can be read from and written to using the
read_XXX and write_XXX methods.
Operations on the buffer are performed relative to the current :attr:`position` attribute of the buffer.
A buffer also has a current :attr:`limit` property above which no data may be read or written.
If an operation tries to read beyond the current :attr:`limit` a BufferUnderflowError is raised. If an operation
tries to write beyond the current :attr:`limit` a BufferOverflowError is raised.
The general idea of the :class:`Buffer` was shamelessly copied from java NIO.
"""
cdef unsigned char * _buff
cdef int _position
cdef Buffer _parent
- cdef int capacity
+ cdef int _capacity
cdef int _limit
def __cinit__(self, int capacity, Buffer parent = None):
if parent is not None:
#this is a copy constructor for a shallow
#copy, i.e. we reference the same data as our parent, but have our
#own position and limit (use .duplicate method to get the copy)
self._parent = parent #this incs the refcnt on parent
self._buff = parent._buff
self._position = parent._position
self._limit = parent._limit
- self.capacity = parent.capacity
+ self._capacity = parent._capacity
else:
#normal constructor
self._parent = None
- self.capacity = capacity
- self._buff = <unsigned char *>(calloc(1, self.capacity))
+ self._capacity = capacity
+ self._buff = <unsigned char *>(calloc(1, self._capacity))
def __dealloc__(self):
if self._parent is None:
free(self._buff)
else:
self._parent = None #releases our refcnt on parent
def __init__(self, int capacity, Buffer parent = None):
"""Create a new empty buffer with the given *capacity*."""
self.clear()
+
def duplicate(self):
"""Return a shallow copy of the Buffer, e.g. the copied buffer
references the same bytes as the original buffer, but has its own
independend position and limit."""
return Buffer(0, self)
def copy(self, Buffer src, int src_start, int dst_start, int length):
"""Copies *length* bytes from buffer *src*, starting at position *src_start*, to this
buffer at position *dst_start*."""
if length < 0:
raise BufferInvalidArgumentError("length must be >= 0")
if src_start < 0:
raise BufferInvalidArgumentError("src start must be >= 0")
- if src_start > src.capacity:
+ if src_start > src._capacity:
raise BufferInvalidArgumentError("src start must be <= src capacity")
- if src_start + length > src.capacity:
+ if src_start + length > src._capacity:
raise BufferInvalidArgumentError("src start + length must be <= src capacity")
if dst_start < 0:
raise BufferInvalidArgumentError("dst start must be >= 0")
- if dst_start > self.capacity:
+ if dst_start > self._capacity:
raise BufferInvalidArgumentError("dst start must be <= dst capacity")
- if dst_start + length > self.capacity:
+ if dst_start + length > self._capacity:
raise BufferInvalidArgumentError("dst start + length must be <= dst capacity")
#now we can safely copy!
memcpy(self._buff + dst_start, src._buff + src_start, length)
def clear(self):
"""Prepares the buffer for relative read operations. The buffers :attr:`limit` will set to the buffers :attr:`capacity` and
its :attr:`position` will be set to 0."""
- self._limit = self.capacity
+ self._limit = self._capacity
self._position = 0
def flip(self):
"""Prepares the buffer for relative write operations. The buffers :attr:`limit` will set to the buffers :attr:`position` and
its :attr:`position` will be set to 0."""
self._limit = self._position
self._position = 0
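#Hedged sketch of the intended usage (assumed from the methods above):
#clear() before filling the buffer (e.g. via recv or write_XXX),
#flip() before draining it via the read_XXX methods:
#  buf.clear()       #limit=capacity, position=0
#  buf.recv(fd)      #fill from a file descriptor
#  buf.flip()        #limit=old position, position=0
#  buf.read_byte()   #consume what was received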
def rewind(self):
"""Sets the buffers :attr:`position` back to 0."""
self._position = 0
cdef int _skip(self, int n) except -1:
if self._position + n <= self.limit:
self._position = self._position + n
return n
else:
raise BufferUnderflowError()
def skip(self, int n):
"""Updates the buffers position by skipping n bytes. It is not allowed to skip passed the current :attr:`limit`.
In that case a :exc:`BufferUnderflowError` will be raised and the :attr:`position` will remain the same"""
return self._skip(n)
cdef int _remaining(self):
return self._limit - self._position
+
+ property capacity:
+ def __get__(self):
+ return self._capacity
+
property remaining:
def __get__(self):
return self._limit - self._position
property limit:
def __get__(self):
return self._limit
def __set__(self, limit):
- if limit >= 0 and limit <= self.capacity and limit >= self._position:
+ if limit >= 0 and limit <= self._capacity and limit >= self._position:
self._limit = limit
else:
if limit < 0:
raise BufferInvalidArgumentError("limit must be >= 0")
- elif limit > self.capacity:
+ elif limit > self._capacity:
raise BufferInvalidArgumentError("limit must be <= capacity")
elif limit < self._position:
raise BufferInvalidArgumentError("limit must be >= position")
else:
raise BufferInvalidArgumentError()
property position:
def __get__(self):
return self._position
def __set__(self, position):
- if position >= 0 and position <= self.capacity and position <= self._limit:
+ if position >= 0 and position <= self._capacity and position <= self._limit:
self._position = position
else:
if position < 0:
raise BufferInvalidArgumentError("position must be >= 0")
- elif position > self.capacity:
+ elif position > self._capacity:
raise BufferInvalidArgumentError("position must be <= capacity")
elif position > self._limit:
raise BufferInvalidArgumentError("position must be <= limit")
else:
raise BufferInvalidArgumentError()
cdef int _read_byte(self) except -1:
cdef int b
if self._position + 1 <= self._limit:
b = self._buff[self._position]
self._position = self._position + 1
return b
else:
raise BufferUnderflowError()
def read_byte(self):
"""Reads and returns a single byte from the buffer and updates the :attr:`position` by 1."""
return self._read_byte()
def recv(self, int fd):
"""Reads as many bytes as will fit up till the :attr:`limit` of the buffer from the filedescriptor *fd*.
Returns a tuple (bytes_read, bytes_remaining). If *bytes_read* is negative, a IO Error was encountered.
The :attr:`position` of the buffer will be updated according to the number of bytes read.
"""
cdef int b
b = 0
#TODO
#b = read(fd, self._buff + self._position, self._limit - self._position)
if b > 0: self._position = self._position + b
return b, self._limit - self._position
def send(self, int fd):
"""Sends as many bytes as possible up till the :attr:`limit` of the buffer to the filedescriptor *fd*.
Returns a tuple (bytes_written, bytes_remaining). If *bytes_written* is negative, an IO Error was encountered.
"""
cdef int b
b = 0
#TODO
#b = write(fd, self._buff + self._position, self._limit - self._position)
if b > 0: self._position = self._position + b
return b, self._limit - self._position
def compact(self):
"""Prepares the buffer again for relative reading, but any left over data still present in the buffer (the bytes between
the current :attr:`position` and current :attr:`limit`) will be copied to the start of the buffer. The position of the buffer
will be right after the copied data.
"""
cdef int n
n = self._limit - self._position
if n > 0 and self._position > 0:
if n < self._position:
memcpy(self._buff + 0, self._buff + self._position, n)
else:
memmove(self._buff + 0, self._buff + self._position, n)
self._position = n
- self._limit = self.capacity
+ self._limit = self._capacity
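#Hedged sketch: after a partial read, compact() moves the unread tail to
#the front so the buffer can be refilled behind it:
#  buf.flip()            #prepare to read what was received
#  first = buf.read_byte()
#  buf.compact()         #unread bytes now start at 0, position sits after them
#  buf.recv(fd)          #append fresh data behind the leftover bytes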
def __getitem__(self, object i):
cdef int start, end, stride
if type(i) == types.IntType:
- if i >= 0 and i < self.capacity:
+ if i >= 0 and i < self._capacity:
return self._buff[i]
else:
raise BufferInvalidArgumentError("index must be >= 0 and < capacity")
elif type(i) == types.SliceType:
- start, end, stride = i.indices(self.capacity)
+ start, end, stride = i.indices(self._capacity)
return PyString_FromStringAndSize(<char *>(self._buff + start), end - start)
else:
raise BufferInvalidArgumentError("wrong index type")
def __setitem__(self, object i, object value):
cdef int start, end, stride
cdef char *b
cdef Py_ssize_t n
if type(i) == types.IntType:
if type(value) != types.IntType:
raise BufferInvalidArgumentError("value must be integer")
if value < 0 or value > 255:
raise BufferInvalidArgumentError("value must be in range [0..255]")
- if i >= 0 and i < self.capacity:
+ if i >= 0 and i < self._capacity:
self._buff[i] = value
else:
raise BufferInvalidArgumentError("index must be >= 0 and < capacity")
elif type(i) == types.SliceType:
- start, end, stride = i.indices(self.capacity)
+ start, end, stride = i.indices(self._capacity)
PyString_AsStringAndSize(value, &b, &n)
if n != (end - start):
raise BufferInvalidArgumentError("incompatible slice")
memcpy(self._buff + start, b, n)
else:
raise BufferInvalidArgumentError("wrong index type")
def read_short(self):
"""Read a 2 byte little endian integer from buffer and updates position."""
cdef int s
if 2 > (self._limit - self._position):
raise BufferUnderflowError()
else:
s = self._buff[self._position] + (self._buff[self._position + 1] << 8)
self._position = self._position + 2
return s
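#Illustrative: for buffer bytes 0x34 0x12, read_short() returns 0x1234 (4660).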
cdef object _read_bytes(self, int n):
"""reads n bytes from buffer, updates position, and returns bytes as a python string"""
if n > (self._limit - self._position):
raise BufferUnderflowError()
else:
s = PyString_FromStringAndSize(<char *>(self._buff + self._position), n)
self._position = self._position + n
return s
def read_bytes(self, int n = -1):
"""Reads n bytes from buffer, updates position, and returns bytes as a python string,
if there are no n bytes available, a :exc:`BufferUnderflowError` is raised."""
if n == -1:
return self._read_bytes(self._limit - self._position)
else:
return self._read_bytes(n)
def read_bytes_until(self, int b):
"""Reads bytes until character b is found, or end of buffer is reached in which case it will raise a :exc:`BufferUnderflowError`."""
cdef int n, maxlen
cdef char *zpos, *start
if b < 0 or b > 255:
raise BufferInvalidArgumentError("b must be in range [0..255]")
maxlen = self._limit - self._position
start = <char *>(self._buff + self._position)
zpos = <char *>(memchr(start, b, maxlen))
if zpos == NULL:
raise BufferUnderflowError()
else:
n = zpos - start
s = PyString_FromStringAndSize(start, n)
self._position = self._position + n + 1
return s
def read_line(self, int include_separator = 0):
"""Reads a single line of bytes from the buffer where the end of the line is indicated by either 'LF' or 'CRLF'.
The line will be returned as a string not including the line-separator. Optionally *include_separator* can be specified
to make the method also return the line-separator."""
cdef int n, maxlen
cdef char *zpos, *start
maxlen = self._limit - self._position
start = <char *>(self._buff + self._position)
zpos = <char *>(memchr(start, 10, maxlen))
if maxlen == 0:
raise BufferUnderflowError()
if zpos == NULL:
raise BufferUnderflowError()
n = zpos - start
if self._buff[self._position + n - 1] == 13: #\r\n
if include_separator:
s = PyString_FromStringAndSize(start, n + 1)
self._position = self._position + n + 1
else:
s = PyString_FromStringAndSize(start, n - 1)
self._position = self._position + n + 1
else: #\n
if include_separator:
s = PyString_FromStringAndSize(start, n + 1)
self._position = self._position + n + 1
else:
s = PyString_FromStringAndSize(start, n)
self._position = self._position + n + 1
return s
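#Hedged sketch (illustrative values):
#  buf.write_bytes('HELO\r\nEHLO\n')
#  buf.flip()
#  buf.read_line()   #-> 'HELO' (CRLF stripped)
#  buf.read_line()   #-> 'EHLO' (LF stripped)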
def write_bytes(self, s):
"""Writes a number of bytes given by the python string s to the buffer and updates position. Raises
:exc:`BufferOverflowError` if you try to write beyond the current :attr:`limit`."""
cdef char *b
cdef Py_ssize_t n
PyString_AsStringAndSize(s, &b, &n)
if n > (self._limit - self._position):
raise BufferOverflowError()
else:
memcpy(self._buff + self._position, b, n)
self._position = self._position + n
return n
def write_buffer(self, Buffer other):
"""writes available bytes from other buffer to this buffer"""
self.write_bytes(other.read_bytes(-1)) #TODO use copy
cdef int _write_byte(self, unsigned int b) except -1:
"""writes a single byte to the buffer and updates position"""
if self._position + 1 <= self._limit:
self._buff[self._position] = b
self._position = self._position + 1
return 1
else:
raise BufferOverflowError()
def write_byte(self, unsigned int b):
"""writes a single byte to the buffer and updates position"""
return self._write_byte(b)
def write_int(self, unsigned int i):
"""writes a 32 bit integer to the buffer and updates position (little-endian)"""
if self._position + 4 <= self._limit:
self._buff[self._position + 0] = (i >> 0) & 0xFF
self._buff[self._position + 1] = (i >> 8) & 0xFF
self._buff[self._position + 2] = (i >> 16) & 0xFF
self._buff[self._position + 3] = (i >> 24) & 0xFF
self._position = self._position + 4
return 4
else:
raise BufferOverflowError()
def write_short(self, unsigned int i):
"""writes a 16 bit integer to the buffer and updates position (little-endian)"""
if self._position + 2 <= self._limit:
self._buff[self._position + 0] = (i >> 0) & 0xFF
self._buff[self._position + 1] = (i >> 8) & 0xFF
self._position = self._position + 2
return 2
else:
raise BufferOverflowError()
def hex_dump(self, out = None):
highlight1 = "\033[34m"
highlight2 = "\033[32m"
default = "\033[0m"
if out is None: out = sys.stdout
import string
- out.write('<concurrence.io.Buffer id=%x, position=%d, limit=%d, capacity=%d>\n' % (id(self), self.position, self.limit, self.capacity))
+ out.write('<concurrence.io.Buffer id=%x, position=%d, limit=%d, capacity=%d>\n' % (id(self), self.position, self.limit, self._capacity))
printable = set(string.printable)
whitespace = set(string.whitespace)
x = 0
s1 = []
s2 = []
- while x < self.capacity:
+ while x < self._capacity:
v = self[x]
if x < self.position:
s1.append('%s%02x%s' % (highlight1, v, default))
elif x < self.limit:
s1.append('%s%02x%s' % (highlight2, v, default))
else:
s1.append('%02x' % v)
c = chr(v)
if c in printable and not c in whitespace:
s2.append(c)
else:
s2.append('.')
x += 1
if x % 16 == 0:
out.write('%04x' % (x - 16) + ' ' + ' '.join(s1[:8]) + ' ' + ' '.join(s1[8:]) + ' ' + ''.join(s2[:8]) + ' ' + (''.join(s2[8:]) + '\n'))
s1 = []
s2 = []
out.flush()
def __repr__(self):
import cStringIO
sio = cStringIO.StringIO()
self.hex_dump(sio)
return sio.getvalue()
def __str__(self):
return repr(self)
class PacketReadError(Exception):
pass
MAX_PACKET_SIZE = 4 * 1024 * 1024 #4mb
cdef class PacketReader:
cdef int oversize
cdef readonly int number
cdef readonly int length #length in bytes of the current packet in the buffer
cdef readonly int command
cdef readonly int start #position of start of packet in buffer
cdef readonly int end
cdef public object encoding
cdef public object use_unicode
cdef readonly Buffer buffer #the current read buffer
cdef readonly Buffer packet #the current packet (could be normal or oversize packet):
cdef Buffer normal_packet #the normal packet
cdef Buffer oversize_packet #if we are reading an oversize packet, this is where we keep the data
def __init__(self, Buffer buffer):
self.oversize = 0
self.encoding = None
self.use_unicode = False
self.buffer = buffer
self.normal_packet = buffer.duplicate()
self.oversize_packet = buffer.duplicate()
self.packet = self.normal_packet
cdef int _read(self) except PACKET_READ_ERROR:
"""this method scans the buffer for packets, reporting the start, end of packet
or whether the packet in the buffer is incomplete and more data is needed"""
cdef int r
cdef Buffer buffer
buffer = self.buffer
self.command = 0
self.start = 0
self.end = 0
r = buffer._remaining()
if self.oversize == 0: #normal packet reading mode
#print 'normal mode', r
if r < 4:
#print 'rem < 4 return'
return PACKET_READ_NONE #incomplete header
#these four reads will always succeed because r >= 4
self.length = (buffer._read_byte()) + (buffer._read_byte() << 8) + (buffer._read_byte() << 16) + 4
self.number = buffer._read_byte()
if self.length <= r:
#a complete packet sitting in buffer
self.start = buffer._position - 4
self.end = self.start + self.length
self.command = buffer._buff[buffer._position]
buffer._skip(self.length - 4) #skip rest of packet
#print 'single packet recvd', self.length, self.command
if self.length < r:
return PACKET_READ_TRUE | PACKET_READ_START | PACKET_READ_END | PACKET_READ_MORE
else:
return PACKET_READ_TRUE | PACKET_READ_START | PACKET_READ_END
#return self.length < r #if l was smaller, there is more, otherwise l == r and buffer is empty
else:
#print 'incomplete packet in buffer', buffer._position, self.length
- if self.length > buffer.capacity:
+ if self.length > buffer._capacity:
#print 'start of oversize packet', self.length
self.start = buffer._position - 4
self.end = buffer._limit
self.command = buffer._buff[buffer._position]
buffer._position = buffer._limit #skip rest of buffer
self.oversize = self.length - r #left to do
return PACKET_READ_TRUE | PACKET_READ_START
else:
#print 'small incomplete packet', self.length, buffer._position
buffer._skip(-4) #rewind to start of incomplete packet
return PACKET_READ_NONE #incomplete packet
else: #busy reading an oversized packet
#print 'oversize mode', r, self.oversize, buffer.position, buffer.limit
self.start = buffer._position
if self.oversize < r:
buffer._skip(self.oversize) #skip rest of buffer
self.oversize = 0
else:
buffer._skip(r) #skip rest of buffer or remaining oversize
self.oversize = self.oversize - r
self.end = buffer._position
if self.oversize == 0:
#print 'oversize packet recvd'
return PACKET_READ_TRUE | PACKET_READ_END | PACKET_READ_MORE
else:
#print 'some data of oversize packet recvd'
return PACKET_READ_TRUE
def read(self):
return self._read()
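#Hedged sketch of driving the reader (flag names are from this module):
#  r = reader.read()
#  if (r & PACKET_READ_TRUE) and (r & PACKET_READ_END):
#      pass #a complete packet lies between reader.start and reader.end
#  if not (r & PACKET_READ_MORE):
#      pass #buffer exhausted: compact it and recv more bytes before reading again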
cdef int _read_packet(self) except PACKET_READ_ERROR:
cdef int r, size, max_packet_size
r = self._read()
if r & PACKET_READ_TRUE:
if (r & PACKET_READ_START) and (r & PACKET_READ_END):
#normal sized packet, read entirely
self.packet = self.normal_packet
self.packet._position, self.packet._limit = self.start + 4, self.end
elif (r & PACKET_READ_START) and not (r & PACKET_READ_END):
#print 'start of oversize', self.end - self.start, self.length
#first create oversize_packet if necessary:
- if self.oversize_packet.capacity < self.length:
+ if self.oversize_packet._capacity < self.length:
#find first size multiple of 2 that will fit the oversize packet
- size = self.buffer.capacity
+ size = self.buffer._capacity
while size < self.length:
size = size * 2
if size >= MAX_PACKET_SIZE:
raise PacketReadError("oversized packet will not fit in MAX_PACKET_SIZE, length: %d, MAX_PACKET_SIZE: %d" % (self.length, MAX_PACKET_SIZE))
#print 'creating oversize packet', size
self.oversize_packet = Buffer(size)
self.oversize_packet.copy(self.buffer, self.start, 0, self.end - self.start)
self.packet = self.oversize_packet
self.packet._position, self.packet._limit = 4, self.end - self.start
else:
#end or middle part of oversized packet
self.oversize_packet.copy(self.buffer, self.start, self.oversize_packet._limit, self.end - self.start)
self.oversize_packet._limit = self.oversize_packet._limit + (self.end - self.start)
return r
def read_packet(self):
return self._read_packet()
cdef _read_length_coded_binary(self):
cdef unsigned int n, v
cdef unsigned long long vw
cdef Buffer packet
packet = self.packet
if packet._position + 1 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position]
if n < 251:
packet._position = packet._position + 1
return n
elif n == 251:
assert False, 'unexpected, only valid for row data packet'
elif n == 252:
#16 bit word
if packet._position + 3 > packet._limit: raise BufferUnderflowError()
v = packet._buff[packet._position + 1] | ((packet._buff[packet._position + 2]) << 8)
packet._position = packet._position + 3
return v
elif n == 253:
#24 bit word
if packet._position + 4 > packet._limit: raise BufferUnderflowError()
v = packet._buff[packet._position + 1] | ((packet._buff[packet._position + 2]) << 8) | ((packet._buff[packet._position + 3]) << 16)
packet._position = packet._position + 4
return v
else:
#64 bit word
if packet._position + 9 > packet._limit: raise BufferUnderflowError()
vw = 0
vw |= (<unsigned long long>packet._buff[packet._position + 1]) << 0
vw |= (<unsigned long long>packet._buff[packet._position + 2]) << 8
vw |= (<unsigned long long>packet._buff[packet._position + 3]) << 16
vw |= (<unsigned long long>packet._buff[packet._position + 4]) << 24
vw |= (<unsigned long long>packet._buff[packet._position + 5]) << 32
vw |= (<unsigned long long>packet._buff[packet._position + 6]) << 40
vw |= (<unsigned long long>packet._buff[packet._position + 7]) << 48
vw |= (<unsigned long long>packet._buff[packet._position + 8]) << 56
packet._position = packet._position + 9
return vw
def read_length_coded_binary(self):
return self._read_length_coded_binary()
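#The MySQL length-coded binary format decoded above, for reference
#(the first byte n selects the width):
#  n < 251   -> the value is n itself
#  n == 251  -> NULL (only valid in row data packets)
#  n == 252  -> value in the next 2 bytes (little-endian)
#  n == 253  -> value in the next 3 bytes
#  n == 254  -> value in the next 8 bytes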
cdef _read_bytes_length_coded(self):
cdef unsigned int n, w
cdef Buffer packet
packet = self.packet
if packet._position + 1 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position]
w = 1
if n >= 251:
if n == 251:
packet._position = packet._position + 1
return None
elif n == 252:
if packet._position + 2 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position + 1] | ((packet._buff[packet._position + 2]) << 8)
w = 3
else:
assert False, 'not implemented yet, n: %02x' % n
if (n + w) > (packet._limit - packet._position):
raise BufferUnderflowError()
packet._position = packet._position + w
s = PyString_FromStringAndSize(<char *>(packet._buff + packet._position), n)
packet._position = packet._position + n
return s
def read_bytes_length_coded(self):
return self._read_bytes_length_coded()
def read_field_type(self):
cdef int n
cdef Buffer packet
packet = self.packet
n = packet._read_byte()
packet._skip(n) #catalog
n = packet._read_byte()
packet._skip(n) #db
n = packet._read_byte()
packet._skip(n) #table
n = packet._read_byte()
packet._skip(n) #org_table
n = packet._read_byte()
name = packet._read_bytes(n)
n = packet._read_byte()
packet._skip(n) #org_name
packet._skip(1)
charsetnr = packet._read_bytes(2)
n = packet._skip(4)
n = packet.read_byte() #type
return (name, n, charsetnr)
cdef _string_to_int(self, object s):
if s == None:
return None
else:
return int(s)
cdef _string_to_float(self, object s):
if s == None:
return None
else:
return float(s)
cdef _read_datestring(self):
cdef unsigned int n
cdef Buffer packet
packet = self.packet
if packet._position + 1 > packet._limit: raise BufferUnderflowError()
n = packet._buff[packet._position]
packet._position = packet._position + 1
s = PyString_FromStringAndSize(<char *>(packet._buff + packet._position), n)
packet._position = packet._position + n
return s
cdef _datestring_to_date(self, object s):
if not s or s == "0000-00-00":
return None
parts = s.split("-")
try:
assert len(parts) == 3
d = datetime.date(*map(int, parts))
except (AssertionError, ValueError):
raise ValueError("Unhandled date format: %r" % (s, ))
return d
cdef _datestring_to_datetime(self, object s):
if not s:
return None
datestring, timestring = s.split(" ")
_date = self._datestring_to_date(datestring)
if _date is None:
return None
parts = timestring.split(":")
try:
assert len(parts) == 3
d = datetime.datetime(_date.year, _date.month, _date.day, *map(int, parts))
except (AssertionError, ValueError):
raise ValueError("Unhandled datetime format: %r" % (s, ))
return d
cdef int _read_row(self, object row, object fields, int field_count) except PACKET_READ_ERROR:
cdef int i, r
cdef int decode
if self.encoding:
decode = 1
encoding = self.encoding
else:
decode = 0
r = self._read_packet()
if r & PACKET_READ_END: #whole packet recv
if self.packet._buff[self.packet._position] == 0xFE:
return r | PACKET_READ_EOF
else:
i = 0
int_types = INT_TYPES
float_types = FLOAT_TYPES
string_types = STRING_TYPES
date_type = FIELD_TYPE.DATE
datetime_type = FIELD_TYPE.DATETIME
while i < field_count:
t = fields[i][1] #type_code
if t in int_types:
row[i] = self._string_to_int(self._read_bytes_length_coded())
elif t in string_types:
row[i] = self._read_bytes_length_coded()
if row[i] is not None and (self.encoding or self.use_unicode):
bytes = fields[i][2]
nr = ord(bytes[1]) << 8 | ord(bytes[0])
row[i] = row[i].decode(charset_nr[nr])
if not self.use_unicode:
row[i] = row[i].encode(self.encoding)
elif t in float_types:
row[i] = self._string_to_float(self._read_bytes_length_coded())
elif t == date_type:
row[i] = self._datestring_to_date(self._read_datestring())
elif t == datetime_type:
row[i] = self._datestring_to_datetime(self._read_datestring())
else:
row[i] = self._read_bytes_length_coded()
i = i + 1
return r
def read_rows(self, object fields, int row_count):
cdef int r, i, field_count
field_count = len(fields)
i = 0
r = 0
rows = []
row = [None] * field_count
add = rows.append
while i < row_count:
r = self._read_row(row, fields, field_count)
if r & PACKET_READ_END:
if r & PACKET_READ_EOF:
break
else:
add(tuple(row))
if not (r & PACKET_READ_MORE):
break
i = i + 1
return r, rows
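#Hedged sketch (names assumed): fields are the (name, type_code, charsetnr)
#tuples produced by read_field_type():
#  r, rows = reader.read_rows(fields, 100)
#  #rows is a list of tuples; ints, floats and dates are already converted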
cdef enum:
PROXY_STATE_UNDEFINED = -2
PROXY_STATE_ERROR = -1
PROXY_STATE_INIT = 0
PROXY_STATE_READ_AUTH = 1
PROXY_STATE_READ_AUTH_RESULT = 2
PROXY_STATE_READ_AUTH_OLD_PASSWORD = 3
PROXY_STATE_READ_AUTH_OLD_PASSWORD_RESULT = 4
PROXY_STATE_READ_COMMAND = 5
PROXY_STATE_READ_RESULT = 6
PROXY_STATE_READ_RESULT_FIELDS = 7
PROXY_STATE_READ_RESULT_ROWS = 8
PROXY_STATE_READ_RESULT_FIELDS_ONLY = 9
PROXY_STATE_FINISHED = 10
class PROXY_STATE:
UNDEFINED = PROXY_STATE_UNDEFINED
ERROR = PROXY_STATE_ERROR
INIT = PROXY_STATE_INIT
FINISHED = PROXY_STATE_FINISHED
READ_AUTH = PROXY_STATE_READ_AUTH
READ_AUTH_RESULT = PROXY_STATE_READ_AUTH_RESULT
READ_AUTH_OLD_PASSWORD = PROXY_STATE_READ_AUTH_OLD_PASSWORD
READ_AUTH_OLD_PASSWORD_RESULT = PROXY_STATE_READ_AUTH_OLD_PASSWORD_RESULT
READ_COMMAND = PROXY_STATE_READ_COMMAND
READ_RESULT = PROXY_STATE_READ_RESULT
READ_RESULT_FIELDS = PROXY_STATE_READ_RESULT_FIELDS
READ_RESULT_ROWS = PROXY_STATE_READ_RESULT_ROWS
READ_RESULT_FIELDS_ONLY = PROXY_STATE_READ_RESULT_FIELDS_ONLY
SERVER_STATES = set([PROXY_STATE.INIT, PROXY_STATE.READ_AUTH_RESULT, PROXY_STATE.READ_AUTH_OLD_PASSWORD_RESULT,
PROXY_STATE.READ_RESULT, PROXY_STATE.READ_RESULT_FIELDS, PROXY_STATE.READ_RESULT_ROWS,
PROXY_STATE.READ_RESULT_FIELDS_ONLY, PROXY_STATE.FINISHED])
CLIENT_STATES = set([PROXY_STATE.READ_AUTH, PROXY_STATE.READ_AUTH_OLD_PASSWORD, PROXY_STATE.READ_COMMAND])
AUTH_RESULT_STATES = set([PROXY_STATE.READ_AUTH_OLD_PASSWORD_RESULT, PROXY_STATE.READ_AUTH_RESULT])
READ_RESULT_STATES = set([PROXY_STATE.READ_RESULT, PROXY_STATE.READ_RESULT_FIELDS, PROXY_STATE.READ_RESULT_ROWS, PROXY_STATE.READ_RESULT_FIELDS_ONLY])
class ProxyProtocolException(Exception):
pass
cdef class ProxyProtocol:
cdef readonly int state
cdef readonly int number
def __init__(self, initial_state = PROXY_STATE_INIT):
self.reset(initial_state)
def reset(self, int state):
self.state = state
self.number = 0
cdef int _check_number(self, PacketReader reader) except -1:
if self.state == PROXY_STATE_READ_COMMAND:
self.number = 0
if self.number != reader.number:
self.state = PROXY_STATE_ERROR
raise ProxyProtocolException('packet number out of sync')
self.number = self.number + 1
self.number = self.number % 256
def read_server(self, PacketReader reader):
cdef int read_result, prev_state
prev_state = self.state
while 1:
read_result = reader._read()
if read_result & PACKET_READ_START:
self._check_number(reader)
if read_result & PACKET_READ_END: #packet recvd
if self.state == PROXY_STATE_INIT:
#server handshake recvd
#server could have sent an error instead of the initial handshake
self.state = PROXY_STATE_READ_AUTH
elif self.state == PROXY_STATE_READ_AUTH_RESULT:
#server auth result recvd
if reader.command == 0xFE:
self.state = PROXY_STATE_READ_AUTH_OLD_PASSWORD
elif reader.command == 0x00: #OK
self.state = PROXY_STATE_READ_COMMAND
elif self.state == PROXY_STATE_READ_AUTH_OLD_PASSWORD_RESULT:
#server auth old password result recvd
self.state = PROXY_STATE_READ_COMMAND
elif self.state == PROXY_STATE_READ_RESULT:
if reader.command == 0x00: #no result set but ok
#server result recvd OK
self.state = PROXY_STATE_READ_COMMAND
elif reader.command == 0xFF:
#no result set error
self.state = PROXY_STATE_READ_COMMAND
else:
#server result recv result set header
self.state = PROXY_STATE_READ_RESULT_FIELDS
elif self.state == PROXY_STATE_READ_RESULT_FIELDS:
if reader.command == 0xFE: #EOF for fields
#server result fields recvd
self.state = PROXY_STATE_READ_RESULT_ROWS
elif self.state == PROXY_STATE_READ_RESULT_ROWS:
if reader.command == 0xFE: #EOF for rows
#server result rows recvd
self.state = PROXY_STATE_READ_COMMAND
elif self.state == PROXY_STATE_READ_RESULT_FIELDS_ONLY:
if reader.command == 0xFE: #EOF for fields
#server result fields only recvd
self.state = PROXY_STATE_READ_COMMAND
else:
self.state = PROXY_STATE_ERROR
raise ProxyProtocolException('unexpected packet')
if self.state != prev_state:
break
if not (read_result & PACKET_READ_MORE):
break
return read_result, self.state, prev_state
def read_client(self, PacketReader reader):
cdef int read_result, prev_state
prev_state = self.state
while 1:
read_result = reader._read()
if read_result & PACKET_READ_START:
self._check_number(reader)
if read_result & PACKET_READ_END: #packet recvd
if self.state == PROXY_STATE_READ_AUTH:
#client auth recvd
self.state = PROXY_STATE_READ_AUTH_RESULT
elif self.state == PROXY_STATE_READ_AUTH_OLD_PASSWORD:
#client auth old pwd recvd
self.state = PROXY_STATE_READ_AUTH_OLD_PASSWORD_RESULT
elif self.state == PROXY_STATE_READ_COMMAND:
#client cmd recvd
if reader.command == COMMAND_LIST: #list cmd
self.state = PROXY_STATE_READ_RESULT_FIELDS_ONLY
elif reader.command == COMMAND_QUIT: #COM_QUIT
self.state = PROXY_STATE_FINISHED
else:
self.state = PROXY_STATE_READ_RESULT
else:
self.state = PROXY_STATE_ERROR
raise ProxyProtocolException('unexpected packet')
if self.state != prev_state:
break
if not (read_result & PACKET_READ_MORE):
break
return read_result, self.state, prev_state
|
mthurlin/gevent-MySQL
|
71ffaadb2b0fee9bdc9c341661867f8d8fbb65fa
|
Removed debug timeout
|
diff --git a/lib/geventmysql/client.py b/lib/geventmysql/client.py
index 82ffc62..2a1cd40 100644
--- a/lib/geventmysql/client.py
+++ b/lib/geventmysql/client.py
@@ -1,389 +1,389 @@
# Copyright (C) 2009, Hyves (Startphone Ltd.)
#
# This module is part of the Concurrence Framework and is released under
# the New BSD License: http://www.opensource.org/licenses/bsd-license.php
#TODO support closing a half-read resultset (i.e. automatically read and discard the rest)
from geventmysql._mysql import Buffer
from geventmysql.mysql import BufferedPacketReader, BufferedPacketWriter, PACKET_READ_RESULT, CAPS, COMMAND
import logging
import time
from gevent import socket
# From query: SHOW COLLATION;
charset_map = {}
charset_map["big5"] = 1
charset_map["dec8"] = 3
charset_map["cp850"] = 4
charset_map["hp8"] = 6
charset_map["koi8r"] = 7
charset_map["latin1"] = 8
charset_map["latin1"] = 8
charset_map["latin2"] = 9
charset_map["swe7"] = 10
charset_map["ascii"] = 11
charset_map["ujis"] = 12
charset_map["sjis"] = 13
charset_map["hebrew"] = 16
charset_map["tis620"] = 18
charset_map["euckr"] = 19
charset_map["koi8u"] = 22
charset_map["gb2312"] = 24
charset_map["greek"] = 25
charset_map["cp1250"] = 26
charset_map["gbk"] = 28
charset_map["latin5"] = 30
charset_map["armscii8"] = 32
charset_map["utf8"] = 33
charset_map["utf8"] = 33
charset_map["ucs2"] = 35
charset_map["cp866"] = 36
charset_map["keybcs2"] = 37
charset_map["macce"] = 38
charset_map["macroman"] = 39
charset_map["cp852"] = 40
charset_map["latin7"] = 41
charset_map["cp1251"] = 51
charset_map["cp1256"] = 57
charset_map["cp1257"] = 59
charset_map["binary"] = 63
charset_map["geostd8"] = 92
charset_map["cp932"] = 95
charset_map["eucjpms"] = 97
try:
#python 2.6
import hashlib
SHA = hashlib.sha1
except ImportError:
#python 2.5
import sha
SHA = sha.new
#import time
class ClientError(Exception):
@classmethod
def from_error_packet(cls, packet, skip = 8):
packet.skip(skip)
return cls(packet.read_bytes(packet.remaining))
class ClientLoginError(ClientError): pass
class ClientCommandError(ClientError): pass
class ClientProgrammingError(ClientError): pass
class ResultSet(object):
"""Represents the current resultset being read from a Connection.
The resultset implements an iterator over rows. A Resultset must
be iterated entirely and closed explicitly."""
STATE_INIT = 0
STATE_OPEN = 1
STATE_EOF = 2
STATE_CLOSED = 3
def __init__(self, connection, field_count):
self.state = self.STATE_INIT
self.connection = connection
self.fields = connection.reader.read_fields(field_count)
self.state = self.STATE_OPEN
def __iter__(self):
assert self.state == self.STATE_OPEN, "cannot iterate a resultset when it is not open"
for row in self.connection.reader.read_rows(self.fields):
yield row
self.state = self.STATE_EOF
def close(self, connection_close = False):
"""Closes the current resultset. Make sure you have iterated over all rows before closing it!"""
#print 'close on ResultSet', id(self.connection)
if self.state != self.STATE_EOF and not connection_close:
raise ClientProgrammingError("you can only close a resultset after it has been read entirely!")
connection = self.connection
del self.connection
del self.fields
connection._close_current_resultset(self)
self.state = self.STATE_CLOSED
class Connection(object):
"""Represents a single connection to a MySQL Database host."""
STATE_ERROR = -1
STATE_INIT = 0
STATE_CONNECTING = 1
STATE_CONNECTED = 2
STATE_CLOSING = 3
STATE_CLOSED = 4
def __init__(self):
self.state = self.STATE_INIT
self.buffer = Buffer(1024 * 16)
self.socket = None
self.reader = None
self.writer = None
self._time_command = False #whether to keep timing stats on a cmd
self._command_time = -1
self._incommand = False
self.current_resultset = None
def _scramble(self, password, seed):
"""taken from java jdbc driver, scrambles the password using the given seed
according to the mysql login protocol"""
stage1 = SHA(password).digest()
stage2 = SHA(stage1).digest()
md = SHA()
md.update(seed)
md.update(stage2)
#i love python :-):
return ''.join(map(chr, [x ^ ord(stage1[i]) for i, x in enumerate(map(ord, md.digest()))]))
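#Equivalently (MySQL 4.1+ auth): the token sent to the server is
#SHA1(seed + SHA1(SHA1(password))) XORed byte-wise with SHA1(password).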
def _handshake(self, user, password, database, charset):
"""performs the mysql login handshake"""
#init buffer for reading (both pos and lim = 0)
self.buffer.clear()
self.buffer.flip()
#read server welcome
packet = self.reader.read_packet()
self.protocol_version = packet.read_byte() #normally this would be 10 (0xa)
if self.protocol_version == 0xff:
#error on initial greeting, possibly too many connection error
raise ClientLoginError.from_error_packet(packet, skip = 2)
elif self.protocol_version == 0xa:
pass #expected
else:
assert False, "Unexpected protocol version %02x" % self.protocol_version
self.server_version = packet.read_bytes_until(0)
packet.skip(4) #thread_id
scramble_buff = packet.read_bytes(8)
packet.skip(1) #filler
server_caps = packet.read_short()
#CAPS.dbg(server_caps)
if not server_caps & CAPS.PROTOCOL_41:
assert False, "<4.1 auth not supported"
server_language = packet.read_byte()
server_status = packet.read_short()
packet.skip(13) #filler
if packet.remaining:
scramble_buff += packet.read_bytes_until(0)
else:
assert False, "<4.1 auth not supported"
client_caps = server_caps
#always turn off compression
client_caps &= ~CAPS.COMPRESS
client_caps &= ~CAPS.NO_SCHEMA
if not server_caps & CAPS.CONNECT_WITH_DB and database:
assert False, "initial db given but not supported by server"
if server_caps & CAPS.CONNECT_WITH_DB and not database:
client_caps &= ~CAPS.CONNECT_WITH_DB
#build and write our answer to the initial handshake packet
self.writer.clear()
self.writer.start()
self.writer.write_int(client_caps)
self.writer.write_int(1024 * 1024 * 32) #32mb max packet
if charset:
self.writer.write_byte(charset_map[charset.replace("-", "")])
else:
self.writer.write_byte(server_language)
self.writer.write_bytes('\0' * 23) #filler
self.writer.write_bytes(user + '\0')
if password:
self.writer.write_byte(20)
self.writer.write_bytes(self._scramble(password, scramble_buff))
else:
self.writer.write_byte(0)
if database:
self.writer.write_bytes(database + '\0')
self.writer.finish(1)
self.writer.flush()
#read final answer from server
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
if result == 0xff:
raise ClientLoginError.from_error_packet(packet)
elif result == 0xfe:
assert False, "old password handshake not implemented"
def _close_current_resultset(self, resultset):
assert resultset == self.current_resultset
self.current_resultset = None
def _send_command(self, cmd, cmd_text):
"""sends a command with the given text"""
#self.log.debug('cmd %s %s', cmd, cmd_text)
#note: we are not using normal writer.start/finish here, because the cmd
#might not fit in the buffer, causing flushes in write_string; in that case 'finish'
#would not be able to go back to the header of the packet to write the length
self.writer.clear()
self.writer.write_header(len(cmd_text) + 1 + 4, 0) #1 is len of cmd, 4 is len of header, 0 is packet number
self.writer.write_byte(cmd)
self.writer.write_bytes(cmd_text)
self.writer.flush()
def _close(self):
#self.log.debug("close mysql client %s", id(self))
try:
self.state = self.STATE_CLOSING
if self.current_resultset:
self.current_resultset.close(True)
self.socket.close()
self.state = self.STATE_CLOSED
except:
self.state = self.STATE_ERROR
raise
def connect(self, host = "localhost", port = 3306, user = "", passwd = "", db = "", autocommit = None, charset = None, use_unicode=False):
"""connects to the given host and port with user and passwd"""
#self.log.debug("connect mysql client %s %s %s %s %s", id(self), host, port, user, passwd)
try:
#print 'connect', host, user, passwd, db
#parse addresses of form str <host:port>
if type(host) == str:
if host[0] == '/': #assume unix domain socket
addr = host
elif ':' in host:
host, port = host.split(':')
port = int(port)
addr = (host, port)
else:
addr = (host, port)
assert self.state == self.STATE_INIT, "make sure connection is not already connected or closed"
self.state = self.STATE_CONNECTING
- self.socket = socket.create_connection(addr, timeout=3)
+ self.socket = socket.create_connection(addr)
self.reader = BufferedPacketReader(self.socket, self.buffer)
self.writer = BufferedPacketWriter(self.socket, self.buffer)
self._handshake(user, passwd, db, charset)
#handshake complete client can now send commands
self.state = self.STATE_CONNECTED
if autocommit == False:
self.set_autocommit(False)
elif autocommit == True:
self.set_autocommit(True)
else:
pass #whatever is the default of the db (ON in the case of mysql)
if charset is not None:
self.set_charset(charset)
self.set_use_unicode(use_unicode)
return self
#except TimeoutError: TODO
# self.state = self.STATE_INIT
# raise
except ClientLoginError:
self.state = self.STATE_INIT
raise
except:
self.state = self.STATE_ERROR
raise
def close(self):
"""close this connection"""
assert self.is_connected(), "make sure connection is connected before closing"
if self._incommand != False: assert False, "cannot close while still in a command"
self._close()
def command(self, cmd, cmd_text):
"""sends a COM_XXX command with the given text and possibly return a resultset (select)"""
#print 'command', cmd, repr(cmd_text), type(cmd_text)
assert type(cmd_text) == str #as opposed to unicode
assert self.is_connected(), "make sure connection is connected before query"
if self._incommand != False: assert False, "overlapped commands not supported"
if self.current_resultset: assert False, "overlapped commands not supported, please read the previous resultset and close it"
try:
self._incommand = True
if self._time_command:
start_time = time.time()
self._send_command(cmd, cmd_text)
#read result, expect 1 of OK, ERROR or result set header
self.buffer.flip()
packet = self.reader.read_packet()
result = packet.read_byte()
#print 'res', result
if self._time_command:
end_time = time.time()
self._command_time = end_time - start_time
if result == 0x00:
#OK, return (affected rows, last row id)
rowcount = self.reader.read_length_coded_binary()
lastrowid = self.reader.read_length_coded_binary()
return (rowcount, lastrowid)
elif result == 0xff:
raise ClientCommandError.from_error_packet(packet)
else: #result set
self.current_resultset = ResultSet(self, result)
return self.current_resultset
finally:
self._incommand = False
def is_connected(self):
return self.state == self.STATE_CONNECTED
def query(self, cmd_text):
"""Sends a COM_QUERY command with the given text and return a resultset (select)"""
return self.command(COMMAND.QUERY, cmd_text)
def init_db(self, cmd_text):
"""Sends a COM_INIT command with the given text"""
return self.command(COMMAND.INITDB, cmd_text)
def set_autocommit(self, commit):
"""Sets autocommit setting for this connection. True = on, False = off"""
self.command(COMMAND.QUERY, "SET AUTOCOMMIT = %s" % ('1' if commit else '0'))
def commit(self):
"""Commits this connection"""
self.command(COMMAND.QUERY, "COMMIT")
def rollback(self):
"""Issues a rollback on this connection"""
self.command(COMMAND.QUERY, "ROLLBACK")
def set_charset(self, charset):
"""Sets the charset for this connections (used to decode string fields into unicode strings)"""
self.reader.reader.encoding = charset
def set_use_unicode(self, use_unicode):
self.reader.reader.use_unicode = use_unicode
def set_time_command(self, time_command):
self._time_command = time_command
def get_command_time(self):
return self._command_time
Connection.log = logging.getLogger(Connection.__name__)
def connect(*args, **kwargs):
return Connection().connect(*args, **kwargs)
|
mthurlin/gevent-MySQL
|
20cefac45802e2bfe817c9ef488f94a7efe01c19
|
Fixed reference to TaskletExit
|
diff --git a/lib/geventmysql/__init__.py b/lib/geventmysql/__init__.py
index e80c251..0031521 100644
--- a/lib/geventmysql/__init__.py
+++ b/lib/geventmysql/__init__.py
@@ -1,243 +1,244 @@
# Copyright (C) 2009, Hyves (Startphone Ltd.)
#
# This module is part of the Concurrence Framework and is released under
# the New BSD License: http://www.opensource.org/licenses/bsd-license.php
#this is a dbapi/mysqldb compatible wrapper around the lowlevel
#client in client.py
#TODO weak ref on connection in cursor
import sys
import logging
import exceptions
import gevent
+TaskletExit = gevent.GreenletExit
from datetime import datetime, date
from geventmysql import client
threadsafety = 1
apilevel = "2.0"
paramstyle = "format"
default_charset = sys.getdefaultencoding()
class Error(exceptions.StandardError): pass
class Warning(exceptions.StandardError): pass
class InterfaceError(Error): pass
class DatabaseError(Error): pass
class InternalError(DatabaseError): pass
class OperationalError(DatabaseError): pass
class ProgrammingError(DatabaseError): pass
class IntegrityError(DatabaseError): pass
class DataError(DatabaseError): pass
class NotSupportedError(DatabaseError): pass
class TimeoutError(DatabaseError): pass
class Cursor(object):
log = logging.getLogger('Cursor')
def __init__(self, connection):
self.connection = connection
self.result = None
self.closed = False
self._close_result()
def _close_result(self):
#make sure any previous resultset is closed correctly
if self.result is not None:
#make sure any left over resultset is read from the db, otherwise
#the connection would be in an inconsistent state
try:
while True:
self.result_iter.next()
except StopIteration:
pass #done
self.result.close()
self.description = None
self.result = None
self.result_iter = None
self.lastrowid = None
self.rowcount = -1
def _escape_string(self, s):
"""take from mysql src code:"""
#TODO how fast is this?, do this in C/pyrex?
escaped = []
for ch in s:
if ch == '\0':
escaped.append('\\0')
elif ch == '\n':
escaped.append('\\n')
elif ch == '\r':
escaped.append('\\r')
elif ch == '\\':
escaped.append('\\\\')
elif ch == "'": #single quote
escaped.append("\\'")
elif ch == '"': #double quote
escaped.append('\\"')
elif ch == '\x1a': #EOF on windows
escaped.append('\\Z')
else:
escaped.append(ch)
return ''.join(escaped)
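#Hedged example: _escape_string("a'b\n") -> "a\\'b\\n", so the value can be
#embedded safely between single quotes in the query text.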
def _wrap_exception(self, e, msg):
self.log.exception(msg)
if isinstance(e, gevent.Timeout):
return TimeoutError(msg + ': ' + str(e))
else:
return Error(msg + ': ' + str(e))
def execute(self, qry, args = []):
#print repr(qry), repr(args), self.connection.charset
if self.closed:
raise ProgrammingError('this cursor is already closed')
if type(qry) == unicode:
#we will only communicate in 8-bits with mysql
qry = qry.encode(self.connection.charset)
try:
self._close_result() #close any previous result if needed
#substitute arguments
for arg in args:
if type(arg) == str:
qry = qry.replace('%s', "'%s'" % self._escape_string(arg), 1)
elif type(arg) == unicode:
qry = qry.replace('%s', "'%s'" % self._escape_string(arg).encode(self.connection.charset), 1)
elif type(arg) == int:
qry = qry.replace('%s', str(arg), 1)
elif type(arg) == long:
qry = qry.replace('%s', str(arg), 1)
elif arg is None:
qry = qry.replace('%s', 'null', 1)
elif isinstance(arg, datetime):
qry = qry.replace('%s', "'%s'" % arg.strftime('%Y-%m-%d %H:%M:%S'), 1)
elif isinstance(arg, date):
qry = qry.replace('%s', "'%s'" % arg.strftime('%Y-%m-%d'), 1)
else:
assert False, "unknown argument type: %s %s" % (type(arg), repr(arg))
result = self.connection.client.query(qry)
#process result if necessary
if isinstance(result, client.ResultSet):
self.description = tuple(((name, type_code, None, None, None, None, None) for name, type_code, charsetnr in result.fields))
self.result = result
self.result_iter = iter(result)
self.lastrowid = None
self.rowcount = -1
else:
self.rowcount, self.lastrowid = result
self.description = None
self.result = None
except TaskletExit:
raise
except Exception, e:
raise self._wrap_exception(e, "an error occurred while executing qry %s" % (qry, ))
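#Hedged usage sketch (host/user/table names are assumptions, not from this file):
#  conn = connect(host='127.0.0.1:3306', user='app', passwd='secret', db='test')
#  cur = conn.cursor()
#  cur.execute('SELECT name FROM users WHERE id = %s', [42])
#  row = cur.fetchone()
#  cur.close()
#  conn.close()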
def fetchall(self):
try:
return list(self.result_iter)
except TaskletExit:
raise
except Exception, e:
raise self._wrap_exception(e, "an error occurred while fetching results")
def fetchone(self):
try:
return self.result_iter.next()
except StopIteration:
return None
except TaskletExit:
raise
except Exception, e:
raise self._wrap_exception(e, "an error occurred while fetching results")
def close(self):
if self.closed:
raise ProgrammingError("cannot close cursor twice")
try:
self._close_result()
self.closed = True
except TaskletExit:
raise
except Exception, e:
raise self._wrap_exception(e, "an error occurred while closing cursor")
class Connection(object):
def __init__(self, *args, **kwargs):
self.kwargs = kwargs.copy()
if not 'autocommit' in self.kwargs:
#we set autocommit explicitly to OFF as required by python db api, because default of mysql would be ON
self.kwargs['autocommit'] = False
else:
pass #user specified explicitly what they wanted for autocommit
if 'charset' in self.kwargs:
self.charset = self.kwargs['charset']
if 'use_unicode' in self.kwargs and self.kwargs['use_unicode'] == True:
pass #charset stays in args, and triggers unicode output in low-level client
else:
del self.kwargs['charset']
else:
self.charset = default_charset
self.client = client.Connection() #low level mysql client
self.client.connect(*args, **self.kwargs)
self.closed = False
def close(self):
#print 'dbapi Connection close'
if self.closed:
raise ProgrammingError("cannot close connection twice")
try:
self.client.close()
del self.client
self.closed = True
except TaskletExit:
raise
except Exception, e:
msg = "an error occurred while closing connection: "
self.log.exception(msg)
raise Error(msg + str(e))
def cursor(self):
if self.closed:
raise ProgrammingError("this connection is already closed")
return Cursor(self)
def get_server_info(self):
return self.client.server_version
def rollback(self):
self.client.rollback()
def commit(self):
self.client.commit()
@property
def socket(self):
return self.client.socket
def connect(*args, **kwargs):
return Connection(*args, **kwargs)
Connect = connect
|
JoseBlanca/psubprocess
|
c824ad843fc1dd045cb4e3fd0deddae312ac870c
|
bugfix: config_job_file closed before using it
|
diff --git a/psubprocess/condor_runner.py b/psubprocess/condor_runner.py
index dc7abab..584d4a3 100644
--- a/psubprocess/condor_runner.py
+++ b/psubprocess/condor_runner.py
@@ -1,334 +1,335 @@
'''The main aim of this module is to provide an easy way to launch condor jobs.
Condor is a specialized workload management system for compute-intensive jobs.
Like other full-featured batch systems, Condor provides a job queueing
mechanism, scheduling policy, priority scheme, resource monitoring, and
resource management. More on condor on its web site:
http://www.cs.wisc.edu/condor/
The interface used is similar to the subprocess.Popen one.
Besides the standard parameters like cmd, stdout, stderr, and stdin, this condor
Popen takes a couple of extra parameters, cmd_def and runner_conf. The cmd_def
syntax is explained in the streams.py file. Condor Popen needs the cmd_def to
be able to work out from the cmd which files are the inputs and outputs. The
input files should be specified in the condor job file when we want to
transfer them to the computing nodes. Also, the input and output files in the
cmd should have no paths; otherwise the command would fail on the other
machines. That's why we need cmd_def.
Created on 14/07/2009
'''
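#Hedged usage sketch (file names and cmd_def are assumptions; the cmd_def
#syntax is documented in streams.py):
#  from psubprocess.condor_runner import Popen
#  out = open('output.txt', 'w')
#  job = Popen(['my_tool', 'input.txt'], cmd_def=my_cmd_def, stdout=out)
#  job.wait()                 #blocks via condor_wait until the job finishes
#  print job.returncode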
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from tempfile import NamedTemporaryFile
import subprocess, signal, os.path
from subprocess import Popen as PythonPopen
from psubprocess.streams import get_streams_from_cmd
def call(cmd):
'It calls a command and it returns stdout, stderr and retcode'
def subprocess_setup():
''' Python installs a SIGPIPE handler by default. This is usually not
what non-Python subprocesses expect. Taken from this url:
http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009/07/02#
2009-07-02-python-sigpipe'''
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
process = PythonPopen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=subprocess_setup)
stdout, stderr = process.communicate()
retcode = process.returncode
return stdout, stderr, retcode
def write_condor_job_file(fhand, parameters):
'It writes a condor job file using the given fhand'
to_print = 'Executable = %s\nArguments = "%s"\nUniverse = vanilla\n' % \
(parameters['executable'], parameters['arguments'])
to_print += 'Log = %s\n' % parameters['log_file'].name
if parameters['transfer_files']:
to_print += 'When_to_transfer_output = ON_EXIT\n'
to_print += 'Getenv = True\n'
if ('transfer_executable' in parameters and
parameters['transfer_executable']):
to_print += 'Transfer_executable = %s\n' % \
parameters['transfer_executable']
if 'input_fnames' in parameters and parameters['input_fnames']:
ins = ','.join(parameters['input_fnames'])
to_print += 'Transfer_input_files = %s\n' % ins
if parameters['transfer_files']:
to_print += 'Should_transfer_files = IF_NEEDED\n'
if 'requirements' in parameters:
to_print += "Requirements = %s\n" % parameters['requirements']
if 'stdout' in parameters:
to_print += 'Output = %s\n' % parameters['stdout'].name
if 'stderr' in parameters:
to_print += 'Error = %s\n' % parameters['stderr'].name
if 'stdin' in parameters:
to_print += 'Input = %s\n' % parameters['stdin'].name
to_print += 'Queue\n'
fhand.write(to_print)
fhand.flush()
- fhand.close()
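#For illustration, the submit file produced above looks roughly like:
#  Executable = /usr/bin/my_tool
#  Arguments = "input.txt"
#  Universe = vanilla
#  Log = /tmp/tmpXYZ.log
#  When_to_transfer_output = ON_EXIT
#  Getenv = True
#  Transfer_input_files = input.txt
#  Should_transfer_files = IF_NEEDED
#  Queue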
class Popen(object):
'''It launches and controls a condor job.
The job is launched when an instance is created. After that we can get the
cluster id with the .pid property. The rest of the interface is very similar
to the subprocess.Popen one. There's no communicate method because there's
no support for PIPE.
'''
def __init__(self, cmd, cmd_def=None, runner_conf=None, stdout=None,
stderr=None, stdin=None):
'''It launches a condor job.
The interface is similar to the subprocess.Popen one, although there are
some differences.
stdout, stdin and stderr should be file handlers; there's no support for
PIPEs. The extra parameter cmd_def is required if we need to transfer
the input and output files to the computing nodes of the cluster using
the condor file transfer mechanism. The cmd_def syntax is explained in
the streams.py file.
runner_conf is a dict that admits several parameters that control how
condor is run:
- transfer_files: do we want to transfer the files using the condor
transfer file mechanism? (default True)
- condor_log: the condor log file. If it's not given Popen will
create a condor log file in the tempdir.
- transfer_executable: do we want to transfer the executable?
(default False)
- requirements: The requirements line for the condor job file.
(default None)
'''
#we use the same parameters as subprocess.Popen
#pylint: disable-msg=R0913
if cmd_def is None:
cmd_def = []
#runner conf
if runner_conf is None:
runner_conf = {}
#some defaults
if 'transfer_files' not in runner_conf:
runner_conf['transfer_files'] = True
if 'condor_log' not in runner_conf:
self._log_file = NamedTemporaryFile(suffix='.log')
self._log_file.close()
else:
self._log_file = runner_conf['condor_log']
#print 'condor_log', self._log_file
#create condor job file
condor_job_file = self._create_condor_job_file(cmd, cmd_def,
self._log_file,
runner_conf,
stdout, stderr, stdin)
self._condor_job_file = condor_job_file
#print open(condor_job_file.name).read()
#launch condor
self._retcode = None
self._cluster_number = None
#print 'launching'
self._launch_condor(condor_job_file)
#print 'launched'
+ # close the created job file
+ condor_job_file.close()
def _launch_condor(self, condor_job_file):
'Given the condor_job_file it launches the condor job'
try:
stdout, stderr, retcode = call(['condor_submit',
condor_job_file.name])
except OSError, msg:
raise OSError('condor_submit not found in your path.' + str(msg))
if retcode:
msg = 'There was a problem with condor_submit: ' + stderr
raise RuntimeError(msg)
#the condor cluster number is given by condor_submit
#1 job(s) submitted to cluster 15.
for line in stdout.splitlines():
if 'submitted to cluster' in line:
self._cluster_number = line.strip().strip('.').split()[-1]
def _get_pid(self):
'It returns the condor cluster number'
return self._cluster_number
pid = property(_get_pid)
def _get_returncode(self):
'It returns the return code'
return self._retcode
returncode = property(_get_returncode)
@staticmethod
def _remove_paths_from_cmd(cmd, streams, conf):
'''It removes the absolute and relative paths from the cmd,
it returns the modified cmd'''
cmd_mod = cmd[:]
for stream in streams:
if 'fname' not in stream:
continue
fpath = stream['fname']
#for the output files we can't deal with transferring files with
#paths. Condor will deliver those files into the initialdir, not
#where we expected.
if (stream['io'] != 'in' and conf['transfer_files']
and os.path.split(fpath)[-1] != fpath):
msg = 'output files with paths are not transferable'
raise ValueError(msg)
index = cmd_mod.index(fpath)
fpath = os.path.split(fpath)[-1]
cmd_mod[index] = fpath
return cmd_mod
def _create_condor_job_file(self, cmd, cmd_def, log_file, runner_conf,
stdout, stderr, stdin):
'Given a cmd and the cmd_def it returns the condor job file'
#streams
streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
#we need some parameters to write the condor file
parameters = {}
#the executable
binary = cmd[0]
#the binary should be an absolute path
if not os.path.isabs(binary):
#the path to the binary could be relative
if os.sep in binary:
#we make the path absolute
binary = os.path.abspath(binary)
else:
#we have to look in the system $PATH
binary = call(['which', binary])[0].strip()
parameters['executable'] = binary
parameters['log_file'] = log_file
#the cmd shouldn't have absolute paths in the files because they will be
#transferred to another node in the condor working dir and they wouldn't
#be found with an absolute path
cmd_no_path = self._remove_paths_from_cmd(cmd, streams, runner_conf)
parameters['arguments'] = ' '.join(cmd_no_path[1:])
if stdout is not None:
parameters['stdout'] = stdout
if stderr is not None:
parameters['stderr'] = stderr
if stdin is not None:
parameters['stdin'] = stdin
transfer_bin = False
if 'transfer_executable' in runner_conf:
transfer_bin = runner_conf['transfer_executable']
parameters['transfer_executable'] = transfer_bin
transfer_files = runner_conf['transfer_files']
parameters['transfer_files'] = str(transfer_files)
if 'requirements' in runner_conf:
parameters['requirements'] = runner_conf['requirements']
in_fnames = []
for stream in streams:
if stream['io'] == 'in':
fname = None
if 'fname' in stream:
fname = stream['fname']
else:
fname = stream['fhand'].name
in_fnames.append(fname)
parameters['input_fnames'] = in_fnames
#now we can create the job file
condor_job_file = NamedTemporaryFile()
write_condor_job_file(condor_job_file, parameters=parameters)
return condor_job_file
def _update_retcode(self):
'It updates the retcode by looking at the log file and returns it'
for line in open(self._log_file.name):
if 'return value' in line:
ret = line.split('return value')[1].strip().strip(')')
self._retcode = int(ret)
return self._retcode
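#note: a condor user log reports termination with a line roughly like
#'... Normal termination (return value 0)' (format assumed from typical
#condor logs); the parsing above relies on that 'return value' marker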
def poll(self):
'It checks if condor has run our condor cluster'
cluster_number = self._cluster_number
cmd = ['condor_q', cluster_number,
'-format', '"%d.\n"', 'ClusterId']
stdout, stderr, retcode = call(cmd)
if retcode:
msg = 'There was a problem with condor_q: ' + stderr
raise RuntimeError(msg)
if cluster_number not in stdout:
#the job is finished
return self._update_retcode()
return self._retcode
def wait(self):
'It waits until the condor job is finished'
try:
stderr, retcode = call(['condor_wait', self._log_file.name])[1:]
except OSError:
raise OSError('condor_wait not found in your path')
if retcode:
msg = 'There was a problem with condor_wait: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def kill(self):
'It runs condor_rm for the condor job'
try:
stderr, retcode = call(['condor_rm', self.pid])[1:]
except OSError:
raise OSError('condor_rm not found in your path')
if retcode:
msg = 'There was a problem with condor_rm: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def terminate(self):
'It runs condor_rm for the condor job'
self.kill()
def get_default_splits():
'It returns a suggested number of splits for this Popen runner'
try:
stdout, stderr, retcode = call(['condor_status', '-total'])
except OSError:
raise OSError('condor_status not found in your path')
if retcode:
msg = 'There was a problem with condor_status: ' + stderr
raise RuntimeError(msg)
for line in stdout.splitlines():
line = line.strip().lower()
if line.startswith('total') and 'owner' not in line:
return int(line.split()[1]) * 2
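#A minimal usage sketch for this runner (it assumes a working condor pool;
#the command and file are illustrative, not part of the module):
#
# from tempfile import NamedTemporaryFile
# from psubprocess.condor_runner import Popen
# stdout = NamedTemporaryFile()
# popen = Popen(['/bin/echo', 'hola'], stdout=stdout,
#               runner_conf={'transfer_executable': False})
# retcode = popen.wait()    #blocks on condor_wait until the job finishes
# print open(stdout.name).read()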
diff --git a/test/condor_runner_test.py b/test/condor_runner_test.py
index 2c65cc2..106e3d7 100644
--- a/test/condor_runner_test.py
+++ b/test/condor_runner_test.py
@@ -1,196 +1,196 @@
'''
Created on 14/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile, mkstemp
import os
from psubprocess.condor_runner import (write_condor_job_file, Popen,
get_default_splits, call)
from test_utils import create_test_binary
class CondorRunnerTest(unittest.TestCase):
'It tests the condor runner'
@staticmethod
def test_write_condor_job_file():
'It tests that we can write a condor job file with the right parameters'
fhand1 = NamedTemporaryFile()
fhand2 = NamedTemporaryFile()
flog = NamedTemporaryFile()
stderr_ = NamedTemporaryFile()
stdout_ = NamedTemporaryFile()
stdin_ = NamedTemporaryFile()
expected = '''Executable = /bin/ls
Arguments = "-i %s -j %s"
Universe = vanilla
Log = %s
When_to_transfer_output = ON_EXIT
Getenv = True
Transfer_executable = True
Transfer_input_files = %s,%s
Should_transfer_files = IF_NEEDED
Output = %s
Error = %s
Input = %s
Queue
''' % (fhand1.name, fhand2.name, flog.name, fhand1.name, fhand2.name,
stdout_.name, stderr_.name, stdin_.name)
fhand = open(mkstemp()[1], 'w')
parameters = {'executable':'/bin/ls', 'log_file':flog,
'input_fnames':[fhand1.name, fhand2.name],
'arguments':'-i %s -j %s' % (fhand1.name, fhand2.name),
'transfer_executable':True, 'transfer_files':True,
'stdout':stdout_, 'stderr':stderr_, 'stdin':stdin_}
write_condor_job_file(fhand, parameters=parameters)
condor = open(fhand.name).read()
assert condor == expected
os.remove(fhand.name)
@staticmethod
def test_run_condor_stdout():
'It tests that we can run a condor job and retrieve stdout and stderr'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
assert open(stderr.name).read() == 'caracola'
os.remove(bin)
@staticmethod
def test_run_condor_stdin():
'It tests that we can run a condor job with stdin'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-s'])
stdin = NamedTemporaryFile()
stdout = NamedTemporaryFile()
stdin.write('hola')
stdin.flush()
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stdin=stdin)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
os.remove(bin)
@staticmethod
def test_run_condor_retcode():
'It tests that we can run a condor job and get the retcode'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-r', '10'])
popen = Popen(cmd, runner_conf={'transfer_executable':True})
assert popen.wait() == 10 #waits till it finishes and checks the retcode
os.remove(bin)
@staticmethod
def test_run_condor_in_file():
'It tests that we can run a condor job with an input file'
bin = create_test_binary()
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'}]
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
os.remove(bin)
def test_run_condor_in_out_file(self):
'It tests that we can run a condor job with an output file'
bin = create_test_binary()
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
out_file = open('output.txt', 'w')
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr, cmd_def=cmd_def)
popen.wait()
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(out_file.name).read() == 'hola'
os.remove(out_file.name)
#an output file with a path won't be allowed when the transfer file
#mechanism is used
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'},
{'options': ('-t', '--output'), 'io': 'out'}]
try:
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr, cmd_def=cmd_def)
self.fail('ValueError expected')
#pylint: disable-msg=W0704
except ValueError:
pass
os.remove(bin)
@staticmethod
def test_default_splits():
'It tests that we can get a suggested number of splits'
assert get_default_splits() > 0
assert isinstance(get_default_splits(), int)
@staticmethod
def test_run_condor_kill():
'It tests that we can kill a condor job'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-w'])
popen = Popen(cmd, runner_conf={'transfer_executable':True})
pid = str(popen.pid)
popen.kill()
stdout = call(['condor_q', pid])[0]
assert pid not in stdout
os.remove(bin)
if __name__ == "__main__":
- #import sys;sys.argv = ['', 'CondorRunnerTest.test_write_condor_job_file']
- unittest.main()
\ No newline at end of file
+ #import sys;sys.argv = ['', 'CondorRunnerTest.test_run_condor_stdout']
+ unittest.main()
|
JoseBlanca/psubprocess
|
020a56c487cc159e7375d5e7a90bf874d534bb43
|
Added bam splitter and joiner
|
diff --git a/psubprocess/bam.py b/psubprocess/bam.py
index ed28468..f1ec791 100644
--- a/psubprocess/bam.py
+++ b/psubprocess/bam.py
@@ -1,53 +1,91 @@
'''
Utils to split and join bams
Created on 06/09/2010
@author: peio
'''
-from psubprocess.utils import call
+from psubprocess.utils import call, get_fhand
+from tempfile import NamedTemporaryFile
-def bam2sam(bam_fhand, sam_fhand):
+def bam2sam(bam_fhand, sam_fhand, header=False):
'''It converts between bam and sam.'''
bam_fhand.seek(0)
+
cmd = ['samtools', 'view', bam_fhand.name, '-o', sam_fhand.name]
+ if header:
+ cmd.insert(2, '-h')
call(cmd, raise_on_error=True)
+ sam_fhand.flush()
def sam2bam(sam_fhand, bam_fhand, header=None):
'It converts between bam and sam.'
sam_fhand.seek(0)
if header is not None:
pass
cmd = ['samtools', 'view', '-bSh', '-o', bam_fhand.name, sam_fhand.name]
call(cmd, raise_on_error=True)
+ bam_fhand.flush()
def get_bam_header(bam_fhand, header_fhand):
'It gets the header of the bam'
cmd = ['samtools', 'view', '-H', bam_fhand.name, '-o', header_fhand.name]
call(cmd, raise_on_error=True)
def bam_unigene_counter(fhand, expression=None):
'It counts the number of unigenes in a bam'
unigenes = set()
for line in fhand:
unigene = line.split()[2]
unigenes.add(unigene)
return len(unigenes)
def unigenes_in_bam(fhand, expression=None):
'It yields the bam mappings grouped by unigene'
unigene_prev = None
unigene_lines = ''
for line in fhand:
unigene = line.split()[2]
if unigene_prev is not None and unigene_prev != unigene:
yield unigene_lines
unigene_lines = ''
unigene_lines += line
unigene_prev = unigene
else:
yield unigene_lines
+def bam_joiner(out_file, in_files):
+ 'It joins bam files'
+ #are we working with fhands or fnames?
+ out_fhand = get_fhand(out_file, writable=True)
+ sam_fhand = NamedTemporaryFile(suffix='.sam')
+
+ first = True
+ for file_ in in_files:
+ file_ = get_fhand(file_)
+ if first:
+ first = False
+ bam2sam(file_, sam_fhand, header=True)
+
+ else:
+ sam_fhand_temp = NamedTemporaryFile(suffix='.sam')
+ bam2sam(file_, sam_fhand_temp)
+ sam_fhand_temp.seek(0)
+ sam_fhand2 = open(sam_fhand.name, 'a')
+ sam_fhand2.write(open(sam_fhand_temp.name).read())
+ sam_fhand2.close()
+
+ sam_fhand.flush()
+ sam2bam(sam_fhand, out_fhand)
+ out_fhand.flush()
+
+
+
+
+
+
+
+
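+#A usage sketch for bam_joiner (paths are hypothetical; the sam/bam
+#conversions shell out to samtools, so it must be in the PATH):
+#
+# bam_joiner('joined.bam', ['part1.bam', 'part2.bam'])
+#
+#The first input contributes its header; the rest are appended as
+#headerless sam text before the final sam2bam conversion.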
diff --git a/psubprocess/data/seq.bam b/psubprocess/data/seq.bam
new file mode 100644
index 0000000..2197e55
Binary files /dev/null and b/psubprocess/data/seq.bam differ
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index 2f5ba71..3c3ce7d 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,496 +1,507 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
This module is useful when we have a non-parallel command to run in a
multiprocessor computer or in a multinode cluster. It will take the input files,
split them and run a subjob for every one of the splits. It will
wait for the subjobs to finish and join the output files generated
by all subjobs. At the end of the process we will get the same output files
as if the command hadn't been run in parallel.
This approach will work with commands that process a lot of items. This module
divides the items into several sets and assigns each of these sets to a new
subjob. These are the subjobs that will be run in parallel.
To do this it requires the parameters used by Popen: cmd, stdin, stdout, stderr
and some extra information: runner, splits and cmd_def.
runner is optional and it should be a subprocess.Popen-like class. If it's not
given, subprocess.Popen will be used. This Popen will be the class used to run
the subjobs. If subprocess.Popen is used the subjobs will run on the processors
of the local node as several independent processes. If the Condor Popen is used
the subjobs will run in a condor cluster.
splits is the number of subjobs that we want to generate. If it's not given the
runner will provide a suitable number.
cmd_def is a list that defines which items of the cmd are the input and output
files. We need to tell Popen which are the input and output files in order to
split and join them. The syntax for cmd_def is explained in the streams.py
module.
'''
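#A minimal usage sketch (command and file names are hypothetical): split
#input.txt line by line (the empty-string regex splitter), run one subjob
#per split on the local processors and concatenate the partial outputs:
#
# from psubprocess import Popen
# cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter': ''},
#            {'options': ('-o', '--output'), 'io': 'out'}]
# popen = Popen(['my_tool', '-i', 'input.txt', '-o', 'output.txt'],
#               cmd_def=cmd_def)
# retcode = popen.wait()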
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
import os, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
from psubprocess.condor_runner import call
from psubprocess import condor_runner
from psubprocess.splitters import (get_splitter,
create_non_splitter_splitter)
from psubprocess.utils import NamedTemporaryDir, copy_file_mode
from psubprocess.cmd_def_from_cmd import get_cmd_def_from_cmd
+from psubprocess.bam import bam_joiner
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
class Popen(object):
'''It paralellizes the given processes dividing them into subprocesses.
The interface is similar to subprocess.Popen to ease the use of this class,
although the functionality of this class is much more limited.
When an instance of this class is created a series of subjobs is launched.
When all subjobs are finished returncode will have an int, if they're still
running returncode will be None.
We can wait for all subjobs to finish using the wait method or we can
kill or terminate them using kill and terminate.
'''
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
'''It inits a Popen instance; it creates and runs the subjobs.
Like the subprocess.Popen it accepts stdin, stdout, stderr, but in this
case all of them should be files, PIPE will not work.
In the cmd_def list we have to tell this Popen how to locate the
input and output files in the cmd and how to split and join them. Look
for the cmd_format in the streams.py file.
keyword arguments:
cmd -- a list with the cmd to parallelize
cmd_def -- the cmd definition list (default [])
runner -- which runner to use (default subprocess.Popen)
runner_conf -- extra parameters for the runner (default {})
stdout -- a fhand to store the stdout (default None)
stderr -- a fhand to store the stderr (default None)
stdin -- a fhand with the stdin (default None)
splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
#is the cmd_def set in the command?
cmd, cmd_cmd_def = get_cmd_def_from_cmd(cmd)
if cmd_cmd_def:
cmd_def = cmd_cmd_def
elif not cmd_def:
cmd_def = []
if not cmd_def and stdin is not None:
raise ValueError('No cmd_def given but stdin present')
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
copy_file_mode('.', self._work_dir.name)
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
#launch every subjobs
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we have to be sure that stdout and stderr are open for write
if stdout:
stdout = open(stdout.name, 'w')
if stderr:
stderr = open(stderr.name, 'w')
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
#we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
'''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
'''Given a base cmd and a streams list it creates one modified cmd for
every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
#is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname and no path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream in the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
dir_ = NamedTemporaryDir(dir=work_dir)
work_dirs.append(dir_)
copy_file_mode('.', dir_.name)
#we have to do first the input files because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def do_we_have_to_split(stream_index):
'If the stream has to split a file it will return True'
split = None
stream = streams[stream_index]
#maybe they shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
split = False
#maybe there is no file to split
if (('fhand' in stream and stream['fhand'] is None) or
('fname' in stream and stream['fname'] is None) or
('fname' not in stream and 'fhand' not in stream)):
split = False
elif (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
split = True
return split
def to_be_split_first(stream1, stream2):
'It sorts the streams, the ones to be split go first'
split1 = do_we_have_to_split(stream1)
split2 = do_we_have_to_split(stream2)
#streams to be split must sort first (cmp ascending), so invert the flags
return int(split2) - int(split1)
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
splitter = None
if 'special' in stream and 'no_split' in stream['special']:
splitter = create_non_splitter_splitter(copy_files=True)
elif 'splitter' not in stream:
msg = 'A splitter should be provided for every input stream, '
msg += 'missing for: ' + str(stream)
raise ValueError(msg)
else:
splitter = stream['splitter']
#if the splitter is a function we assume that it will know how to
#split the given file, otherwise should be a registered type of
#splitter or a regular expression
if '__call__' not in dir(splitter):
splitter = get_splitter(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
#the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different than splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
#we split the output stream files into several splits
output_splitter = create_non_splitter_splitter(copy_files=False)
for index in output_stream_indexes:
stream = streams[index]
#for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every out stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
- #we need a function to join this stream
- joiner = None
- if joiner in stream:
- joiner = stream['joiner']
- else:
- joiner = default_cat_joiner
+
+ joiner = _get_joiner(stream)
+
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
#if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support kill
if 'kill' in dir(popen):
popen.kill()
else:
pid = popen.pid
call(['kill', '-9', str(pid)])
def terminate(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support terminate
if 'terminate' in dir(popen):
popen.terminate()
else:
pid = popen.pid
call(['kill', '-6', str(pid)])
+def _get_joiner(stream):
+ 'It gets the joiner'
+ joiners = {'bam':bam_joiner}
+ if 'joiner' in stream:
+ joiner = stream['joiner']
+ else:
+ joiner = default_cat_joiner
+
+ if '__call__' not in dir(joiner):
+ joiner = joiners[joiner]
+
+ return joiner
+
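+#A joiner can be given in the cmd_def either as a callable or as a
+#registered name; currently only 'bam' maps to bam_joiner, e.g.:
+#
+# cmd_def = [{'options': ('-t', '--output'), 'io': 'out', 'joiner': 'bam'}]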
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
diff --git a/psubprocess/splitters.py b/psubprocess/splitters.py
index 1cd9f48..af15739 100644
--- a/psubprocess/splitters.py
+++ b/psubprocess/splitters.py
@@ -1,333 +1,335 @@
'''
Created on 03/12/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import re, os, shutil
from tempfile import NamedTemporaryFile
from psubprocess.utils import copy_file_mode
from psubprocess.bam import (bam2sam, sam2bam, get_bam_header,
bam_unigene_counter, unigenes_in_bam)
from Bio.SeqIO.QualityIO import FastqGeneralIterator
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
This is the best way to create as many splits as possible as similar as
possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
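#worked example: _calculate_divisions(10, 4) returns ((2, 3), (2, 2)),
#that is two splits with 3 items and two splits with 2 items
#(2 * 3 + 2 * 2 == 10), the most even division possible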
def _items_in_file(fhand, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if expression.search(line):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
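#example: with expression re.compile('^>') a file with the content
#'>a\nx\n>b\ny\n' yields two items, '>a\nx\n' and '>b\ny\n'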
def _re_item_counter(fhand, expression):
'It counts how many times the expression is found in the file'
nitems = 0
for line in fhand:
if expression.search(line):
nitems += 1
return nitems
def _items_in_fastq(fhand, expression=None):
'It returns the fastq items'
for item in FastqGeneralIterator(fhand):
yield '@%s\n%s\n+\n%s\n' % (item)
def _fastq_items_counter(fhand, expression=None):
'It counts the fastq items in the file'
nitems = 0
for item in FastqGeneralIterator(fhand):
nitems += 1
return nitems
def _blank_line_items_counter(fhand, expression=None):
'It returns the number of items separated by blank line'
nitems = 0
item_read = False
for line in fhand:
line = line.rstrip()
if line:
item_read = True
elif item_read and not line:
item_read = False
nitems += 1
return nitems
def _items_in_blank_line(fhand, expression=None):
'It returns the items separated by blank lines'
buffer_ = ''
for line in fhand:
line = line.rstrip()
if line:
buffer_ += line + '\n'
elif buffer_ and not line:
yield buffer_ + '\n'
buffer_ = ''
if buffer_:
yield buffer_ + '\n'
def _create_file_splitter(kind, expression=None):
'''Given an expression it creates a file splitter.
The expression can be a regex or an str.
The item in the file will be defined every time a line matches the
expression.
'''
item_counters = {'re': _re_item_counter,
'fastq': _fastq_items_counter,
'blank_line': _blank_line_items_counter,
'bam':bam_unigene_counter}
item_splitters = {'re':_items_in_file,
'fastq':_items_in_fastq,
'blank_line': _items_in_blank_line,
'bam':unigenes_in_bam}
preproces_funcs = {'bam':bam2sam}
postproces_funcs = {'bam':sam2bam}
header_funcs = {'bam':get_bam_header}
footer_funcs = {}
item_counter = item_counters[kind]
item_splitter = item_splitters[kind]
preprocesor = preproces_funcs[kind] if kind in preproces_funcs else None
postprocesor = postproces_funcs[kind] if kind in postproces_funcs else None
header_extractor = header_funcs[kind] if kind in header_funcs else None
footer_extractor = footer_funcs[kind] if kind in footer_funcs else None
if expression is not None and isinstance(expression, str):
expression = re.compile(expression)
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
guaranteed to create as many splits as work dirs. If in the file there
are less items than work_dirs some work_dirs will be left empty.
It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
# do we have header?
if header_extractor is not None:
header_fhand = NamedTemporaryFile()
fhand = open(fname)
header_extractor(fhand, header_fhand)
fhand.close()
else:
header_fhand = None
# do we have footer?
if footer_extractor is not None:
footer_fhand = NamedTemporaryFile()
fhand = open(fname)
footer_extractor(fhand, footer_fhand)
fhand.close()
else:
footer_fhand = None
# File preprocess
if preprocesor is not None:
suffix = os.path.splitext(fname)[-1]
preprocessed_fhand = NamedTemporaryFile(suffix=suffix)
fhand = open(fname)
preprocesor(fhand, preprocessed_fhand)
fhand.close()
fname = preprocessed_fhand.name
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
fhand = open(fname, 'r')
nitems = item_counter(fhand, expression)
#how many splits are we going to create? and how many items will be in
#every split
#if there are more splits than items we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = item_splitter(fhand, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
copy_file_mode(fhand.name, ofh.name)
# header
if header_fhand is not None:
header_fhand.seek(0)
ofh.write(header_fhand.read())
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
# footer
if footer_fhand is not None:
footer_fhand.seek(0)
ofh.write(footer_fhand.read())
#postprocess
if postprocesor is not None:
newofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
postprocesor(ofh, newofh)
ofh_path = ofh.name
ofh.close()
os.remove(ofh_path)
ofh = newofh
#we have to close the files, otherwise we can run out of open file
#descriptors in the os
if file_is_str:
new_files.append(ofh.name)
else:
new_files.append(ofh)
ofh.close()
splits_made += 1
return new_files
return splitter
fastq_splitter = _create_file_splitter(kind='fastq')
blank_line_splitter = _create_file_splitter(kind='blank_line')
bam_splitter = _create_file_splitter(kind='bam')
def create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or an str.
The item in the file will be defined every time a line matches the
expression.
'''
return _create_file_splitter(kind='re', expression=expression)
def get_splitter(expression):
'''If the expression is a known splitter kind it returns it, otherwise it
creates a regular expression based splitter'''
if expression == 'fastq':
return fastq_splitter
elif expression == 'blank_line':
return blank_line_splitter
+ elif expression == 'bam':
+ return bam_splitter
else:
return create_file_splitter_with_re(expression)
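#for instance: get_splitter('fastq') returns the fastq splitter,
#get_splitter('bam') the bam splitter, and get_splitter('^>') builds a
#regex-based splitter that cuts at every line starting with '>'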
def create_non_splitter_splitter(copy_files=False):
'''It creates a splitter function that will not split the given file.
The created splitter will create one file for every work_dir given. This
file can be empty (useful for the output streams) or a copy of the given
file (useful for the no_split input streams).
'''
def splitter(file_, work_dirs):
'''It creates one output file for every splits.
Every split will be located in one of the work_dirs.
It returns a list with the fpaths for the new files.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
new_fpaths = []
#we have to create nsplits
suffix = os.path.splitext(fname)[-1]
for split_index in range(nsplits):
work_dir = work_dirs[split_index]
#we use delete=False because this temp file is in a temp dir that
#will be completely deleted. If we use delete=True we get an error
#because the file might be already deleted when its __del__ method
#is called
ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
delete=False)
os.remove(ofh.name)
ofh_name = ofh.name
#we have to close the files, otherwise we can run out of open file
#descriptors in the os
ofh.close()
if copy_files:
#i've tried with os.symlink but condor does not like it
shutil.copyfile(fname, ofh_name)
#the file will be deleted
#what do we need the fname or the fhand?
if file_is_str:
new_fpaths.append(ofh.name)
else:
new_fpaths.append(ofh)
return new_fpaths
return splitter
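#a sketch of both modes (file and dirs are hypothetical): with
#copy_files=True every work_dir gets a copy of the input file (used for
#the no_split input streams); with copy_files=False only fresh file names
#are produced, which reserves an output name for every subjob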
diff --git a/psubprocess/utils.py b/psubprocess/utils.py
index f3d0bd6..f870651 100644
--- a/psubprocess/utils.py
+++ b/psubprocess/utils.py
@@ -1,120 +1,131 @@
'''
Created on 03/12/2009
@author: jose
'''
+import psubprocess
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import tempfile, os, shutil, signal, subprocess, logging
+DATA_DIR = os.path.join(os.path.split(psubprocess.__path__[0])[0], 'psubprocess',
+ 'data')
+
class NamedTemporaryDir(object):
'''This class creates temporary directories '''
#pylint: disable-msg=W0622
#we redefine the built-in dir because tempfile uses that interface
def __init__(self, dir=None):
'''It initiates the class.'''
self._name = tempfile.mkdtemp(dir=dir)
def get_name(self):
'Returns the path to the dir'
return self._name
name = property(get_name)
def close(self):
'''It removes the temp dir'''
if os.path.exists(self._name):
shutil.rmtree(self._name)
def __del__(self):
'''It removes the temp dir when the instance is removed and the garbage
collector decides it'''
self.close()
def copy_file_mode(fpath1, fpath2):
'It copies the os.stats mode from file1 to file2'
mode = os.stat(fpath1)[0]
os.chmod(fpath2, mode)
def call(cmd, environment=None, stdin=None, raise_on_error=False,
stdout=None, stderr=None, log=False):
'It calls a command and it returns stdout, stderr and retcode'
def subprocess_setup():
''' Python installs a SIGPIPE handler by default. This is usually not
what non-Python subprocesses expect. Taken from this url:
http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009/07/02#
2009-07-02-python-sigpipe'''
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
if stdin is None:
pstdin = None
else:
pstdin = subprocess.PIPE
if stdout is None:
stdout = subprocess.PIPE
if stderr is None:
stderr = subprocess.PIPE
#we want to inherit the environment, and modify it
if environment is not None:
new_env = {}
for key, value in os.environ.items():
new_env[key] = value
for key, value in environment.items():
new_env[key] = value
environment = new_env
if log:
logger = logging.getLogger('franklin')
logger.info('Running command: ' + ' '.join(cmd))
try:
process = subprocess.Popen(cmd, stdout=stdout, stderr=stderr,
env=environment, stdin=pstdin,
preexec_fn=subprocess_setup)
except OSError:
#if it fails let's be sure that the binary is not on the system
binary = cmd[0]
if binary is None:
raise OSError('The binary was not found: ' + cmd[0])
#let's try with an absolute path, sometimes works
cmd.pop(0)
cmd.insert(0, binary)
process = subprocess.Popen(cmd, stdout=stdout, stderr=stderr,
env=environment, stdin=pstdin,
preexec_fn=subprocess_setup)
if stdin is None:
stdout_str, stderr_str = process.communicate()
else:
stdout_str, stderr_str = process.communicate(stdin)
retcode = process.returncode
if stdout != subprocess.PIPE:
stdout.flush()
stdout.seek(0)
if stderr != subprocess.PIPE:
stderr.flush()
stderr.seek(0)
if raise_on_error and retcode:
if stdout != subprocess.PIPE:
stdout_str = open(stdout.name).read()
if stderr != subprocess.PIPE:
stderr_str = open(stderr.name).read()
msg = 'Error running command: %s\n stderr: %s\n stdout: %s' % \
(' '.join(cmd), stderr_str,
stdout_str)
raise RuntimeError(msg)
return stdout_str, stderr_str, retcode
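#usage sketch: when stdout/stderr are not given they are captured and
#returned as strings, e.g. (command is illustrative):
#
# stdout, stderr, retcode = call(['ls', '-l'])
# if retcode:
#     raise RuntimeError(stderr)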
+
+def get_fhand(file_, writable=False):
+ 'Given an fhand or an fpath it returns an fhand'
+ if isinstance(file_, basestring):
+ mode = 'w' if writable else 'r'
+ file_ = open(file_, mode)
+ return file_
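+#e.g. get_fhand('/tmp/out.txt', writable=True) opens the (hypothetical)
+#path for writing, while an already open file object is returned unchanged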
diff --git a/test/prunner_test.py b/test/prunner_test.py
index 2f05be2..9064618 100644
--- a/test/prunner_test.py
+++ b/test/prunner_test.py
@@ -1,305 +1,333 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
import os
from psubprocess import Popen
from psubprocess.streams import STDIN
+from psubprocess.utils import DATA_DIR
from test_utils import create_test_binary
class PRunnerTest(unittest.TestCase):
'It tests that we can parallelize processes'
@staticmethod
def test_file_in():
'It tests the most basic behaviour'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
in_file.close()
os.remove(bin)
@staticmethod
def test_job_no_in_stream():
'It tests that a job with no in stream is run splits times'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola' * splits
assert open(stderr.name).read() == 'caracola' * splits
os.remove(bin)
@staticmethod
def test_stdin():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_infile_outfile():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_retcode():
'It tests that we get the correct returncode'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-r', '20'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
assert popen.wait() == 20 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def xtest_infile_outfile_condor():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
from psubprocess import CondorPopen
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
runner=CondorPopen,
runner_conf={'transfer_executable':True})
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_stdin_real_splitter():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = '>hola1\nhola2\n>hola3\nhola4\n>hola5\nhola6\n>hola7\nhola8\n'
content += '>hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':'>'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_2_infile_outfile():
'It tests that we can set 2 input files and an output file'
bin = create_test_binary()
#with infile
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file1 = NamedTemporaryFile()
in_file1.write(content)
in_file1.flush()
in_file2 = NamedTemporaryFile()
in_file2.write(content)
in_file2.flush()
out_file1 = NamedTemporaryFile()
out_file2 = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file1.name, '-t', out_file1.name])
cmd.extend(['-x', in_file2.name, '-z', out_file2.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-x', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'},
{'options': ('-z', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file1.name).read() == content
assert open(out_file2.name).read() == content
in_file1.close()
in_file2.close()
os.remove(bin)
@staticmethod
def test_kill_subjobs():
'It tests that we can kill the subjobs'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-w'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
assert popen.returncode is None
popen.kill()
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def test_nosplit():
'It tests that we can set some input files to be not split'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in',
'special':['no_split']},
{'options': ('-t', '--output'), 'io': 'out'}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content * splits
in_file.close()
os.remove(bin)
@staticmethod
def test_lots_splits_outfile():
'It tests that we can set 2 input files and an output file'
bin = create_test_binary()
splits = 200
content = ['hola%d\n' % split for split in range(splits)]
content = ''.join(content)
in_file1 = NamedTemporaryFile()
in_file1.write(content)
in_file1.flush()
in_file2 = NamedTemporaryFile()
in_file2.write(content)
in_file2.flush()
out_file1 = NamedTemporaryFile()
out_file2 = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file1.name, '-t', out_file1.name])
cmd.extend(['-x', in_file2.name, '-z', out_file2.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-x', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'},
{'options': ('-z', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file1.name).read() == content
assert open(out_file2.name).read() == content
in_file1.close()
in_file2.close()
os.remove(bin)
+ @staticmethod
+ def test_bam_infile_outfile():
+ 'It tests that we can set a bam input and output file'
+ bin = create_test_binary()
+ #with infile
+ in_file = open(os.path.join(DATA_DIR, 'seq.bam'))
+ out_file = NamedTemporaryFile()
+
+ cmd = [bin]
+ cmd.extend(['-i', in_file.name, '-t', out_file.name])
+ stdout = NamedTemporaryFile()
+ stderr = NamedTemporaryFile()
+ cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':'bam'},
+ {'options': ('-t', '--output'), 'io': 'out', 'joiner':'bam'}]
+ popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
+
+
+
+ assert popen.wait() == 0 #waits till it finishes and checks the retcode
+
+# print open(out_file.name).read()
+# print out_file.name
+
+ in_file.close()
+ os.remove(bin)
+
+
if __name__ == "__main__":
- #import sys;sys.argv = ['', 'PRunnerTest.test_file_in']
+# import sys;sys.argv = ['', 'PRunnerTest.test_bam_infile_outfile']
unittest.main()
diff --git a/test/splitter_test.py b/test/splitter_test.py
index fa7e188..5c6ce58 100644
--- a/test/splitter_test.py
+++ b/test/splitter_test.py
@@ -1,116 +1,112 @@
'''
Created on 03/12/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest, os
-from StringIO import StringIO
from tempfile import NamedTemporaryFile
-from franklin.utils.misc_utils import DATA_DIR
+from psubprocess.utils import DATA_DIR
from psubprocess.prunner import NamedTemporaryDir
from psubprocess.splitters import (create_file_splitter_with_re, fastq_splitter,
bam_splitter, blank_line_splitter)
class SplitterTest(unittest.TestCase):
'It test that we can split the input files'
@staticmethod
def test_re_splitter():
'It tests the general regular expression based splitter'
fastq = '@seq1\nACTG\n+\nmoco\n@seq2\nGTCA\n+\nhola\n'
file_ = NamedTemporaryFile()
file_.write(fastq)
file_.flush()
splitter = create_file_splitter_with_re(expression='^@')
dir1 = NamedTemporaryDir()
dir2 = NamedTemporaryDir()
dir3 = NamedTemporaryDir()
new_files = splitter(file_, [dir1, dir2, dir3])
assert len(new_files) == 2
assert open(new_files[0].name).read() == '@seq1\nACTG\n+\nmoco\n'
assert open(new_files[1].name).read() == '@seq2\nGTCA\n+\nhola\n'
dir1.close()
dir2.close()
dir3.close()
@staticmethod
def test_fastq_splitter():
'It tests the fastq splitter'
fastq = '@seq1\nACTG\n+\nmoco\n@seq2\nGTCA\n+\nhola\n'
file_ = NamedTemporaryFile()
file_.write(fastq)
file_.flush()
splitter = fastq_splitter
dir1 = NamedTemporaryDir()
dir2 = NamedTemporaryDir()
dir3 = NamedTemporaryDir()
new_files = splitter(file_, [dir1, dir2, dir3])
assert len(new_files) == 2
assert open(new_files[0].name).read() == '@seq1\nACTG\n+\nmoco\n'
assert open(new_files[1].name).read() == '@seq2\nGTCA\n+\nhola\n'
dir1.close()
dir2.close()
dir3.close()
@staticmethod
def test_blank_line_splitter():
'It tests the blank line splitter'
fastq = 'hola\n\ncaracola\n\n'
file_ = NamedTemporaryFile()
file_.write(fastq)
file_.flush()
splitter = blank_line_splitter
dir1 = NamedTemporaryDir()
dir2 = NamedTemporaryDir()
dir3 = NamedTemporaryDir()
new_files = splitter(file_, [dir1, dir2, dir3])
assert len(new_files) == 2
assert open(new_files[0].name).read() == 'hola\n\n'
assert open(new_files[1].name).read() == 'caracola\n\n'
dir1.close()
dir2.close()
dir3.close()
@staticmethod
def test_bam_splitter():
'It tests the bam splitter'
bam_fhand = os.path.join(DATA_DIR, 'seq.bam')
-
splitter = bam_splitter
dir1 = NamedTemporaryDir()
dir2 = NamedTemporaryDir()
dir3 = NamedTemporaryDir()
new_files = splitter(bam_fhand, [dir1, dir2, dir3])
assert len(new_files) == 2
dir1.close()
dir2.close()
dir3.close()
-
-
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
|
JoseBlanca/psubprocess
|
7ec8a276c812748b329d67793c49db4678cd5bf6
|
Added splitter for bam files
|
diff --git a/psubprocess/bam.py b/psubprocess/bam.py
new file mode 100644
index 0000000..ed28468
--- /dev/null
+++ b/psubprocess/bam.py
@@ -0,0 +1,53 @@
+'''
+Utils to split and join bams
+
+Created on 06/09/2010
+
+@author: peio
+'''
+from psubprocess.utils import call
+
+
+def bam2sam(bam_fhand, sam_fhand):
+ '''It converts between bam and sam.'''
+ bam_fhand.seek(0)
+ cmd = ['samtools', 'view', bam_fhand.name, '-o', sam_fhand.name]
+ call(cmd, raise_on_error=True)
+
+def sam2bam(sam_fhand, bam_fhand, header=None):
+ 'It converts between bam and sam.'
+ sam_fhand.seek(0)
+ if header is not None:
+ pass
+ cmd = ['samtools', 'view', '-bSh', '-o', bam_fhand.name, sam_fhand.name]
+ call(cmd, raise_on_error=True)
+
+def get_bam_header(bam_fhand, header_fhand):
+ 'It gets the header of the bam'
+ cmd = ['samtools', 'view', '-H', bam_fhand.name, '-o', header_fhand.name]
+ call(cmd, raise_on_error=True)
+
+def bam_unigene_counter(fhand, expression=None):
+ 'It counts the number of unigenes in a bam'
+ unigenes = set()
+ for line in fhand:
+ unigene = line.split()[2]
+ unigenes.add(unigene)
+ return len(unigenes)
+
+def unigenes_in_bam(fhand, expression=None):
+    'It yields the bam mappings grouped by unigene'
+ unigene_prev = None
+ unigene_lines = ''
+ for line in fhand:
+ unigene = line.split()[2]
+ if unigene_prev is not None and unigene_prev != unigene:
+ yield unigene_lines
+ unigene_lines = ''
+
+ unigene_lines += line
+ unigene_prev = unigene
+ else:
+ yield unigene_lines
+
+
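A minimal usage sketch for the helpers above (file names are hypothetical;
the samtools binary is assumed to be on the PATH):

    from tempfile import NamedTemporaryFile
    from psubprocess.bam import bam2sam, bam_unigene_counter

    bam_fhand = open('seq.bam')
    sam_fhand = NamedTemporaryFile(suffix='.sam')
    bam2sam(bam_fhand, sam_fhand)        #runs samtools view on the bam
    sam_fhand.seek(0)
    #in sam text every line is an alignment; column 3 is the reference name
    print bam_unigene_counter(sam_fhand) #number of distinct unigenes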
diff --git a/psubprocess/splitters.py b/psubprocess/splitters.py
index caae9a1..1cd9f48 100644
--- a/psubprocess/splitters.py
+++ b/psubprocess/splitters.py
@@ -1,266 +1,333 @@
'''
Created on 03/12/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import re, os, shutil
from tempfile import NamedTemporaryFile
from psubprocess.utils import copy_file_mode
+from psubprocess.bam import (bam2sam, sam2bam, get_bam_header,
+ bam_unigene_counter, unigenes_in_bam)
from Bio.SeqIO.QualityIO import FastqGeneralIterator
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
This is the best way to create as many splits as possible, as similar in
size as possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
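#worked example (illustrative): _calculate_divisions(10, 4) returns
#((2, 3), (2, 2)), i.e. 2 splits with 3 items plus 2 splits with 2 items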
def _items_in_file(fhand, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if expression.search(line):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
def _re_item_counter(fhand, expression):
'It counts how many times the expression is found in the file'
nitems = 0
for line in fhand:
if expression.search(line):
nitems += 1
return nitems
def _items_in_fastq(fhand, expression=None):
'It returns the fastq items'
for item in FastqGeneralIterator(fhand):
yield '@%s\n%s\n+\n%s\n' % (item)
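#e.g. the record ('seq1', 'ACTG', 'moco') becomes '@seq1\nACTG\n+\nmoco\n'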
def _fastq_items_counter(fhand, expression=None):
nitems = 0
for item in FastqGeneralIterator(fhand):
nitems += 1
return nitems
def _blank_line_items_counter(fhand, expression=None):
'It returns the number of items separated by blank line'
nitems = 0
item_read = False
for line in fhand:
line = line.rstrip()
if line:
item_read = True
elif item_read and not line:
item_read = False
nitems += 1
return nitems
def _items_in_blank_line(fhand, expression=None):
'It returns the items separated by blank lines'
buffer_ = ''
for line in fhand:
line = line.rstrip()
if line:
buffer_ += line + '\n'
elif buffer_ and not line:
yield buffer_ + '\n'
buffer_ = ''
if buffer_:
yield buffer_ + '\n'
def _create_file_splitter(kind, expression=None):
'''Given an expression it creates a file splitter.
The expression can be a regex or a str.
The item in the file will be defined every time a line matches the
expression.
'''
item_counters = {'re': _re_item_counter,
'fastq': _fastq_items_counter,
- 'blank_line': _blank_line_items_counter}
+ 'blank_line': _blank_line_items_counter,
+ 'bam':bam_unigene_counter}
item_splitters = {'re':_items_in_file,
'fastq':_items_in_fastq,
- 'blank_line': _items_in_blank_line}
+ 'blank_line': _items_in_blank_line,
+ 'bam':unigenes_in_bam}
+ preproces_funcs = {'bam':bam2sam}
+ postproces_funcs = {'bam':sam2bam}
- item_counter = item_counters[kind]
+ header_funcs = {'bam':get_bam_header}
+ footer_funcs = {}
+
+ item_counter = item_counters[kind]
item_splitter = item_splitters[kind]
+ preprocesor = preproces_funcs[kind] if kind in preproces_funcs else None
+ postprocesor = postproces_funcs[kind] if kind in postproces_funcs else None
+ header_extractor = header_funcs[kind] if kind in header_funcs else None
+ footer_extractor = footer_funcs[kind] if kind in footer_funcs else None
+
if expression is not None and isinstance(expression, str):
expression = re.compile(expression)
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
guaranteed to create as many splits as work dirs. If in the file there
are fewer items than work_dirs, some work_dirs will be left empty.
It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
+ # do we have header?
+ if header_extractor is not None:
+ header_fhand = NamedTemporaryFile()
+ fhand = open(fname)
+ header_extractor(fhand, header_fhand)
+ fhand.close()
+ else:
+ header_fhand = None
+
+ # do we have footer?
+ if footer_extractor is not None:
+ footer_fhand = NamedTemporaryFile()
+ fhand = open(fname)
+            footer_extractor(fhand, footer_fhand)
+ fhand.close()
+ else:
+ footer_fhand = None
+
+ # File preprocess
+ if preprocesor is not None:
+ suffix = os.path.splitext(fname)[-1]
+ preprocessed_fhand = NamedTemporaryFile(suffix=suffix)
+ fhand = open(fname)
+ preprocesor(fhand, preprocessed_fhand)
+ fhand.close()
+ fname = preprocessed_fhand.name
+
+
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
fhand = open(fname, 'r')
nitems = item_counter(fhand, expression)
#how many splits are we going to create? and how many items will be in
#every split
#if there are more splits than items we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = item_splitter(fhand, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
copy_file_mode(fhand.name, ofh.name)
+
+ # header
+ if header_fhand is not None:
+ header_fhand.seek(0)
+ ofh.write(header_fhand.read())
+
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
+
+ # footer
+ if footer_fhand is not None:
+ footer_fhand.seek(0)
+ ofh.write(footer_fhand.read())
+
+ #postprocess
+ if postprocesor is not None:
+ newofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
+ suffix=suffix)
+ postprocesor(ofh, newofh)
+ ofh_path = ofh.name
+ ofh.close()
+ os.remove(ofh_path)
+ ofh = newofh
+
#we have to close the files otherwise we can run out of files
#in the os filesystem
if file_is_str:
new_files.append(ofh.name)
else:
new_files.append(ofh)
ofh.close()
splits_made += 1
+
return new_files
return splitter
fastq_splitter = _create_file_splitter(kind='fastq')
blank_line_splitter = _create_file_splitter(kind='blank_line')
+bam_splitter = _create_file_splitter(kind='bam')
+
def create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or a str.
The item in the file will be defined every time a line matches the
expression.
'''
return _create_file_splitter(kind='re', expression=expression)
def get_splitter(expression):
'''If the expression is a known splitter kind it returns it, otherwise it
creates a regular expression based splitter'''
if expression == 'fastq':
return fastq_splitter
elif expression == 'blank_line':
return blank_line_splitter
else:
return create_file_splitter_with_re(expression)
def create_non_splitter_splitter(copy_files=False):
'''It creates a splitter function that will not split the given file.
The created splitter will create one file for every work_dir given. This
file can be empty (useful for the output streams), or a copy of the given
file (useful for the no_split input streams).
'''
def splitter(file_, work_dirs):
'''It creates one output file for every split.
Every split will be located in one of the work_dirs.
It returns a list with the fpaths for the new files.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
new_fpaths = []
#we have to create nsplits
suffix = os.path.splitext(fname)[-1]
for split_index in range(nsplits):
work_dir = work_dirs[split_index]
#we use delete=False because this temp file is in a temp dir that
#will be completely deleted. If we use delete=True we get an error
#because the file might be already deleted when its __del__ method
#is called
ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
delete=False)
os.remove(ofh.name)
ofh_name = ofh.name
#we have to close the files otherwise we can run out of files
#in the os filesystem
ofh.close()
if copy_files:
#i've tried with os.symlink but condor does not like it
shutil.copyfile(fname, ofh_name)
#the file will be deleted
#what do we need the fname or the fhand?
if file_is_str:
new_fpaths.append(ofh.name)
else:
new_fpaths.append(ofh)
return new_fpaths
return splitter
diff --git a/psubprocess/utils.py b/psubprocess/utils.py
index cd1d5b3..f3d0bd6 100644
--- a/psubprocess/utils.py
+++ b/psubprocess/utils.py
@@ -1,47 +1,120 @@
'''
Created on 03/12/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
-import tempfile, os, shutil
+import tempfile, os, shutil, signal, subprocess, logging
class NamedTemporaryDir(object):
'''This class creates temporary directories '''
#pylint: disable-msg=W0622
#we redefine the built-in dir because tempfile uses that interface
def __init__(self, dir=None):
'''It initiates the class.'''
self._name = tempfile.mkdtemp(dir=dir)
def get_name(self):
'Returns the path to the dir'
return self._name
name = property(get_name)
def close(self):
'''It removes the temp dir'''
if os.path.exists(self._name):
shutil.rmtree(self._name)
def __del__(self):
'''It removes the temp dir when instance is removed and the garbage
collector decides it'''
self.close()
def copy_file_mode(fpath1, fpath2):
'It copies the os.stats mode from file1 to file2'
mode = os.stat(fpath1)[0]
os.chmod(fpath2, mode)
+
+def call(cmd, environment=None, stdin=None, raise_on_error=False,
+ stdout=None, stderr=None, log=False):
+ 'It calls a command and it returns stdout, stderr and retcode'
+ def subprocess_setup():
+ ''' Python installs a SIGPIPE handler by default. This is usually not
+ what non-Python subprocesses expect. Taken from this url:
+ http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009/07/02#
+ 2009-07-02-python-sigpipe'''
+ signal.signal(signal.SIGPIPE, signal.SIG_DFL)
+
+ if stdin is None:
+ pstdin = None
+ else:
+ pstdin = subprocess.PIPE
+ if stdout is None:
+ stdout = subprocess.PIPE
+ if stderr is None:
+ stderr = subprocess.PIPE
+ #we want to inherit the environment, and modify it
+ if environment is not None:
+ new_env = {}
+ for key, value in os.environ.items():
+ new_env[key] = value
+ for key, value in environment.items():
+ new_env[key] = value
+ environment = new_env
+
+ if log:
+ logger = logging.getLogger('franklin')
+ logger.info('Running command: ' + ' '.join(cmd))
+
+ try:
+ process = subprocess.Popen(cmd, stdout=stdout, stderr=stderr,
+ env=environment, stdin=pstdin,
+ preexec_fn=subprocess_setup)
+ except OSError:
+ #if it fails let's be sure that the binary is not on the system
+ binary = cmd[0]
+ if binary is None:
+ raise OSError('The binary was not found: ' + cmd[0])
+ #let's try with an absolute path, sometimes works
+ cmd.pop(0)
+ cmd.insert(0, binary)
+
+ process = subprocess.Popen(cmd, stdout=stdout, stderr=stderr,
+ env=environment, stdin=pstdin,
+ preexec_fn=subprocess_setup)
+
+ if stdin is None:
+ stdout_str, stderr_str = process.communicate()
+ else:
+ stdout_str, stderr_str = process.communicate(stdin)
+ retcode = process.returncode
+
+ if stdout != subprocess.PIPE:
+ stdout.flush()
+ stdout.seek(0)
+ if stderr != subprocess.PIPE:
+ stderr.flush()
+ stderr.seek(0)
+
+ if raise_on_error and retcode:
+ if stdout != subprocess.PIPE:
+ stdout_str = open(stdout.name).read()
+ if stderr != subprocess.PIPE:
+ stderr_str = open(stderr.name).read()
+ msg = 'Error running command: %s\n stderr: %s\n stdout: %s' % \
+ (' '.join(cmd), stderr_str,
+ stdout_str)
+ raise RuntimeError(msg)
+
+ return stdout_str, stderr_str, retcode
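A quick sketch of call() (the command is arbitrary; echo is just an example):

    from psubprocess.utils import call
    stdout, stderr, retcode = call(['echo', 'hola'], raise_on_error=True)
    assert retcode == 0
    assert stdout.strip() == 'hola'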
diff --git a/test/splitter_test.py b/test/splitter_test.py
index 1ec2c79..fa7e188 100644
--- a/test/splitter_test.py
+++ b/test/splitter_test.py
@@ -1,94 +1,116 @@
'''
Created on 03/12/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
-import unittest
+import unittest, os
+from StringIO import StringIO
from tempfile import NamedTemporaryFile
+from franklin.utils.misc_utils import DATA_DIR
from psubprocess.prunner import NamedTemporaryDir
from psubprocess.splitters import (create_file_splitter_with_re, fastq_splitter,
- blank_line_splitter)
+ bam_splitter, blank_line_splitter)
class SplitterTest(unittest.TestCase):
'It tests that we can split the input files'
@staticmethod
def test_re_splitter():
'It tests the general regular expression based splitter'
fastq = '@seq1\nACTG\n+\nmoco\n@seq2\nGTCA\n+\nhola\n'
file_ = NamedTemporaryFile()
file_.write(fastq)
file_.flush()
splitter = create_file_splitter_with_re(expression='^@')
dir1 = NamedTemporaryDir()
dir2 = NamedTemporaryDir()
dir3 = NamedTemporaryDir()
new_files = splitter(file_, [dir1, dir2, dir3])
assert len(new_files) == 2
assert open(new_files[0].name).read() == '@seq1\nACTG\n+\nmoco\n'
assert open(new_files[1].name).read() == '@seq2\nGTCA\n+\nhola\n'
dir1.close()
dir2.close()
dir3.close()
@staticmethod
def test_fastq_splitter():
'It tests the fastq splitter'
fastq = '@seq1\nACTG\n+\nmoco\n@seq2\nGTCA\n+\nhola\n'
file_ = NamedTemporaryFile()
file_.write(fastq)
file_.flush()
splitter = fastq_splitter
dir1 = NamedTemporaryDir()
dir2 = NamedTemporaryDir()
dir3 = NamedTemporaryDir()
new_files = splitter(file_, [dir1, dir2, dir3])
assert len(new_files) == 2
assert open(new_files[0].name).read() == '@seq1\nACTG\n+\nmoco\n'
assert open(new_files[1].name).read() == '@seq2\nGTCA\n+\nhola\n'
dir1.close()
dir2.close()
dir3.close()
@staticmethod
def test_blank_line_splitter():
'It tests the blank line splitter'
fastq = 'hola\n\ncaracola\n\n'
file_ = NamedTemporaryFile()
file_.write(fastq)
file_.flush()
splitter = blank_line_splitter
dir1 = NamedTemporaryDir()
dir2 = NamedTemporaryDir()
dir3 = NamedTemporaryDir()
new_files = splitter(file_, [dir1, dir2, dir3])
assert len(new_files) == 2
assert open(new_files[0].name).read() == 'hola\n\n'
assert open(new_files[1].name).read() == 'caracola\n\n'
dir1.close()
dir2.close()
dir3.close()
+ @staticmethod
+ def test_bam_splitter():
+        'It tests the bam splitter'
+
+ bam_fhand = os.path.join(DATA_DIR, 'seq.bam')
+
+
+ splitter = bam_splitter
+ dir1 = NamedTemporaryDir()
+ dir2 = NamedTemporaryDir()
+ dir3 = NamedTemporaryDir()
+ new_files = splitter(bam_fhand, [dir1, dir2, dir3])
+ assert len(new_files) == 2
+
+ dir1.close()
+ dir2.close()
+ dir3.close()
+
+
+
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
- unittest.main()
\ No newline at end of file
+ unittest.main()
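The bam splitter built in this commit chains the new pieces: the header is
extracted with get_bam_header, the bam is converted to sam with bam2sam, the
sam text is split per unigene, the header is prepended to every split and
each split is converted back to bam with sam2bam. A sketch of one round trip
(file names are hypothetical; samtools must be on the PATH):

    from tempfile import NamedTemporaryFile
    from psubprocess.bam import bam2sam, sam2bam, get_bam_header

    bam = open('seq.bam')
    header = NamedTemporaryFile(suffix='.sam')
    get_bam_header(bam, header)             #header kept for every split
    sam = NamedTemporaryFile(suffix='.sam')
    bam2sam(bam, sam)                       #alignments as plain text
    split = NamedTemporaryFile(suffix='.sam')
    split.write(open(header.name).read())
    split.write(open(sam.name).readline())  #just one alignment, for brevity
    split.flush()
    new_bam = NamedTemporaryFile(suffix='.bam')
    sam2bam(split, new_bam)                 #split converted back to bam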
|
JoseBlanca/psubprocess
|
ead40b3a68ecd9689343cc2a853374fa72cca936
|
bugfix: program version is taken from psubprocess.__init__
|
diff --git a/setup.py b/setup.py
index 066866d..8b8f0ae 100644
--- a/setup.py
+++ b/setup.py
@@ -1,73 +1,75 @@
'''
Created on 25/03/2009
@author: jose blanca
'''
#taken from django-tagging
import os
from distutils.core import setup
+import psubprocess
+
PACKAGE_DIR = 'psubprocess'
SCRIPTS_DIR = 'scripts'
def fullsplit(path, result=None):
"""
Split a pathname into components (the opposite of os.path.join) in a
platform-neutral way.
"""
if result is None:
result = []
head, tail = os.path.split(path)
if head == '':
return [tail] + result
if head == path:
return result
return fullsplit(head, [tail] + result)
# Compile the list of packages available, because distutils doesn't have
# an easy way to do this.
packages, data_files, modules = [], [], []
root_dir = os.path.dirname(__file__)
pieces = fullsplit(root_dir)
if pieces[-1] == '':
len_root_dir = len(pieces) - 1
else:
len_root_dir = len(pieces)
for dirpath, dirnames, filenames in os.walk(os.path.join(root_dir,
PACKAGE_DIR)):
if '__init__.py' in filenames:
package = '.'.join(fullsplit(dirpath)[len_root_dir:])
packages.append(package)
for filename in os.listdir(dirpath):
if (filename.startswith('.') or filename.startswith('_') or
not filename.endswith('.py')):
continue
modules.append(package + '.' + filename)
elif filenames:
data_files.append([dirpath, [os.path.join(dirpath, f) for f in filenames]])
scripts = []
for dirpath, dirnames, filenames in os.walk(os.path.join(root_dir,
SCRIPTS_DIR)):
for filename in filenames:
if filename == '__init__.py':
continue
elif filename.endswith('.py'):
scripts.append(os.path.join(dirpath, filename))
setup(
# basic package data
name = PACKAGE_DIR,
- version = "0.0.1",
+ version = psubprocess.__version__,
author='Jose Blanca, Peio Ziarsolo',
author_email='jblanca@btc.upv.es',
description='runs commands in parallel environments',
# package structure
packages=packages,
package_dir={'':'.'},
requires=[],
scripts=scripts,
)
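With the version now read from the package, releasing only requires bumping
psubprocess.__version__; a sketch:

    import psubprocess
    print psubprocess.__version__   #e.g. '0.1.1', set in psubprocess/__init__.py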
|
JoseBlanca/psubprocess
|
b5cd16022708982562b75587224ae8a3d68da2ad
|
version 0.1.1
|
diff --git a/psubprocess/__init__.py b/psubprocess/__init__.py
index 0a6bcc0..2272ba3 100644
--- a/psubprocess/__init__.py
+++ b/psubprocess/__init__.py
@@ -1,7 +1,9 @@
'Some magic to get a nicer interface'
+__version__ = '0.1.1'
+
from . import prunner
from . import condor_runner
Popen = prunner.Popen
-CondorPopen = condor_runner.Popen
\ No newline at end of file
+CondorPopen = condor_runner.Popen
|
JoseBlanca/psubprocess
|
2c3ca08a8fbf795aff4c5cc35da1e3ca6aa5e947
|
added a splitter for items separated by blank lines
|
diff --git a/psubprocess/splitters.py b/psubprocess/splitters.py
index 233a900..caae9a1 100644
--- a/psubprocess/splitters.py
+++ b/psubprocess/splitters.py
@@ -1,264 +1,266 @@
'''
Created on 03/12/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import re, os, shutil
from tempfile import NamedTemporaryFile
from psubprocess.utils import copy_file_mode
from Bio.SeqIO.QualityIO import FastqGeneralIterator
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
This is the best way to create as many splits as possible, as similar in
size as possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
def _items_in_file(fhand, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if expression.search(line):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
def _re_item_counter(fhand, expression):
'It counts how many times the expression is found in the file'
nitems = 0
for line in fhand:
if expression.search(line):
nitems += 1
return nitems
def _items_in_fastq(fhand, expression=None):
'It returns the fastq items'
for item in FastqGeneralIterator(fhand):
yield '@%s\n%s\n+\n%s\n' % (item)
def _fastq_items_counter(fhand, expression=None):
nitems = 0
for item in FastqGeneralIterator(fhand):
nitems += 1
return nitems
def _blank_line_items_counter(fhand, expression=None):
'It returns the number of items separated by blank line'
nitems = 0
item_read = False
for line in fhand:
line = line.rstrip()
if line:
item_read = True
elif item_read and not line:
item_read = False
nitems += 1
return nitems
def _items_in_blank_line(fhand, expression=None):
'It returns the items separated by blank lines'
buffer_ = ''
for line in fhand:
line = line.rstrip()
if line:
buffer_ += line + '\n'
elif buffer_ and not line:
yield buffer_ + '\n'
buffer_ = ''
if buffer_:
yield buffer_ + '\n'
def _create_file_splitter(kind, expression=None):
'''Given an expression it creates a file splitter.
The expression can be a regex or a str.
The item in the file will be defined every time a line matches the
expression.
'''
item_counters = {'re': _re_item_counter,
'fastq': _fastq_items_counter,
'blank_line': _blank_line_items_counter}
item_splitters = {'re':_items_in_file,
'fastq':_items_in_fastq,
'blank_line': _items_in_blank_line}
item_counter = item_counters[kind]
item_splitter = item_splitters[kind]
if expression is not None and isinstance(expression, str):
expression = re.compile(expression)
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
guaranteed to create as many splits as work dirs. If in the file there
are fewer items than work_dirs, some work_dirs will be left empty.
It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
fhand = open(fname, 'r')
nitems = item_counter(fhand, expression)
#how many splits are we going to create? and how many items will be in
#every split
#if there are more splits than items we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = item_splitter(fhand, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
copy_file_mode(fhand.name, ofh.name)
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
#we have to close the files otherwise we can run out of files
#in the os filesystem
if file_is_str:
new_files.append(ofh.name)
else:
new_files.append(ofh)
ofh.close()
splits_made += 1
return new_files
return splitter
fastq_splitter = _create_file_splitter(kind='fastq')
blank_line_splitter = _create_file_splitter(kind='blank_line')
def create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or a str.
The item in the file will be defined every time a line matches the
expression.
'''
return _create_file_splitter(kind='re', expression=expression)
def get_splitter(expression):
'''If the expression is a known splitter kind it returns it, otherwise it
creates a regular expression based splitter'''
if expression == 'fastq':
return fastq_splitter
+ elif expression == 'blank_line':
+ return blank_line_splitter
else:
return create_file_splitter_with_re(expression)
def create_non_splitter_splitter(copy_files=False):
'''It creates a splitter function that will not split the given file.
The created splitter will create one file for every work_dir given. This
file can be empty (useful for the output streams), or a copy of the given
file (useful for the no_split input streams).
'''
def splitter(file_, work_dirs):
'''It creates one output file for every split.
Every split will be located in one of the work_dirs.
It returns a list with the fpaths for the new files.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
new_fpaths = []
#we have to create nsplits
suffix = os.path.splitext(fname)[-1]
for split_index in range(nsplits):
work_dir = work_dirs[split_index]
#we use delete=False because this temp file is in a temp dir that
#will be completely deleted. If we use delete=True we get an error
#because the file might be already deleted when its __del__ method
#is called
ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
delete=False)
os.remove(ofh.name)
ofh_name = ofh.name
#we have to close the files otherwise we can run out of files
#in the os filesystem
ofh.close()
if copy_files:
#i've tried with os.symlink but condor does not like it
shutil.copyfile(fname, ofh_name)
#the file will be deleted
#what do we need the fname or the fhand?
if file_is_str:
new_fpaths.append(ofh.name)
else:
new_fpaths.append(ofh)
return new_fpaths
return splitter
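With this change get_splitter dispatches on the two known kinds before
falling back to a regular expression based splitter; a sketch:

    from psubprocess.splitters import (get_splitter, fastq_splitter,
                                       blank_line_splitter)
    assert get_splitter('fastq') is fastq_splitter
    assert get_splitter('blank_line') is blank_line_splitter
    re_splitter = get_splitter('^@')    #anything else is treated as a regex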
|
JoseBlanca/psubprocess
|
cb2127cc5241447fa3632227b845966241679e2e
|
added a splitter for items separated by blank lines
|
diff --git a/psubprocess/splitters.py b/psubprocess/splitters.py
index 2857424..233a900 100644
--- a/psubprocess/splitters.py
+++ b/psubprocess/splitters.py
@@ -1,234 +1,264 @@
'''
Created on 03/12/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import re, os, shutil
from tempfile import NamedTemporaryFile
from psubprocess.utils import copy_file_mode
from Bio.SeqIO.QualityIO import FastqGeneralIterator
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
This is the best way to create as many splits as possible, as similar in
size as possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
def _items_in_file(fhand, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if expression.search(line):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
def _re_item_counter(fhand, expression):
'It counts how many times the expression is found in the file'
nitems = 0
for line in fhand:
if expression.search(line):
nitems += 1
return nitems
def _items_in_fastq(fhand, expression=None):
'It returns the fastq items'
for item in FastqGeneralIterator(fhand):
yield '@%s\n%s\n+\n%s\n' % (item)
def _fastq_items_counter(fhand, expression=None):
nitems = 0
for item in FastqGeneralIterator(fhand):
nitems += 1
return nitems
+def _blank_line_items_counter(fhand, expression=None):
+ 'It returns the number of items separated by blank line'
+ nitems = 0
+ item_read = False
+ for line in fhand:
+ line = line.rstrip()
+ if line:
+ item_read = True
+ elif item_read and not line:
+ item_read = False
+ nitems += 1
+ return nitems
+
+def _items_in_blank_line(fhand, expression=None):
+ 'It returns the items separated by blank lines'
+ buffer_ = ''
+ for line in fhand:
+ line = line.rstrip()
+ if line:
+ buffer_ += line + '\n'
+ elif buffer_ and not line:
+ yield buffer_ + '\n'
+ buffer_ = ''
+ if buffer_:
+ yield buffer_ + '\n'
+
def _create_file_splitter(kind, expression=None):
'''Given an expression it creates a file splitter.
The expression can be a regex or a str.
The item in the file will be defined every time a line matches the
expression.
'''
item_counters = {'re': _re_item_counter,
- 'fastq': _fastq_items_counter}
+ 'fastq': _fastq_items_counter,
+ 'blank_line': _blank_line_items_counter}
item_splitters = {'re':_items_in_file,
- 'fastq':_items_in_fastq}
+ 'fastq':_items_in_fastq,
+ 'blank_line': _items_in_blank_line}
item_counter = item_counters[kind]
item_splitter = item_splitters[kind]
if expression is not None and isinstance(expression, str):
expression = re.compile(expression)
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
guaranteed to create as many splits as work dirs. If in the file there
are fewer items than work_dirs, some work_dirs will be left empty.
It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
fhand = open(fname, 'r')
nitems = item_counter(fhand, expression)
#how many splits are we going to create? and how many items will be in
#every split
#if there are more splits than items we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = item_splitter(fhand, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
copy_file_mode(fhand.name, ofh.name)
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
#we have to close the files otherwise we can run out of files
#in the os filesystem
if file_is_str:
new_files.append(ofh.name)
else:
new_files.append(ofh)
ofh.close()
splits_made += 1
return new_files
return splitter
fastq_splitter = _create_file_splitter(kind='fastq')
+blank_line_splitter = _create_file_splitter(kind='blank_line')
+
def create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or a str.
The item in the file will be defined every time a line matches the
expression.
'''
return _create_file_splitter(kind='re', expression=expression)
def get_splitter(expression):
'''If the expression is a known splitter kind it returns it, otherwise it
creates a regular expression based splitter'''
if expression == 'fastq':
return fastq_splitter
else:
return create_file_splitter_with_re(expression)
def create_non_splitter_splitter(copy_files=False):
'''It creates a splitter function that will not split the given file.
The created splitter will create one file for every work_dir given. This
file can be empty (useful for the output streams), or a copy of the given
file (useful for the no_split input streams).
'''
def splitter(file_, work_dirs):
'''It creates one output file for every split.
Every split will be located in one of the work_dirs.
It returns a list with the fpaths for the new files.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
new_fpaths = []
#we have to create nsplits
suffix = os.path.splitext(fname)[-1]
for split_index in range(nsplits):
work_dir = work_dirs[split_index]
#we use delete=False because this temp file is in a temp dir that
#will be completely deleted. If we use delete=True we get an error
#because the file might be already deleted when its __del__ method
#is called
ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
delete=False)
os.remove(ofh.name)
ofh_name = ofh.name
#we have to close the files otherwise we can run out of files
#in the os filesystem
ofh.close()
if copy_files:
#i've tried with os.symlink but condor does not like it
shutil.copyfile(fname, ofh_name)
#the file will be deleted
#what do we need the fname or the fhand?
if file_is_str:
new_fpaths.append(ofh.name)
else:
new_fpaths.append(ofh)
return new_fpaths
return splitter
diff --git a/test/splitter_test.py b/test/splitter_test.py
index c3ad2a6..1ec2c79 100644
--- a/test/splitter_test.py
+++ b/test/splitter_test.py
@@ -1,73 +1,94 @@
'''
Created on 03/12/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
from psubprocess.prunner import NamedTemporaryDir
-from psubprocess.splitters import create_file_splitter_with_re, fastq_splitter
+from psubprocess.splitters import (create_file_splitter_with_re, fastq_splitter,
+ blank_line_splitter)
class SplitterTest(unittest.TestCase):
'It tests that we can split the input files'
@staticmethod
def test_re_splitter():
'It tests the general regular expression based splitter'
fastq = '@seq1\nACTG\n+\nmoco\n@seq2\nGTCA\n+\nhola\n'
file_ = NamedTemporaryFile()
file_.write(fastq)
file_.flush()
splitter = create_file_splitter_with_re(expression='^@')
dir1 = NamedTemporaryDir()
dir2 = NamedTemporaryDir()
dir3 = NamedTemporaryDir()
new_files = splitter(file_, [dir1, dir2, dir3])
assert len(new_files) == 2
assert open(new_files[0].name).read() == '@seq1\nACTG\n+\nmoco\n'
assert open(new_files[1].name).read() == '@seq2\nGTCA\n+\nhola\n'
dir1.close()
dir2.close()
dir3.close()
@staticmethod
def test_fastq_splitter():
'It tests the fastq splitter'
fastq = '@seq1\nACTG\n+\nmoco\n@seq2\nGTCA\n+\nhola\n'
file_ = NamedTemporaryFile()
file_.write(fastq)
file_.flush()
splitter = fastq_splitter
dir1 = NamedTemporaryDir()
dir2 = NamedTemporaryDir()
dir3 = NamedTemporaryDir()
new_files = splitter(file_, [dir1, dir2, dir3])
assert len(new_files) == 2
assert open(new_files[0].name).read() == '@seq1\nACTG\n+\nmoco\n'
assert open(new_files[1].name).read() == '@seq2\nGTCA\n+\nhola\n'
dir1.close()
dir2.close()
dir3.close()
+ @staticmethod
+ def test_blank_line_splitter():
+ 'It tests the blank line splitter'
+ fastq = 'hola\n\ncaracola\n\n'
+ file_ = NamedTemporaryFile()
+ file_.write(fastq)
+ file_.flush()
+
+ splitter = blank_line_splitter
+ dir1 = NamedTemporaryDir()
+ dir2 = NamedTemporaryDir()
+ dir3 = NamedTemporaryDir()
+ new_files = splitter(file_, [dir1, dir2, dir3])
+ assert len(new_files) == 2
+
+ assert open(new_files[0].name).read() == 'hola\n\n'
+ assert open(new_files[1].name).read() == 'caracola\n\n'
+ dir1.close()
+ dir2.close()
+ dir3.close()
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
\ No newline at end of file
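The behaviour of the new blank-line splitter on a small input, using the
private item iterator directly for illustration:

    from StringIO import StringIO
    from psubprocess.splitters import _items_in_blank_line
    items = list(_items_in_blank_line(StringIO('hola\n\ncaracola\n\n')))
    assert items == ['hola\n\n', 'caracola\n\n']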
|
JoseBlanca/psubprocess
|
2f102d750c30d41b72cc587a5ca404e92e71cf5e
|
setup moved to distutils
|
diff --git a/setup.py b/setup.py
index ca67e08..066866d 100644
--- a/setup.py
+++ b/setup.py
@@ -1,73 +1,73 @@
'''
Created on 25/03/2009
@author: jose blanca
'''
#taken from django-tagging
import os
+from distutils.core import setup
PACKAGE_DIR = 'psubprocess'
SCRIPTS_DIR = 'scripts'
def fullsplit(path, result=None):
"""
Split a pathname into components (the opposite of os.path.join) in a
platform-neutral way.
"""
if result is None:
result = []
head, tail = os.path.split(path)
if head == '':
return [tail] + result
if head == path:
return result
return fullsplit(head, [tail] + result)
# Compile the list of packages available, because distutils doesn't have
# an easy way to do this.
packages, data_files, modules = [], [], []
root_dir = os.path.dirname(__file__)
pieces = fullsplit(root_dir)
if pieces[-1] == '':
len_root_dir = len(pieces) - 1
else:
len_root_dir = len(pieces)
for dirpath, dirnames, filenames in os.walk(os.path.join(root_dir,
PACKAGE_DIR)):
if '__init__.py' in filenames:
package = '.'.join(fullsplit(dirpath)[len_root_dir:])
packages.append(package)
for filename in os.listdir(dirpath):
if (filename.startswith('.') or filename.startswith('_') or
not filename.endswith('.py')):
continue
modules.append(package + '.' + filename)
elif filenames:
data_files.append([dirpath, [os.path.join(dirpath, f) for f in filenames]])
scripts = []
for dirpath, dirnames, filenames in os.walk(os.path.join(root_dir,
SCRIPTS_DIR)):
for filename in filenames:
if filename == '__init__.py':
continue
elif filename.endswith('.py'):
scripts.append(os.path.join(dirpath, filename))
-from setuptools import setup
setup(
# basic package data
name = PACKAGE_DIR,
version = "0.0.1",
author='Jose Blanca, Peio Ziarsolo',
author_email='jblanca@btc.upv.es',
description='runs commands in parallel environments',
# package structure
packages=packages,
package_dir={'':'.'},
requires=[],
scripts=scripts,
)
|
JoseBlanca/psubprocess
|
b0b0bde9ad07fddf7a6ad94723ac0e5e9259543e
|
build dirs added to gitignore
|
diff --git a/.gitignore b/.gitignore
index 57ea1eb..9ae73c8 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,6 +1,9 @@
.project
.pydevproject
*.pyc
*.tar.gz
doc/_build/
doc/downloads/
+build/
+dist/
+psubprocess.egg-info/
|
JoseBlanca/psubprocess
|
ec8f7a40ce89c9c9a36dab10124705427fc25287
|
downloads added to git ignore
|
diff --git a/.gitignore b/.gitignore
index d03b75e..57ea1eb 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,4 +1,6 @@
.project
.pydevproject
*.pyc
*.tar.gz
+doc/_build/
+doc/downloads/
|
JoseBlanca/psubprocess
|
192516192e45d7c7acc9c405b975f84768dc8f68
|
tar.gz added to git ignore
|
diff --git a/.gitignore b/.gitignore
index a9af213..d03b75e 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,2 +1,4 @@
.project
.pydevproject
+*.pyc
+*.tar.gz
|
JoseBlanca/psubprocess
|
6d2c389f82edc5f0ed79480d6d3e0fc76d115f9a
|
added .gitignore
|
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..a9af213
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+.project
+.pydevproject
|
JoseBlanca/psubprocess
|
ab951c652f93484898326daa4142cdb190495ed7
|
bug fix: too many open files, python 2.6 required
|
diff --git a/psubprocess/condor_runner.py b/psubprocess/condor_runner.py
index d4ed412..dc7abab 100644
--- a/psubprocess/condor_runner.py
+++ b/psubprocess/condor_runner.py
@@ -1,334 +1,334 @@
'''The main aim of this module is to provide an easy way to launch condor jobs.
Condor is a specialized workload management system for compute-intensive jobs.
Like other full-featured batch systems, Condor provides a job queueing
mechanism, scheduling policy, priority scheme, resource monitoring, and
resource management. More on condor on its web site:
http://www.cs.wisc.edu/condor/
The interface used is similar to the subprocess.Popen one.
Besides the standard parameters like cmd, stdout, stderr, and stdin, this condor
Popen takes a couple of extra parameters, cmd_def and runner_conf. The cmd_def
syntax is explained in the streams.py file. Condor Popen needs the cmd_def to
be able to get from the cmd which are the input and output files. The input
files should be specified in the condor job file, in case we want
to transfer them to the computing nodes. Besides, the input and output files
in the cmd should have no paths; otherwise the command would fail on the other
machines. That's why we need cmd_def.
Created on 14/07/2009
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
-from psubprocess.utils import NamedTemporaryFile
+from tempfile import NamedTemporaryFile
import subprocess, signal, os.path
from subprocess import Popen as PythonPopen
from psubprocess.streams import get_streams_from_cmd
def call(cmd):
'It calls a command and it returns stdout, stderr and retcode'
def subprocess_setup():
''' Python installs a SIGPIPE handler by default. This is usually not
what non-Python subprocesses expect. Taken from this url:
http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009/07/02#
2009-07-02-python-sigpipe'''
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
process = PythonPopen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=subprocess_setup)
stdout, stderr = process.communicate()
retcode = process.returncode
return stdout, stderr, retcode
def write_condor_job_file(fhand, parameters):
'It writes a condor job file using the given fhand'
to_print = 'Executable = %s\nArguments = "%s"\nUniverse = vanilla\n' % \
(parameters['executable'], parameters['arguments'])
to_print += 'Log = %s\n' % parameters['log_file'].name
if parameters['transfer_files']:
to_print += 'When_to_transfer_output = ON_EXIT\n'
to_print += 'Getenv = True\n'
if ('transfer_executable' in parameters and
parameters['transfer_executable']):
to_print += 'Transfer_executable = %s\n' % \
parameters['transfer_executable']
if 'input_fnames' in parameters and parameters['input_fnames']:
ins = ','.join(parameters['input_fnames'])
to_print += 'Transfer_input_files = %s\n' % ins
if parameters['transfer_files']:
to_print += 'Should_transfer_files = IF_NEEDED\n'
if 'requirements' in parameters:
to_print += "Requirements = %s\n" % parameters['requirements']
if 'stdout' in parameters:
to_print += 'Output = %s\n' % parameters['stdout'].name
if 'stderr' in parameters:
to_print += 'Error = %s\n' % parameters['stderr'].name
if 'stdin' in parameters:
to_print += 'Input = %s\n' % parameters['stdin'].name
to_print += 'Queue\n'
fhand.write(to_print)
fhand.flush()
fhand.close()
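#with transfer_files set and only stdout given, the file written above looks
#roughly like this (illustrative values):
#  Executable = /bin/uname
#  Arguments = "-a"
#  Universe = vanilla
#  Log = /tmp/tmpXYZ.log
#  When_to_transfer_output = ON_EXIT
#  Getenv = True
#  Should_transfer_files = IF_NEEDED
#  Output = /tmp/out.txt
#  Queue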
class Popen(object):
'''It launches and controls a condor job.
The job is launched when an instance is created. After that we can get the
cluster id with the pid property. The rest of the interface is very similar
to the subprocess.Popen one. There's no communicate method because there's
no support for PIPE.
'''
def __init__(self, cmd, cmd_def=None, runner_conf=None, stdout=None,
stderr=None, stdin=None):
'''It launches a condor job.
The interface is similar to the subprocess.Popen one, although there are
some differences.
stdout, stdin and stderr should be file handlers, there's no support for
PIPEs. The extra parameter cmd_def is required if we need to transfer
the input and output files to the computing nodes of the cluster using
the condor file transfer mechanism. The cmd_def syntax is explained in
the streams.py file.
runner_conf is a dict that admits several parameters that control how
condor is run:
- transfer_files: do we want to transfer the files using the condor
transfer file mechanism? (default True)
- condor_log: the condor log file. If it's not given Popen will
create a condor log file in the tempdir.
- transfer_executable: do we want to transfer the executable?
(default False)
- requirements: The requirements line for the condor job file.
(default None)
'''
#we use the same parameters as subprocess.Popen
#pylint: disable-msg=R0913
if cmd_def is None:
cmd_def = []
#runner conf
if runner_conf is None:
runner_conf = {}
#some defaults
if 'transfer_files' not in runner_conf:
runner_conf['transfer_files'] = True
if 'condor_log' not in runner_conf:
self._log_file = NamedTemporaryFile(suffix='.log')
self._log_file.close()
else:
self._log_file = runner_conf['condor_log']
#print 'condor_log', self._log_file
#create condor job file
condor_job_file = self._create_condor_job_file(cmd, cmd_def,
self._log_file,
runner_conf,
stdout, stderr, stdin)
self._condor_job_file = condor_job_file
#print open(condor_job_file.name).read()
#launch condor
self._retcode = None
self._cluster_number = None
#print 'launching'
self._launch_condor(condor_job_file)
#print 'launched'
def _launch_condor(self, condor_job_file):
'Given the condor_job_file it launches the condor job'
try:
stdout, stderr, retcode = call(['condor_submit',
condor_job_file.name])
except OSError, msg:
raise OSError('condor_submit not found in your path.' + str(msg))
if retcode:
msg = 'There was a problem with condor_submit: ' + stderr
raise RuntimeError(msg)
#the condor cluster number is given by condor_submit
#1 job(s) submitted to cluster 15.
for line in stdout.splitlines():
if 'submitted to cluster' in line:
self._cluster_number = line.strip().strip('.').split()[-1]
def _get_pid(self):
'It returns the condor cluster number'
return self._cluster_number
pid = property(_get_pid)
def _get_returncode(self):
'It returns the return code'
return self._retcode
returncode = property(_get_returncode)
@staticmethod
def _remove_paths_from_cmd(cmd, streams, conf):
'''It removes the absolute and relative paths from the cmd,
it returns the modified cmd'''
cmd_mod = cmd[:]
for stream in streams:
if 'fname' not in stream:
continue
fpath = stream['fname']
#for the output files we can't deal with transferring files with
#paths. Condor will deliver those files into the initialdir, not
#where we expected.
if (stream['io'] != 'in' and conf['transfer_files']
and os.path.split(fpath)[-1] != fpath):
msg = 'output files with paths are not transferable'
raise ValueError(msg)
index = cmd_mod.index(fpath)
fpath = os.path.split(fpath)[-1]
cmd_mod[index] = fpath
return cmd_mod
def _create_condor_job_file(self, cmd, cmd_def, log_file, runner_conf,
stdout, stderr, stdin):
'Given a cmd and the cmd_def it returns the condor job file'
#streams
streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
#we need some parameters to write the condor file
parameters = {}
#the executable
binary = cmd[0]
#the binary should be an absolute path
if not os.path.isabs(binary):
#the path to the binary could be relative
if os.sep in binary:
#we make the path absolute
binary = os.path.abspath(binary)
else:
#we have to look in the system $PATH
binary = call(['which', binary])[0].strip()
parameters['executable'] = binary
parameters['log_file'] = log_file
#the cmd shouldn't have absolute paths in the files because they will be
#transferred to another node in the condor working dir and they wouldn't
#be found with an absolute path
cmd_no_path = self._remove_paths_from_cmd(cmd, streams, runner_conf)
parameters['arguments'] = ' '.join(cmd_no_path[1:])
if stdout is not None:
parameters['stdout'] = stdout
if stderr is not None:
parameters['stderr'] = stderr
if stdin is not None:
parameters['stdin'] = stdin
transfer_bin = False
if 'transfer_executable' in runner_conf:
transfer_bin = runner_conf['transfer_executable']
parameters['transfer_executable'] = transfer_bin
transfer_files = runner_conf['transfer_files']
parameters['transfer_files'] = transfer_files
if 'requirements' in runner_conf:
parameters['requirements'] = runner_conf['requirements']
in_fnames = []
for stream in streams:
if stream['io'] == 'in':
fname = None
if 'fname' in stream:
fname = stream['fname']
else:
fname = stream['fhand'].name
in_fnames.append(fname)
parameters['input_fnames'] = in_fnames
#now we can create the job file
condor_job_file = NamedTemporaryFile()
write_condor_job_file(condor_job_file, parameters=parameters)
return condor_job_file
def _update_retcode(self):
'It updates the retcode looking at the log file, it returns the retcode'
for line in open(self._log_file.name):
if 'return value' in line:
ret = line.split('return value')[1].strip().strip(')')
self._retcode = int(ret)
return self._retcode
def poll(self):
'It checks whether condor has finished running our condor cluster'
cluster_number = self._cluster_number
cmd = ['condor_q', cluster_number,
'-format', '"%d.\n"', 'ClusterId']
stdout, stderr, retcode = call(cmd)
if retcode:
msg = 'There was a problem with condor_q: ' + stderr
raise RuntimeError(msg)
if cluster_number not in stdout:
#the job is finished
return self._update_retcode()
return self._retcode
def wait(self):
'It waits until the condor job is finished'
try:
stderr, retcode = call(['condor_wait', self._log_file.name])[1:]
except OSError:
raise OSError('condor_wait not found in your path')
if retcode:
msg = 'There was a problem with condor_wait: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def kill(self):
'It runs condor_rm for the condor job'
try:
stderr, retcode = call(['condor_rm', self.pid])[1:]
except OSError:
raise OSError('condor_rm not found in your path')
if retcode:
msg = 'There was a problem with condor_rm: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def terminate(self):
'It runs condor_rm for the condor job'
self.kill()
def get_default_splits():
'It returns a suggested number of splits for this Popen runner'
try:
stdout, stderr, retcode = call(['condor_status', '-total'])
except OSError:
raise OSError('condor_status not found in your path')
if retcode:
msg = 'There was a problem with condor_status: ' + stderr
raise RuntimeError(msg)
for line in stdout.splitlines():
line = line.strip().lower()
if line.startswith('total') and 'owner' not in line:
return int(line.split()[1]) * 2
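A minimal usage sketch for this runner (requires the condor_* binaries on
the PATH; the command and file names are hypothetical):

    from psubprocess.condor_runner import Popen

    stdout = open('output.txt', 'w')
    job = Popen(['uname', '-a'], stdout=stdout,
                runner_conf={'transfer_files': False})
    print job.pid           #the condor cluster number
    job.wait()              #blocks on condor_wait
    print job.returncode    #parsed from the condor log file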
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index 45a13df..2f5ba71 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,496 +1,496 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
This module is useful when we have a non-parallel command to run in a
multiprocessor computer or in a multinode cluster. It will take the input files,
it will split them and it will run a subjob for every one of the splits. It will
wait for the subjobs to finish and it will join the output files generated
by all subjobs. At the end of the process will get the same output files as if
the command wasn't run in parallel.
This approach will work with commands that process a lot of items. This module
divides the items into several sets and it assigns each of these sets to one new
subjob. These are the subjobs that will be run in parallel.
To do it requires the parameters used by popen: cmd, stdin, stdout, stderr and
some extra information: runner, splits and cmd_def.
runner is optional and it should be a subprocess.Popen like class. If it's not
given subprocess.Popen will be used. This Popen be the class used to run the
subjobs. If subprocess.Popen is used the subjobs will run in the processors of
the local node on several independent processes. If the Condor Popen is used
the subjobs will run in a condor cluster.
splits is the number of subjobs that we want to generate. If it's not given the
runner will provide a suitable number.
cmd_def is a dict that defines how the cmd defines the input and output files.
We need to tell Popen which are the input and output files in order to split
them and join them. The syntax for cmd_def is explained in the streams.py module
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
import os, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
from psubprocess.condor_runner import call
from psubprocess import condor_runner
from psubprocess.splitters import (get_splitter,
create_non_splitter_splitter)
from psubprocess.utils import NamedTemporaryDir, copy_file_mode
from psubprocess.cmd_def_from_cmd import get_cmd_def_from_cmd
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
class Popen(object):
'''It parallelizes the given processes dividing them into subprocesses.
The interface is similar to subprocess.Popen to ease the use of this class,
although the functionality of this class is much more limited.
When an instance of this class is created a series of subjobs is launched.
When all subjobs are finished returncode will have an int, if they're still
running returncode will be None.
We can wait for all subjobs to finish using the wait method or we can
kill or terminate them using kill and terminate.
'''
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
'''It inits a Popen instance; it creates and runs the subjobs.
Like the subprocess.Popen it accepts stdin, stdout, stderr, but in this
case all of them should be files, PIPE will not work.
In the cmd_def list we have to tell this Popen how to locate the
input and output files in the cmd and how to split and join them. Look
for the cmd_format in the streams.py file.
keyword arguments:
cmd -- a list with the cmd to parallelize
cmd_def -- the cmd definition list (default [])
runner -- which runner to use (default subprocess.Popen)
runner_conf -- extra parameters for the runner (default {})
stdout -- a fhand to store the stdout (default None)
stderr -- a fhand to store the stderr (default None)
stdin -- a fhand with the stdin (default None)
splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
#is the cmd_def set in the command?
cmd, cmd_cmd_def = get_cmd_def_from_cmd(cmd)
if cmd_cmd_def:
cmd_def = cmd_cmd_def
elif cmd_def:
cmd_def = cmd_def
else:
cmd_def = []
if not cmd_def and stdin is not None:
raise ValueError('No cmd_def given but stdin present')
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
copy_file_mode('.', self._work_dir.name)
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
#launch every subjobs
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we have to be sure that stdout and stderr are open for write
if stdout:
stdout = open(stdout.name, 'w')
if stderr:
stderr = open(stderr.name, 'w')
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
#we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
'''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
'''Given a base cmd and a streams list it creates one modified cmd for
every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
#is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname and no path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream into the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
dir_ = NamedTemporaryDir(dir=work_dir)
work_dirs.append(dir_)
copy_file_mode('.', dir_.name)
#we have to do first the input files because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def do_we_have_to_split(stream_index):
'If the stream has to split a file it will return True'
split = None
stream = streams[stream_index]
#maybe they shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
split = False
#maybe there is no file to split
if (('fhand' in stream and stream['fhand'] is None) or
('fname' in stream and stream['fname'] is None) or
('fname' not in stream and 'fhand' not in stream)):
split = False
elif (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
split = True
return split
def to_be_split_first(stream1, stream2):
'It sorts the streams, the ones to be split go first'
split1 = do_we_have_to_split(stream1)
split2 = do_we_have_to_split(stream2)
return int(split1) - int(split2)
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
splitter = None
if 'special' in stream and 'no_split' in stream['special']:
splitter = create_non_splitter_splitter(copy_files=True)
elif 'splitter' not in stream:
msg = 'A splitter should be provided for every input stream, '
msg += 'missing for: ' + str(stream)
raise ValueError(msg)
else:
splitter = stream['splitter']
#if the splitter is a function we assume that it will know how to
#split the given file, otherwise it should be a registered type of
#splitter or a regular expression
if '__call__' not in dir(splitter):
splitter = get_splitter(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
#the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different than splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
#we split the output stream files into several splits
output_splitter = create_non_splitter_splitter(copy_files=False)
for index in output_stream_indexes:
stream = streams[index]
#for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every in stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
joiner = None
if 'joiner' in stream:
joiner = stream['joiner']
else:
joiner = default_cat_joiner
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
- default_cat_joiner(out_file, part_out_fnames)
+ joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
#if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support kill
if 'kill' in dir(popen):
popen.kill()
else:
pid = popen.pid
call(['kill', '-9', str(pid)])
def terminate(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support terminate
if 'terminate' in dir(popen):
popen.terminate()
else:
pid = popen.pid
call(['kill', '-6', str(pid)])
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
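Everything above boils down to a cmd list plus a cmd_def that marks the input and output streams. A minimal usage sketch, modelled on the tests further down; my_tool and its -i/-t options are placeholders, not a real binary:

from psubprocess import Popen

cmd = ['my_tool', '-i', 'input.txt', '-t', 'output.txt']
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter': ''},
           {'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, cmd_def=cmd_def, splits=4)
retcode = popen.wait()   # blocks until the 4 subjobs finish and joins output.txt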
diff --git a/psubprocess/splitters.py b/psubprocess/splitters.py
index e8f9eb5..2857424 100644
--- a/psubprocess/splitters.py
+++ b/psubprocess/splitters.py
@@ -1,233 +1,234 @@
'''
Created on 03/12/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import re, os, shutil
-from psubprocess.utils import NamedTemporaryFile, copy_file_mode
+from tempfile import NamedTemporaryFile
+from psubprocess.utils import copy_file_mode
from Bio.SeqIO.QualityIO import FastqGeneralIterator
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
This is the best way to create as many splits as possible with sizes as
similar as possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
def _items_in_file(fhand, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if expression.search(line):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
def _re_item_counter(fhand, expression):
'It counts how many times the expression is found in the file'
nitems = 0
for line in fhand:
if expression.search(line):
nitems += 1
return nitems
def _items_in_fastq(fhand, expression=None):
'It returns the fastq items'
for item in FastqGeneralIterator(fhand):
yield '@%s\n%s\n+\n%s\n' % (item)
def _fastq_items_counter(fhand, expression=None):
nitems = 0
for item in FastqGeneralIterator(fhand):
nitems += 1
return nitems
def _create_file_splitter(kind, expression=None):
'''Given an expression it creates a file splitter.
The expression can be a regex or a str.
The item in the file will be defined every time a line matches the
expression.
'''
item_counters = {'re': _re_item_counter,
'fastq': _fastq_items_counter}
item_splitters = {'re':_items_in_file,
'fastq':_items_in_fastq}
item_counter = item_counters[kind]
item_splitter = item_splitters[kind]
if expression is not None and isinstance(expression, str):
expression = re.compile(expression)
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
guaranteed to create as many splits as work dirs. If in the file there
are fewer items than work_dirs, some work_dirs will be left empty.
It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
fhand = open(fname, 'r')
nitems = item_counter(fhand, expression)
#how many splits are we going to create? and how many items will be in
#every split
#if there are fewer items than splits we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = item_splitter(fhand, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
copy_file_mode(fhand.name, ofh.name)
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
#we have to close the files otherwise we can run out of files
#in the os filesystem
if file_is_str:
new_files.append(ofh.name)
else:
new_files.append(ofh)
ofh.close()
splits_made += 1
return new_files
return splitter
fastq_splitter = _create_file_splitter(kind='fastq')
def create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or a str.
The item in the file will be defined every time a line matches the
expression.
'''
return _create_file_splitter(kind='re', expression=expression)
def get_splitter(expression):
'''If the expression is a known splitter kind it returns it, otherwise it
creates a regular expression based splitter'''
if expression == 'fastq':
return fastq_splitter
else:
return create_file_splitter_with_re(expression)
def create_non_splitter_splitter(copy_files=False):
'''It creates a splitter function that will not split the given file.
The created splitter will create one file for every work_dir given. This
file can be empty (useful for the output streams) or a copy of the given
file (useful for the no_split input streams).
'''
def splitter(file_, work_dirs):
'''It creates one output file for every split.
Every split will be located in one of the work_dirs.
It returns a list with the fpaths for the new files.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
new_fpaths = []
#we have to create nsplits
suffix = os.path.splitext(fname)[-1]
for split_index in range(nsplits):
work_dir = work_dirs[split_index]
#we use delete=False because this temp file is in a temp dir that
#will be completely deleted. If we use delete=True we get an error
#because the file might be already deleted when its __del__ method
#is called
ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
delete=False)
os.remove(ofh.name)
ofh_name = ofh.name
#we have to close the files otherwise we can run out of files
#in the os filesystem
ofh.close()
if copy_files:
#i've tried with os.symlink but condor does not like it
shutil.copyfile(fname, ofh_name)
#the file will be deleted
#what do we need the fname or the fhand?
if file_is_str:
new_fpaths.append(ofh.name)
else:
new_fpaths.append(ofh)
return new_fpaths
return splitter
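The split arithmetic in _calculate_divisions is easiest to follow with concrete numbers; a worked example (the values are chosen only for illustration):

from psubprocess.splitters import _calculate_divisions

# 10 items into 4 splits: 2 splits of 3 items plus 2 splits of 2 items
print _calculate_divisions(10, 4)   # ((2, 3), (2, 2)), and 2*3 + 2*2 == 10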
diff --git a/psubprocess/utils.py b/psubprocess/utils.py
index a565a88..cd1d5b3 100644
--- a/psubprocess/utils.py
+++ b/psubprocess/utils.py
@@ -1,97 +1,47 @@
'''
Created on 03/12/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import tempfile, os, shutil
class NamedTemporaryDir(object):
'''This class creates temporary directories '''
#pylint: disable-msg=W0622
#we redefine the built-in dir because tempfile uses that interface
def __init__(self, dir=None):
'''It initiates the class.'''
self._name = tempfile.mkdtemp(dir=dir)
def get_name(self):
'Returns the path to the dir'
return self._name
name = property(get_name)
def close(self):
'''It removes the temp dir'''
if os.path.exists(self._name):
shutil.rmtree(self._name)
def __del__(self):
'''It removes the temp dir when instance is removed and the garbage
collector decides it'''
self.close()
-class NamedTemporaryFile_alway_closed(object):
- '''A temporal file that won't be deleted.
-
- It tries to be always closed and without a refernce to the real file
- '''
- def __init__(self, dir=None, suffix='', delete=False):
- 'The init'
- self.name = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
- self.open = True
- self.delete = delete
- def write(self, string):
- 'It writes something to the file, always append'
- if self.open:
- fhand = open(self.name, 'a')
- fhand.write(string)
- fhand.close()
- else:
- raise ValueError('Write not possible the file is closed')
- def read(self):
- 'It reads somthing from the file'
- if self.open:
- fhand = open(self.name)
- content = fhand.read()
- fhand.close()
- return content
- else:
- raise ValueError('Write not possible the file is closed')
- def flush(self):
- 'Just for compatibility in this class everything is flushed'
- pass
- def close(self):
- 'It closes the file'
- if self.delete:
- os.remove(self.name)
- self.open = False
-
-def NamedTemporaryFile(dir=None, delete=False, suffix=''):
- '''It creates a temporary file that won't be deleted when close
-
- This behaviour can be done with tempfile.NamedTemporaryFile in python > 2.6
- '''
- #pylint: disable-msg=W0613
- #delete is not being used, it's there as a reminder, once we start to use
- #python 2.6 this function should be removed
- #pylint: disable-msg=C0103
- #pylint: disable-msg=W0622
- #We want to mimick tempfile.NamedTemporaryFile
- fpath = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
- return open(fpath, 'w')
-
def copy_file_mode(fpath1, fpath2):
'It copies the os.stats mode from file1 to file2'
mode = os.stat(fpath1)[0]
os.chmod(fpath2, mode)
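A minimal sketch of the NamedTemporaryDir class above: the directory exists while the instance is alive and is removed by close() or, eventually, by garbage collection:

import os
from psubprocess.utils import NamedTemporaryDir

tmp_dir = NamedTemporaryDir()
print os.path.isdir(tmp_dir.name)   # True
tmp_dir.close()
print os.path.isdir(tmp_dir.name)   # False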
diff --git a/test/prunner_test.py b/test/prunner_test.py
index 2eb1dec..2f05be2 100644
--- a/test/prunner_test.py
+++ b/test/prunner_test.py
@@ -1,305 +1,305 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
import os
from psubprocess import Popen
from psubprocess.streams import STDIN
from test_utils import create_test_binary
class PRunnerTest(unittest.TestCase):
'It tests that we can parallelize processes'
@staticmethod
def test_file_in():
'It tests the most basic behaviour'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
- assert popen.wait() == 0 #waits till finishes and looks to the retcod
+ assert popen.wait() == 0 #waits till finishes and looks to the retcode
assert open(stdout.name).read() == 'hola'
in_file.close()
os.remove(bin)
@staticmethod
def test_job_no_in_stream():
'It tests that a job with no in stream is run splits times'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till finishes and looks to the retcode
assert open(stdout.name).read() == 'hola' * splits
assert open(stderr.name).read() == 'caracola' * splits
os.remove(bin)
@staticmethod
def test_stdin():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till finishes and looks to the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_infile_outfile():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till finishes and looks to the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_retcode():
'It tests that we get the correct returncode'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-r', '20'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
assert popen.wait() == 20 #waits till finishes and looks to the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def xtest_infile_outfile_condor():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
from psubprocess import CondorPopen
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
runner=CondorPopen,
runner_conf={'transfer_executable':True})
assert popen.wait() == 0 #waits till finishes and looks to the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_stdin_real_splitter():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = '>hola1\nhola2\n>hola3\nhola4\n>hola5\nhola6\n>hola7\nhola8\n'
content += '>hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':'>'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till finishes and looks to the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_2_infile_outfile():
'It tests that we can set 2 input files and an output file'
bin = create_test_binary()
#with infile
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file1 = NamedTemporaryFile()
in_file1.write(content)
in_file1.flush()
in_file2 = NamedTemporaryFile()
in_file2.write(content)
in_file2.flush()
out_file1 = NamedTemporaryFile()
out_file2 = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file1.name, '-t', out_file1.name])
cmd.extend(['-x', in_file2.name, '-z', out_file2.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-x', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'},
{'options': ('-z', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till finishes and looks to the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file1.name).read() == content
assert open(out_file2.name).read() == content
in_file1.close()
in_file2.close()
os.remove(bin)
@staticmethod
def test_kill_subjobs():
'It tests that we can kill the subjobs'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-w'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
assert popen.returncode is None
popen.kill()
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def test_nosplit():
'It tests that we can set some input files to be not split'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in',
'special':['no_split']},
{'options': ('-t', '--output'), 'io': 'out'}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till finishes and looks to the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content * splits
in_file.close()
os.remove(bin)
@staticmethod
def test_lots_splits_outfile():
'It tests that we can use a lot of splits with input and output files'
bin = create_test_binary()
- splits = 100
+ splits = 200
content = ['hola%d\n' % split for split in range(splits)]
content = ''.join(content)
in_file1 = NamedTemporaryFile()
in_file1.write(content)
in_file1.flush()
in_file2 = NamedTemporaryFile()
in_file2.write(content)
in_file2.flush()
out_file1 = NamedTemporaryFile()
out_file2 = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file1.name, '-t', out_file1.name])
cmd.extend(['-x', in_file2.name, '-z', out_file2.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-x', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'},
{'options': ('-z', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till finishes and looks to the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file1.name).read() == content
assert open(out_file2.name).read() == content
in_file1.close()
in_file2.close()
os.remove(bin)
if __name__ == "__main__":
- #import sys;sys.argv = ['', 'PRunnerTest.test_lots_splits_outfile']
+ #import sys;sys.argv = ['', 'PRunnerTest.test_file_in']
unittest.main()
diff --git a/test/test_utils.py b/test/test_utils.py
index 3c05a1b..8b8ed99 100644
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -1,85 +1,85 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from tempfile import NamedTemporaryFile
import os, stat, shutil
-TEST_BINARY = '''#!/usr/bin/env python
+TEST_BINARY = '''#!/usr/bin/env python2.6
import sys, shutil, os, time
args = sys.argv
#-o something send something to stdout
#-e something send something to stderr
#-i some_file send the file content to sdout
#-t some_file copy the -i file to -t file
#-x some_file
#-z some_file copy the -x file to -z file
#-s with stdin write stdin to stdout
#-r a number return this retcode
#are the commands in the argv?
arg_indexes = {}
for param in ('-o', '-e', '-i', '-t', '-s', '-r', '-x', '-z', '-w'):
try:
arg_indexes[param] = args.index(param)
except ValueError:
arg_indexes[param] = None
#stdout, stderr
if arg_indexes['-o']:
sys.stdout.write(args[arg_indexes['-o'] + 1])
if arg_indexes['-e']:
sys.stderr.write(args[arg_indexes['-e'] + 1])
#-i -t
if arg_indexes['-i'] and not arg_indexes['-t']:
sys.stdout.write(open(args[arg_indexes['-i'] + 1]).read())
elif arg_indexes['-i'] and arg_indexes['-t']:
shutil.copy(args[arg_indexes['-i'] + 1], args[arg_indexes['-t'] + 1])
if arg_indexes['-x'] and arg_indexes['-z']:
shutil.copy(args[arg_indexes['-x'] + 1], args[arg_indexes['-z'] + 1])
#stdin
if arg_indexes['-s']:
stdin = sys.stdin.read()
sys.stdout.write(stdin)
#retcode
if arg_indexes['-r']:
retcode = int(args[arg_indexes['-r'] + 1])
else:
retcode = 0
#wait
if arg_indexes['-w']:
time.sleep(50)
sys.exit(retcode)
'''
def create_test_binary():
'It creates a file with a test python binary in it'
fhand = NamedTemporaryFile(suffix='.py')
fhand.write(TEST_BINARY)
fhand.flush()
os.chmod(fhand.name, stat.S_IXOTH | stat.S_IRWXU)
fname = '/tmp/test_cmd.py'
shutil.copy(fhand.name, fname)
fhand.close()
#it should be executable
return fname
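Every test uses this helper with the same pattern: create the throw-away binary, run it, remove it. A minimal sketch of that pattern (subprocess.call is used here only for illustration):

import os, subprocess
from test_utils import create_test_binary

bin = create_test_binary()
retcode = subprocess.call([bin, '-o', 'hola'])   # writes 'hola' to stdout
os.remove(bin)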
|
JoseBlanca/psubprocess
|
526dc898e76f559989fb8f296dd8c5bd792cf4a7
|
the registered splitter can now be called using the cmd_def
|
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index 1fb2543..45a13df 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,487 +1,496 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
This module is useful when we have a non-parallel command to run in a
multiprocessor computer or in a multinode cluster. It will take the input files,
it will split them and it will run a subjob for every one of the splits. It will
wait for the subjobs to finish and it will join the output files generated
by all subjobs. At the end of the process we will get the same output files as if
the command wasn't run in parallel.
This approach will work with commands that process a lot of items. This module
divides the items into several sets and it assigns each of these sets to one new
subjob. These are the subjobs that will be run in parallel.
To do this it requires the parameters used by popen: cmd, stdin, stdout, stderr and
some extra information: runner, splits and cmd_def.
runner is optional and it should be a subprocess.Popen like class. If it's not
given subprocess.Popen will be used. This Popen will be the class used to run the
subjobs. If subprocess.Popen is used the subjobs will run in the processors of
the local node on several independent processes. If the Condor Popen is used
the subjobs will run in a condor cluster.
splits is the number of subjobs that we want to generate. If it's not given the
runner will provide a suitable number.
cmd_def is a list that defines how the cmd specifies the input and output files.
We need to tell Popen which are the input and output files in order to split
them and join them. The syntax for cmd_def is explained in the streams.py module
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
import os, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
from psubprocess.condor_runner import call
from psubprocess import condor_runner
-from psubprocess.splitters import (create_file_splitter_with_re,
+from psubprocess.splitters import (get_splitter,
create_non_splitter_splitter)
from psubprocess.utils import NamedTemporaryDir, copy_file_mode
from psubprocess.cmd_def_from_cmd import get_cmd_def_from_cmd
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
class Popen(object):
'''It parallelizes the given processes dividing them into subprocesses.
The interface is similar to subprocess.Popen to ease the use of this class,
although the functionality of this class is much more limited.
When an instance of this class is created a series of subjobs is launched.
When all subjobs are finished returncode will have an int, if they're still
running returncode will be None.
We can wait for all subjobs to finish using the wait method or we can
kill or terminate them using kill and terminate.
'''
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
'''It inits a Popen instance; it creates and runs the subjobs.
Like the subprocess.Popen it accepts stdin, stdout, stderr, but in this
case all of them should be files, PIPE will not work.
In the cmd_def list we have to tell this Popen how to locate the
input and output files in the cmd and how to split and join them. Look
for the cmd_format in the streams.py file.
keyword arguments:
cmd -- a list with the cmd to parallelize
cmd_def -- the cmd definition list (default [])
runner -- which runner to use (default subprocess.Popen)
runner_conf -- extra parameters for the runner (default {})
stdout -- a fhand to store the stdout (default None)
stderr -- a fhand to store the stderr (default None)
stdin -- a fhand with the stdin (default None)
splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
#is the cmd_def set in the command?
- cmd, cmd_def = get_cmd_def_from_cmd(cmd)
+ cmd, cmd_cmd_def = get_cmd_def_from_cmd(cmd)
+
+ if cmd_cmd_def:
+ cmd_def = cmd_cmd_def
+ elif cmd_def:
+ cmd_def = cmd_def
+ else:
+ cmd_def = []
if not cmd_def and stdin is not None:
raise ValueError('No cmd_def given but stdin present')
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
copy_file_mode('.', self._work_dir.name)
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
#launch every subjobs
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we have to be sure that stdout and stderr are open for write
if stdout:
stdout = open(stdout.name, 'w')
if stderr:
stderr = open(stderr.name, 'w')
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
#we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
'''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
'''Given a base cmd and a streams list it creates one modified cmd for
every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
#is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname and no path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream into the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
dir_ = NamedTemporaryDir(dir=work_dir)
work_dirs.append(dir_)
copy_file_mode('.', dir_.name)
#we have to do first the input files because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def do_we_have_to_split(stream_index):
'If the stream has to split a file it will return True'
split = None
stream = streams[stream_index]
#maybe they shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
split = False
#maybe there is no file to split
if (('fhand' in stream and stream['fhand'] is None) or
('fname' in stream and stream['fname'] is None) or
('fname' not in stream and 'fhand' not in stream)):
split = False
elif (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
split = True
return split
def to_be_split_first(stream1, stream2):
'It sorts the streams, the ones to be split go first'
split1 = do_we_have_to_split(stream1)
split2 = do_we_have_to_split(stream2)
return int(split1) - int(split2)
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
splitter = None
if 'special' in stream and 'no_split' in stream['special']:
splitter = create_non_splitter_splitter(copy_files=True)
elif 'splitter' not in stream:
msg = 'A splitter should be provided for every input stream, '
msg += 'missing for: ' + str(stream)
raise ValueError(msg)
else:
splitter = stream['splitter']
- #the splitter can be a re, in that case with create the function
+ #if the splitter is a function we assume that it will know how to
+ #split the given file, otherwise it should be a registered type of
+ #splitter or a regular expression
if '__call__' not in dir(splitter):
- splitter = create_file_splitter_with_re(splitter)
+ splitter = get_splitter(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
#the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different than splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
#we split the output stream files into several splits
output_splitter = create_non_splitter_splitter(copy_files=False)
for index in output_stream_indexes:
stream = streams[index]
#for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every in stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
joiner = None
if 'joiner' in stream:
joiner = stream['joiner']
else:
joiner = default_cat_joiner
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
default_cat_joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
#if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support kill
if 'kill' in dir(popen):
popen.kill()
else:
pid = popen.pid
call(['kill', '-9', str(pid)])
def terminate(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support terminate
if 'terminate' in dir(popen):
popen.terminate()
else:
pid = popen.pid
call(['kill', '-6', str(pid)])
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
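The key change in this commit is the splitter lookup: a cmd_def entry may now name a registered splitter kind instead of passing a function or a regular expression. A small sketch of what the new get_splitter resolves (fastq is the only registered kind at this point):

from psubprocess.splitters import get_splitter, fastq_splitter

assert get_splitter('fastq') is fastq_splitter   # registered kind, looked up by name
fasta_splitter = get_splitter('>')               # anything else becomes a regex splitter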
diff --git a/psubprocess/splitters.py b/psubprocess/splitters.py
index 1cef47a..e8f9eb5 100644
--- a/psubprocess/splitters.py
+++ b/psubprocess/splitters.py
@@ -1,225 +1,233 @@
'''
Created on 03/12/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import re, os, shutil
from psubprocess.utils import NamedTemporaryFile, copy_file_mode
from Bio.SeqIO.QualityIO import FastqGeneralIterator
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
This is the best way to create as many splits as possible with sizes as
similar as possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
def _items_in_file(fhand, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if expression.search(line):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
def _re_item_counter(fhand, expression):
'It counts how many times the expression is found in the file'
nitems = 0
for line in fhand:
if expression.search(line):
nitems += 1
return nitems
def _items_in_fastq(fhand, expression=None):
'It returns the fastq items'
for item in FastqGeneralIterator(fhand):
yield '@%s\n%s\n+\n%s\n' % (item)
def _fastq_items_counter(fhand, expression=None):
nitems = 0
for item in FastqGeneralIterator(fhand):
nitems += 1
return nitems
def _create_file_splitter(kind, expression=None):
'''Given an expression it creates a file splitter.
The expression can be a regex or a str.
The item in the file will be defined every time a line matches the
expression.
'''
item_counters = {'re': _re_item_counter,
'fastq': _fastq_items_counter}
item_splitters = {'re':_items_in_file,
'fastq':_items_in_fastq}
item_counter = item_counters[kind]
item_splitter = item_splitters[kind]
if expression is not None and isinstance(expression, str):
expression = re.compile(expression)
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
guaranteed to create as many splits as work dirs. If in the file there
are fewer items than work_dirs, some work_dirs will be left empty.
It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
fhand = open(fname, 'r')
nitems = item_counter(fhand, expression)
#how many splits are we going to create? and how many items will be in
#every split
#if there are more splits than items we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = item_splitter(fhand, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
copy_file_mode(fhand.name, ofh.name)
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
#we have to close the files otherwise we can run out of file
#descriptors in the os
if file_is_str:
new_files.append(ofh.name)
else:
new_files.append(ofh)
ofh.close()
splits_made += 1
return new_files
return splitter
fastq_splitter = _create_file_splitter(kind='fastq')
def create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or an str.
The item in the file will be defined every time a line matches the
expression.
'''
return _create_file_splitter(kind='re', expression=expression)
+def get_splitter(expression):
+ '''If the expression is a known splitter kind it returns it, otherwise it
+ creates a regular expression based splitter'''
+ if expression == 'fastq':
+ return fastq_splitter
+ else:
+ return create_file_splitter_with_re(expression)
+
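#A hedged usage sketch for get_splitter (illustrative, not in the original
#source): 'fastq' selects the FastqGeneralIterator based splitter, any other
#string is compiled as a regular expression that marks where each item starts.
#splitter = get_splitter('fastq')  #one item per fastq record
#splitter = get_splitter('>')      #one item per fasta header line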
def create_non_splitter_splitter(copy_files=False):
'''It creates a splitter function that will not split the given file.
The created splitter will create one file for every work_dir given. This
file can be empty (useful for the output streams) or a copy of the given
file (useful for the no_split input streams).
'''
def splitter(file_, work_dirs):
'''It creates one output file for every splits.
Every split will be located in one of the work_dirs.
It returns a list with the fpaths for the new files.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
new_fpaths = []
#we have to create nsplits
suffix = os.path.splitext(fname)[-1]
for split_index in range(nsplits):
work_dir = work_dirs[split_index]
#we use delete=False because this temp file is in a temp dir that
#will be completely deleted. If we use delete=True we get an error
#because the file might be already deleted when its __del__ method
#is called
ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
delete=False)
os.remove(ofh.name)
ofh_name = ofh.name
#we have to close the files otherwise we can run out of file
#descriptors in the os
ofh.close()
if copy_files:
#i've tried with os.symlink but condor does not like it
shutil.copyfile(fname, ofh_name)
#the file will be deleted
#what do we need the fname or the fhand?
if file_is_str:
new_fpaths.append(ofh.name)
else:
new_fpaths.append(ofh)
return new_fpaths
return splitter
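#An illustrative sketch (the file names and work_dirs are hypothetical): the
#non-splitter is used for output streams (one placeholder name per work_dir)
#and for no_split input streams (one full copy per work_dir).
#out_splitter = create_non_splitter_splitter(copy_files=False)
#out_names = out_splitter('result.txt', work_dirs)
#in_splitter = create_non_splitter_splitter(copy_files=True)
#in_copies = in_splitter('reference.fasta', work_dirs)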
|
JoseBlanca/psubprocess
|
1849b09fdac302aa9da116d3cf3d2bb22bd78159
|
improved setup.py
|
diff --git a/setup.py b/setup.py
index 6370d93..ca67e08 100644
--- a/setup.py
+++ b/setup.py
@@ -1,20 +1,73 @@
'''
Created on 25/03/2009
@author: jose blanca
'''
+#taken from django-tagging
+
+import os
+
+PACKAGE_DIR = 'psubprocess'
+SCRIPTS_DIR = 'scripts'
+
+def fullsplit(path, result=None):
+ """
+ Split a pathname into components (the opposite of os.path.join) in a
+ platform-neutral way.
+ """
+ if result is None:
+ result = []
+ head, tail = os.path.split(path)
+ if head == '':
+ return [tail] + result
+ if head == path:
+ return result
+ return fullsplit(head, [tail] + result)
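+# A quick illustration (a sketch, not part of the original setup.py):
+# fullsplit is the inverse of os.path.join in a platform-neutral way.
+assert fullsplit(os.path.join('a', 'b', 'c.py')) == ['a', 'b', 'c.py']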
+
+# Compile the list of packages available, because distutils doesn't have
+# an easy way to do this.
+packages, data_files, modules = [], [], []
+root_dir = os.path.dirname(__file__)
+pieces = fullsplit(root_dir)
+if pieces[-1] == '':
+ len_root_dir = len(pieces) - 1
+else:
+ len_root_dir = len(pieces)
+
+for dirpath, dirnames, filenames in os.walk(os.path.join(root_dir,
+ PACKAGE_DIR)):
+ if '__init__.py' in filenames:
+ package = '.'.join(fullsplit(dirpath)[len_root_dir:])
+ packages.append(package)
+ for filename in os.listdir(dirpath):
+ if (filename.startswith('.') or filename.startswith('_') or
+ not filename.endswith('.py')):
+ continue
+ modules.append(package + '.' + filename)
+ elif filenames:
+ data_files.append([dirpath, [os.path.join(dirpath, f) for f in filenames]])
+
+scripts = []
+for dirpath, dirnames, filenames in os.walk(os.path.join(root_dir,
+ SCRIPTS_DIR)):
+ for filename in filenames:
+ if filename == '__init__.py':
+ continue
+ elif filename.endswith('.py'):
+ scripts.append(os.path.join(dirpath, filename))
+
from setuptools import setup
setup(
# basic package data
- name = "psubprocess",
+ name = PACKAGE_DIR,
version = "0.0.1",
author='Jose Blanca, Peio Ziarsolo',
author_email='jblanca@btc.upv.es',
description='runs commands in parallel environments',
# package structure
- packages=['psubprocess'],
+ packages=packages,
package_dir={'':'.'},
requires=[],
- scripts=['scripts/run_in_parallel.py']
+ scripts=scripts,
)
|
JoseBlanca/psubprocess
|
c0e49963bd9080310ce97b21ae82240a5ee4fb6c
|
now the cmd_def can be given in the command line
|
diff --git a/psubprocess/cmd_def_from_cmd.py b/psubprocess/cmd_def_from_cmd.py
new file mode 100644
index 0000000..eb62a4e
--- /dev/null
+++ b/psubprocess/cmd_def_from_cmd.py
@@ -0,0 +1,65 @@
+'''
+Created on 22/12/2009
+
+@author: jose
+'''
+
+# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
+# This file is part of psubprocess.
+# psubprocess is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+
+# psubprocess is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
+
+def get_cmd_def_from_cmd(cmd):
+    'Given a cmd it returns a cmd_def inferred from it and a clean cmd'
+ cmd_def = []
+ new_cmd = []
+ for index, arg in enumerate(cmd):
+ #if this arg is not like >#whatever# or <#whatever# we don't process it
+ if not (arg[0] in ('>', '<') and arg[-1] == '#'):
+ new_cmd.append(arg)
+ continue
+ arg_def = {}
+ #is an input or an output?
+ if arg[0] == '>':
+ arg_def['io'] = 'in'
+ elif arg[0] == '<':
+ arg_def['io'] = 'out'
+ else:
+            msg = 'Wrong format in the first character of the argument: %s' % arg
+ raise ValueError(msg)
+ arg = arg[1:-1]
+ #now we split the section that defines the original argument
+ definition, arg = arg.split('#')
+ #the arg is cleaned now, we can add it to the command
+ new_cmd.append(arg)
+ #from the arg we have to extract the definition option
+ #if the arg begins with - we assume that it will be associated with the
+ #next item in the cmd
+ if arg[0] == '-':
+ arg_def['options'] = (arg.rstrip('-'), )
+ else:
+        #is an argument at the beginning or the end and has no option
+ #associated
+ #there's no -i or --input argument
+ arg_def['options'] = index
+
+    #now we can process the rest of the definition
+    #the items in the definition should be separated by ';'
+ for definition_item in definition.split(';'):
+ if not definition_item:
+ continue
+ key, value = definition_item.split('=')
+ arg_def[key] = value
+ cmd_def.append(arg_def)
+
+ return new_cmd, cmd_def
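+# An illustrative sketch of the inline syntax handled above (the file name is
+# hypothetical): '>' marks an input argument, '<' an output one, and the
+# 'key=value' pairs before the second '#' go into the argument definition:
+# get_cmd_def_from_cmd(['cat', '>splitter=>#seqs.fasta#']) returns
+# (['cat', 'seqs.fasta'], [{'options': 1, 'io': 'in', 'splitter': '>'}])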
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index 8e62ad0..1fb2543 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,484 +1,487 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
This module is useful when we have a non-parallel command to run in a
multiprocessor computer or in a multinode cluster. It will take the input files,
it will split them and it will run a subjob for every one of the splits. It will
wait for the subjobs to finish and it will join the output files generated
by all subjobs. At the end of the process we will get the same output files as if
the command wasn't run in parallel.
This approach will work with commands that process a lot of items. This module
divides the items into several sets and it assigns each of these sets to one new
subjob. These are the subjobs that will be run in parallel.
To do this it requires the parameters used by popen: cmd, stdin, stdout, stderr and
some extra information: runner, splits and cmd_def.
runner is optional and it should be a subprocess.Popen like class. If it's not
given subprocess.Popen will be used. This Popen will be the class used to run the
subjobs. If subprocess.Popen is used the subjobs will run in the processors of
the local node on several independent processes. If the Condor Popen is used
the subjobs will run in a condor cluster.
splits is the number of subjobs that we want to generate. If it's not given the
runner will provide a suitable number.
cmd_def is a list that defines how the cmd specifies the input and output files.
We need to tell Popen which are the input and output files in order to split
them and join them. The syntax for cmd_def is explained in the streams.py module
'''
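#A hedged usage sketch (not part of the module; the cmd and file names are
#hypothetical): split the '-i' input on fasta headers, run four subjobs and
#join the '-o' partial outputs back together.
#cmd = ['some_tool', '-i', 'seqs.fasta', '-o', 'result.txt']
#cmd_def = [{'options': ('-i',), 'io': 'in', 'splitter': '>'},
#           {'options': ('-o',), 'io': 'out'}]
#popen = Popen(cmd, cmd_def=cmd_def, splits=4)
#retcode = popen.wait()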
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
import os, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
from psubprocess.condor_runner import call
from psubprocess import condor_runner
from psubprocess.splitters import (create_file_splitter_with_re,
create_non_splitter_splitter)
from psubprocess.utils import NamedTemporaryDir, copy_file_mode
+from psubprocess.cmd_def_from_cmd import get_cmd_def_from_cmd
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
class Popen(object):
'''It parallelizes the given processes dividing them into subprocesses.
The interface is similar to subprocess.Popen to ease the use of this class,
although the functionality of this class is much more limited.
When an instance of this class is created a series of subjobs is launched.
When all subjobs are finished returncode will have an int; if they're still
running returncode will be None.
We can wait for all subjobs to finish using the wait method or we can
kill or terminate them using kill and terminate.
'''
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
'''It inits a Popen instance; it creates and runs the subjobs.
Like the subprocess.Popen it accepts stdin, stdout, stderr, but in this
case all of them should be files, PIPE will not work.
In the cmd_def list we have to tell this Popen how to locate the
input and output files in the cmd and how to split and join them. Look
for the cmd_format in the streams.py file.
keyword arguments:
cmd -- a list with the cmd to parallelize
cmd_def -- the cmd definition list (default [])
runner -- which runner to use (default subprocess.Popen)
runner_conf -- extra parameters for the runner (default {})
stdout -- a fhand to store the stdout (default None)
stderr -- a fhand to store the stderr (default None)
stdin -- a fhand with the stdin (default None)
splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
- if cmd_def is None:
- if stdin is not None:
- raise ValueError('No cmd_def given but stdin present')
- cmd_def = []
+ #is the cmd_def set in the command?
+ cmd, cmd_def = get_cmd_def_from_cmd(cmd)
+
+ if not cmd_def and stdin is not None:
+ raise ValueError('No cmd_def given but stdin present')
+
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
copy_file_mode('.', self._work_dir.name)
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
#launch every subjob
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we have to be sure that stdout and stderr are open for write
if stdout:
stdout = open(stdout.name, 'w')
if stderr:
stderr = open(stderr.name, 'w')
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
#we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
'''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
'''Given a base cmd and a streams list it creates one modified cmd for
every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
#is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname and not the path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream in the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
dir_ = NamedTemporaryDir(dir=work_dir)
work_dirs.append(dir_)
copy_file_mode('.', dir_.name)
#we have to do first the input files because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def do_we_have_to_split(stream_index):
'If the stream has to split a file it will return True'
split = None
stream = streams[stream_index]
#maybe they shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
split = False
#maybe there is no file to split
elif (('fhand' in stream and stream['fhand'] is None) or
('fname' in stream and stream['fname'] is None) or
('fname' not in stream and 'fhand' not in stream)):
split = False
elif (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
split = True
return split
def to_be_split_first(stream1, stream2):
'It sorts the streams, the ones to be split go first'
split1 = do_we_have_to_split(stream1)
split2 = do_we_have_to_split(stream2)
#a negative result puts stream1 first, so the streams that have to be
#split really go before the rest
return int(bool(split2)) - int(bool(split1))
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
splitter = None
if 'special' in stream and 'no_split' in stream['special']:
splitter = create_non_splitter_splitter(copy_files=True)
elif 'splitter' not in stream:
msg = 'A splitter should be provided for every input stream, '
msg += 'missing for: ' + str(stream)
raise ValueError(msg)
else:
splitter = stream['splitter']
#the splitter can be a re, in that case we create the function
if '__call__' not in dir(splitter):
splitter = create_file_splitter_with_re(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
#the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different than splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
#we split the output stream files into several splits
output_splitter = create_non_splitter_splitter(copy_files=False)
for index in output_stream_indexes:
stream = streams[index]
#for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every out stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
joiner = None
if 'joiner' in stream:
joiner = stream['joiner']
else:
joiner = default_cat_joiner
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
#if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support kill
if 'kill' in dir(popen):
popen.kill()
else:
pid = popen.pid
call(['kill', '-9', str(pid)])
def terminate(self):
'It terminates all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support terminate
if 'terminate' in dir(popen):
popen.terminate()
else:
pid = popen.pid
call(['kill', '-6', str(pid)])
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
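#An illustrative sketch (hypothetical file names): this is how the partial
#outputs of the subjobs get concatenated into the single requested output:
#default_cat_joiner('result.txt', ['part0.txt', 'part1.txt', 'part2.txt'])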
diff --git a/test/cmd_def_from_cmd_test.py b/test/cmd_def_from_cmd_test.py
new file mode 100644
index 0000000..6622fdb
--- /dev/null
+++ b/test/cmd_def_from_cmd_test.py
@@ -0,0 +1,91 @@
+'''
+Created on 22/12/2009
+
+@author: jose
+'''
+
+# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
+# This file is part of psubprocess.
+# psubprocess is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+
+# psubprocess is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
+
+import unittest, os
+from tempfile import NamedTemporaryFile
+
+from test_utils import create_test_binary
+from psubprocess import Popen
+from psubprocess.cmd_def_from_cmd import get_cmd_def_from_cmd
+
+
+class CmddefFromCmdTest(unittest.TestCase):
+ 'It checks that we can build cmd definitions looking at the command given'
+
+ @staticmethod
+ def test_no_cmd_def():
+        'A command without the special syntax returns an empty cmd_def'
+ cmd = ['cat', 'hola.txt']
+ processed_cmd, cmd_def = get_cmd_def_from_cmd(cmd)
+ assert processed_cmd == cmd
+ assert not cmd_def
+
+ @staticmethod
+ def test_splitter():
+ 'We can get the splitter from the cmd'
+ #with splitter
+ cmd = ['cat', '>splitter=>#hola.txt#']
+ processed_cmd, cmd_def = get_cmd_def_from_cmd(cmd)
+ assert processed_cmd == ['cat', 'hola.txt']
+ assert cmd_def == [{'options': 1, 'io': 'in', 'splitter':'>'}]
+
+ #with no splitter
+ cmd = ['cat', '>#hola.txt#']
+ processed_cmd, cmd_def = get_cmd_def_from_cmd(cmd)
+ assert processed_cmd == ['cat', 'hola.txt']
+ assert cmd_def == [{'options': 1, 'io': 'in'}]
+
+ #with a parameter
+ cmd = ['cat', '>#-i#', 'hola.txt']
+ processed_cmd, cmd_def = get_cmd_def_from_cmd(cmd)
+        assert processed_cmd == ['cat', '-i', 'hola.txt']
+ assert cmd_def == [{'options': ('-i',), 'io': 'in'}]
+
+
+ @staticmethod
+ def test_prunner_with_cmddef():
+ 'It tests that we can run the prunner setting the cmd_def in the cmd'
+ bin = create_test_binary()
+ #with infile
+ in_file = NamedTemporaryFile()
+ content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
+ content += 'hola9\nhola10|n'
+ in_file.write(content)
+ in_file.flush()
+ out_file = NamedTemporaryFile()
+
+ cmd = [bin]
+ cmd.extend(['>splitter=#-i#', in_file.name,
+ '<#-t#', out_file.name])
+ stdout = NamedTemporaryFile()
+ stderr = NamedTemporaryFile()
+ popen = Popen(cmd, stdout=stdout, stderr=stderr)
+        assert popen.wait() == 0 #waits till it finishes and checks the retcode
+ assert not open(stdout.name).read()
+ assert not open(stderr.name).read()
+ assert open(out_file.name).read() == content
+ in_file.close()
+ os.remove(bin)
+
+
+if __name__ == "__main__":
+ #import sys;sys.argv = ['', 'Test.testName']
+ unittest.main()
\ No newline at end of file
|
JoseBlanca/psubprocess
|
1223502b3c36205ecc999676422107e550ed0843
|
unknown changes
|
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index 8125dfe..8e62ad0 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,479 +1,484 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
This module is useful when we have a non-parallel command to run in a
multiprocessor computer or in a multinode cluster. It will take the input files,
it will split them and it will run a subjob for every one of the splits. It will
wait for the subjobs to finish and it will join the output files generated
by all subjobs. At the end of the process we will get the same output files as if
the command wasn't run in parallel.
This approach will work with commands that process a lot of items. This module
divides the items into several sets and it assigns each of these sets to one new
subjob. These are the subjobs that will be run in parallel.
To do this it requires the parameters used by popen: cmd, stdin, stdout, stderr and
some extra information: runner, splits and cmd_def.
runner is optional and it should be a subprocess.Popen like class. If it's not
given subprocess.Popen will be used. This Popen will be the class used to run the
subjobs. If subprocess.Popen is used the subjobs will run in the processors of
the local node on several independent processes. If the Condor Popen is used
the subjobs will run in a condor cluster.
splits is the number of subjobs that we want to generate. If it's not given the
runner will provide a suitable number.
cmd_def is a list that defines how the cmd specifies the input and output files.
We need to tell Popen which are the input and output files in order to split
them and join them. The syntax for cmd_def is explained in the streams.py module
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
import os, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
from psubprocess.condor_runner import call
from psubprocess import condor_runner
from psubprocess.splitters import (create_file_splitter_with_re,
create_non_splitter_splitter)
from psubprocess.utils import NamedTemporaryDir, copy_file_mode
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
class Popen(object):
'''It paralellizes the given processes dividing them into subprocesses.
The interface is similar to subprocess.Popen to ease the use of this class,
although the functionality of this class is much mor limited.
When an instance of this class is created a series of subjobs is launched.
When all subjobs are finished returncode will have an int, if they're still
running returncode will be None.
We can wait for all subjobs to finnish using the wait method or we can
kill or terminate them using kill and terminate.
'''
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
'''It inits a Popen instance; it creates and runs the subjobs.
Like the subprocess.Popen it accepts stdin, stdout, stderr, but in this
case all of them should be files, PIPE will not work.
In the cmd_def list we have to tell this Popen how to locate the
input and output files in the cmd and how to split and join them. Look
for the cmd_format in the streams.py file.
keyword arguments:
cmd -- a list with the cmd to parallelize
cmd_def -- the cmd definition list (default [])
runner -- which runner to use (default subprocess.Popen)
runner_conf -- extra parameters for the runner (default {})
stdout -- a fhand to store the stdout (default None)
stderr -- a fhand to store the stderr (default None)
stdin -- a fhand with the stdin (default None)
splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
if cmd_def is None:
if stdin is not None:
raise ValueError('No cmd_def given but stdin present')
cmd_def = []
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
copy_file_mode('.', self._work_dir.name)
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
#launch every subjob
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
+ #we have to be sure that stdout and stderr are open for write
+ if stdout:
+ stdout = open(stdout.name, 'w')
+ if stderr:
+ stderr = open(stderr.name, 'w')
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
#we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
'''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
'''Given a base cmd and a streams list it creates one modified cmd for
every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
#is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname and not the path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream in the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
dir_ = NamedTemporaryDir(dir=work_dir)
work_dirs.append(dir_)
copy_file_mode('.', dir_.name)
#we have to do first the input files because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def do_we_have_to_split(stream_index):
'If the stream has to split a file it will return True'
split = None
stream = streams[stream_index]
#maybe they shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
split = False
#maybe there is no file to split
elif (('fhand' in stream and stream['fhand'] is None) or
('fname' in stream and stream['fname'] is None) or
('fname' not in stream and 'fhand' not in stream)):
split = False
elif (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
split = True
return split
def to_be_split_first(stream1, stream2):
'It sorts the streams, the ones to be split go first'
split1 = do_we_have_to_split(stream1)
split2 = do_we_have_to_split(stream2)
#a negative result puts stream1 first, so the streams that have to be
#split really go before the rest
return int(bool(split2)) - int(bool(split1))
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
splitter = None
if 'special' in stream and 'no_split' in stream['special']:
splitter = create_non_splitter_splitter(copy_files=True)
elif 'splitter' not in stream:
msg = 'A splitter should be provided for every input stream, '
msg += 'missing for: ' + str(stream)
raise ValueError(msg)
else:
splitter = stream['splitter']
#the splitter can be a re, in that case we create the function
if '__call__' not in dir(splitter):
splitter = create_file_splitter_with_re(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
#the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different than splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
#we split the output stream files into several splits
output_splitter = create_non_splitter_splitter(copy_files=False)
for index in output_stream_indexes:
stream = streams[index]
#for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every out stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
joiner = None
if 'joiner' in stream:
joiner = stream['joiner']
else:
joiner = default_cat_joiner
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
#if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support kill
if 'kill' in dir(popen):
popen.kill()
else:
pid = popen.pid
call(['kill', '-9', str(pid)])
def terminate(self):
'It terminates all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support terminate
if 'terminate' in dir(popen):
popen.terminate()
else:
pid = popen.pid
call(['kill', '-6', str(pid)])
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
diff --git a/psubprocess/utils.py b/psubprocess/utils.py
index 5058a4f..a565a88 100644
--- a/psubprocess/utils.py
+++ b/psubprocess/utils.py
@@ -1,61 +1,97 @@
'''
Created on 03/12/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import tempfile, os, shutil
class NamedTemporaryDir(object):
'''This class creates temporary directories '''
#pylint: disable-msg=W0622
#we redefine the built-in dir because tempfile uses that interface
def __init__(self, dir=None):
'''It initiates the class.'''
self._name = tempfile.mkdtemp(dir=dir)
def get_name(self):
'Returns the path to the dir'
return self._name
name = property(get_name)
def close(self):
'''It removes the temp dir'''
if os.path.exists(self._name):
shutil.rmtree(self._name)
def __del__(self):
'''It removes the temp dir when instance is removed and the garbage
collector decides it'''
self.close()
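#A brief usage sketch (not in the original module): the directory lives until
#close is called or the instance is garbage collected.
#tmp_dir = NamedTemporaryDir()
#fpath = os.path.join(tmp_dir.name, 'split.txt')
#tmp_dir.close()  #removes the whole directory tree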
+class NamedTemporaryFile_alway_closed(object):
+    '''A temporary file that won't be deleted.
+
+    It tries to be always closed and without a reference to the real file
+ '''
+ def __init__(self, dir=None, suffix='', delete=False):
+ 'The init'
+ self.name = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
+ self.open = True
+ self.delete = delete
+ def write(self, string):
+ 'It writes something to the file, always append'
+ if self.open:
+ fhand = open(self.name, 'a')
+ fhand.write(string)
+ fhand.close()
+ else:
+            raise ValueError('Write not possible, the file is closed')
+    def read(self):
+        'It reads something from the file'
+        if self.open:
+            fhand = open(self.name)
+            content = fhand.read()
+            fhand.close()
+            return content
+        else:
+            raise ValueError('Read not possible, the file is closed')
+    def flush(self):
+        'Just for compatibility; in this class everything is flushed at once'
+ pass
+ def close(self):
+ 'It closes the file'
+ if self.delete:
+ os.remove(self.name)
+ self.open = False
+
def NamedTemporaryFile(dir=None, delete=False, suffix=''):
'''It creates a temporary file that won't be deleted when closed.
This behaviour is provided by tempfile.NamedTemporaryFile in python >= 2.6
'''
#pylint: disable-msg=W0613
#delete is not being used, it's there as a reminder, once we start to use
#python 2.6 this function should be removed
#pylint: disable-msg=C0103
#pylint: disable-msg=W0622
#We want to mimick tempfile.NamedTemporaryFile
fpath = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
return open(fpath, 'w')
def copy_file_mode(fpath1, fpath2):
'It copies the os.stat mode from file1 to file2'
mode = os.stat(fpath1)[0]
os.chmod(fpath2, mode)
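#A quick sketch (hypothetical paths): it gives a split copy the same
#permission bits as the original file, e.g. it keeps the executable bit.
#copy_file_mode('original_binary', 'copy_of_the_binary')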
diff --git a/test/prunner_test.py b/test/prunner_test.py
index bb5c647..2eb1dec 100644
--- a/test/prunner_test.py
+++ b/test/prunner_test.py
@@ -1,305 +1,305 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
import os
from psubprocess import Popen
from psubprocess.streams import STDIN
from test_utils import create_test_binary
class PRunnerTest(unittest.TestCase):
'It tests that we can parallelize processes'
@staticmethod
def test_file_in():
'It tests the most basic behaviour'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
in_file.close()
os.remove(bin)
@staticmethod
def test_job_no_in_stream():
'It tests that a job with no in stream is run splits times'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola' * splits
assert open(stderr.name).read() == 'caracola' * splits
os.remove(bin)
@staticmethod
def test_stdin():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_infile_outfile():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_retcode():
'It tests that we get the correct returncode'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-r', '20'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
assert popen.wait() == 20 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def xtest_infile_outfile_condor():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
from psubprocess import CondorPopen
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
runner=CondorPopen,
runner_conf={'transfer_executable':True})
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_stdin_real_splitter():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = '>hola1\nhola2\n>hola3\nhola4\n>hola5\nhola6\n>hola7\nhola8\n'
content += '>hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':'>'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_2_infile_outfile():
'It tests that we can set 2 input files and an output file'
bin = create_test_binary()
#with infile
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file1 = NamedTemporaryFile()
in_file1.write(content)
in_file1.flush()
in_file2 = NamedTemporaryFile()
in_file2.write(content)
in_file2.flush()
out_file1 = NamedTemporaryFile()
out_file2 = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file1.name, '-t', out_file1.name])
cmd.extend(['-x', in_file2.name, '-z', out_file2.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-x', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'},
{'options': ('-z', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file1.name).read() == content
assert open(out_file2.name).read() == content
in_file1.close()
in_file2.close()
os.remove(bin)
@staticmethod
def test_kill_subjobs():
'It tests that we can kill the subjobs'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-w'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
assert popen.returncode is None
popen.kill()
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def test_nosplit():
'It tests that we can set some input files to be not split'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in',
'special':['no_split']},
{'options': ('-t', '--output'), 'io': 'out'}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content * splits
in_file.close()
os.remove(bin)
@staticmethod
def test_lots_splits_outfile():
'It tests that we can use a lot of splits with several in and out files'
bin = create_test_binary()
- splits = 15
+ splits = 100
content = ['hola%d\n' % split for split in range(splits)]
content = ''.join(content)
in_file1 = NamedTemporaryFile()
in_file1.write(content)
in_file1.flush()
in_file2 = NamedTemporaryFile()
in_file2.write(content)
in_file2.flush()
out_file1 = NamedTemporaryFile()
out_file2 = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file1.name, '-t', out_file1.name])
cmd.extend(['-x', in_file2.name, '-z', out_file2.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-x', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'},
{'options': ('-z', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file1.name).read() == content
assert open(out_file2.name).read() == content
in_file1.close()
in_file2.close()
os.remove(bin)
if __name__ == "__main__":
- import sys;sys.argv = ['', 'PRunnerTest.test_lots_splits_outfile']
+ #import sys;sys.argv = ['', 'PRunnerTest.test_lots_splits_outfile']
unittest.main()
|
JoseBlanca/psubprocess
|
b4572635a28686bb0a4529cfba2073e339ed73e3
|
too many open files bug reproduced
|
diff --git a/psubprocess/condor_runner.py b/psubprocess/condor_runner.py
index d3212fe..d4ed412 100644
--- a/psubprocess/condor_runner.py
+++ b/psubprocess/condor_runner.py
@@ -1,339 +1,334 @@
'''The main aim of this module is to provide an easy way to launch condor jobs.
Condor is a specialized workload management system for compute-intensive jobs.
Like other full-featured batch systems, Condor provides a job queueing
mechanism, scheduling policy, priority scheme, resource monitoring, and
resource management. More on condor on its web site:
http://www.cs.wisc.edu/condor/
The interface used is similar to the subprocess.Popen one.
Besides the standard parameters like cmd, stdout, stderr, and stdin, this condor
Popen takes a couple of extra parameters, cmd_def and runner_conf. The cmd_def
syntax is explained in the streams.py file. Condor Popen needs the cmd_def to
be able to tell from the cmd which are the input and output files. The input
files should be specified in the condor job file when we want
to transfer them to the computing nodes. Besides, the input and output files
in the cmd should have no paths, otherwise the command would fail on the other
machines. That's why we need cmd_def.
Created on 14/07/2009
'''
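#An illustrative cmd_def sketch (hypothetical tool and file names): for a
#command like "mytool -i reads.txt -o result.txt" the streams could be
#declared as
#    cmd_def = [{'options': ('-i', '--input'), 'io': 'in'},
#               {'options': ('-o', '--output'), 'io': 'out'}]
#See streams.py for the full syntax.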
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
-from tempfile import NamedTemporaryFile
+from psubprocess.utils import NamedTemporaryFile
import subprocess, signal, os.path
+from subprocess import Popen as PythonPopen
+
from psubprocess.streams import get_streams_from_cmd
def call(cmd):
'It calls a command and it returns stdout, stderr and retcode'
def subprocess_setup():
''' Python installs a SIGPIPE handler by default. This is usually not
what non-Python subprocesses expect. Taken from this url:
http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009/07/02#
2009-07-02-python-sigpipe'''
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
-# if stdin is None:
-# pstdin = None
-# else:
-# pstdin = subprocess.PIPE
- process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
+ process = PythonPopen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=subprocess_setup)
-# if stdin is None:
-# stdout, stderr = process.communicate()
-# else:
-# a = stdin.read()
-# print a
-# stdout, stderr = subprocess.Popen.stdin = stdin
-# print stdin.read()
-# stdout, stderr = process.communicate(stdin)
stdout, stderr = process.communicate()
retcode = process.returncode
return stdout, stderr, retcode
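#Typical use (illustrative): stdout, stderr, retcode = call(['condor_q'])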
def write_condor_job_file(fhand, parameters):
'It writes a condor job file using the given fhand'
to_print = 'Executable = %s\nArguments = "%s"\nUniverse = vanilla\n' % \
(parameters['executable'], parameters['arguments'])
to_print += 'Log = %s\n' % parameters['log_file'].name
if parameters['transfer_files']:
to_print += 'When_to_transfer_output = ON_EXIT\n'
to_print += 'Getenv = True\n'
if ('transfer_executable' in parameters and
parameters['transfer_executable']):
to_print += 'Transfer_executable = %s\n' % \
parameters['transfer_executable']
if 'input_fnames' in parameters and parameters['input_fnames']:
ins = ','.join(parameters['input_fnames'])
to_print += 'Transfer_input_files = %s\n' % ins
if parameters['transfer_files']:
to_print += 'Should_transfer_files = IF_NEEDED\n'
if 'requirements' in parameters:
to_print += "Requirements = %s\n" % parameters['requirements']
if 'stdout' in parameters:
to_print += 'Output = %s\n' % parameters['stdout'].name
if 'stderr' in parameters:
to_print += 'Error = %s\n' % parameters['stderr'].name
if 'stdin' in parameters:
to_print += 'Input = %s\n' % parameters['stdin'].name
to_print += 'Queue\n'
fhand.write(to_print)
fhand.flush()
+ fhand.close()
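#For illustration (hypothetical paths), a job file produced by this writer
#looks like the expectation in test/condor_runner_test.py:
#    Executable = /bin/ls
#    Arguments = "-i in1 -j in2"
#    Universe = vanilla
#    Log = /tmp/job.log
#    When_to_transfer_output = ON_EXIT
#    Getenv = True
#    Transfer_executable = True
#    Transfer_input_files = in1,in2
#    Should_transfer_files = IF_NEEDED
#    Output = /tmp/out
#    Error = /tmp/err
#    Input = /tmp/in
#    Queue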
class Popen(object):
'''It launches and controls a condor job.
The job is launched when an instance is created. After that we can get the
cluster id with the pid property. The rest of the interface is very similar
to the subprocess.Popen one. There's no communicate method because there's
no support for PIPE.
'''
def __init__(self, cmd, cmd_def=None, runner_conf=None, stdout=None,
stderr=None, stdin=None):
'''It launches a condor job.
The interface is similar to the subprocess.Popen one, although there are
some differences.
stdout, stdin and stderr should be file handlers, there's no support for
PIPEs. The extra parameter cmd_def is required if we need to transfer
the input and output files to the computing nodes of the cluster using
the condor file transfer mechanism. The cmd_def syntax is explained in
the streams.py file.
runner_conf is a dict that admits several parameters that control how
condor is run:
- transfer_files: do we want to transfer the files using the condor
transfer file mechanism? (default True)
- condor_log: the condor log file. If it's not given Popen will
create a condor log file in the tempdir.
- transfer_executable: do we want to transfer the executable?
(default False)
- requirements: The requirements line for the condor job file.
(default None)
'''
#we use the same parameters as subprocess.Popen
#pylint: disable-msg=R0913
if cmd_def is None:
cmd_def = []
#runner conf
if runner_conf is None:
runner_conf = {}
#some defaults
if 'transfer_files' not in runner_conf:
runner_conf['transfer_files'] = True
if 'condor_log' not in runner_conf:
self._log_file = NamedTemporaryFile(suffix='.log')
+ self._log_file.close()
else:
self._log_file = runner_conf['condor_log']
+ #print 'condor_log', self._log_file
#create condor job file
condor_job_file = self._create_condor_job_file(cmd, cmd_def,
self._log_file,
runner_conf,
stdout, stderr, stdin)
self._condor_job_file = condor_job_file
#print open(condor_job_file.name).read()
#launch condor
self._retcode = None
self._cluster_number = None
+ #print 'launching'
self._launch_condor(condor_job_file)
+ #print 'launched'
def _launch_condor(self, condor_job_file):
'Given the condor_job_file it launches the condor job'
try:
stdout, stderr, retcode = call(['condor_submit',
condor_job_file.name])
except OSError, msg:
raise OSError('condor_submit not found in your path.' + str(msg))
if retcode:
msg = 'There was a problem with condor_submit: ' + stderr
raise RuntimeError(msg)
#the condor cluster number is given by condor_submit
#1 job(s) submitted to cluster 15.
for line in stdout.splitlines():
if 'submitted to cluster' in line:
self._cluster_number = line.strip().strip('.').split()[-1]
def _get_pid(self):
'It returns the condor cluster number'
return self._cluster_number
pid = property(_get_pid)
def _get_returncode(self):
'It returns the return code'
return self._retcode
returncode = property(_get_returncode)
@staticmethod
def _remove_paths_from_cmd(cmd, streams, conf):
'''It removes the absolute and relative paths from the cmd,
it returns the modified cmd'''
cmd_mod = cmd[:]
for stream in streams:
if 'fname' not in stream:
continue
fpath = stream['fname']
#for the output files we can't deal with transferring files with
#paths. Condor will deliver those files into the initialdir, not
#where we expected.
if (stream['io'] != 'in' and conf['transfer_files']
and os.path.split(fpath)[-1] != fpath):
msg = 'output files with paths are not transferable'
raise ValueError(msg)
index = cmd_mod.index(fpath)
fpath = os.path.split(fpath)[-1]
cmd_mod[index] = fpath
return cmd_mod
def _create_condor_job_file(self, cmd, cmd_def, log_file, runner_conf,
stdout, stderr, stdin):
'Given a cmd and the cmd_def it returns the condor job file'
#streams
streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
#we need some parameters to write the condor file
parameters = {}
#the executable
binary = cmd[0]
#the binary should be an absolute path
if not os.path.isabs(binary):
#the path to the binary could be relative
if os.sep in binary:
#we make the path absolute
binary = os.path.abspath(binary)
else:
#we have to look in the system $PATH
binary = call(['which', binary])[0].strip()
parameters['executable'] = binary
parameters['log_file'] = log_file
#the cmd shouldn't have absolute paths in the files because they will be
#transferred to another node in the condor working dir and they wouldn't
#be found with an absolute path
cmd_no_path = self._remove_paths_from_cmd(cmd, streams, runner_conf)
parameters['arguments'] = ' '.join(cmd_no_path[1:])
if stdout is not None:
parameters['stdout'] = stdout
if stderr is not None:
parameters['stderr'] = stderr
if stdin is not None:
parameters['stdin'] = stdin
transfer_bin = False
if 'transfer_executable' in runner_conf:
transfer_bin = runner_conf['transfer_executable']
parameters['transfer_executable'] = transfer_bin
transfer_files = runner_conf['transfer_files']
parameters['transfer_files'] = transfer_files
if 'requirements' in runner_conf:
parameters['requirements'] = runner_conf['requirements']
in_fnames = []
for stream in streams:
if stream['io'] == 'in':
fname = None
if 'fname' in stream:
fname = stream['fname']
else:
fname = stream['fhand'].name
in_fnames.append(fname)
parameters['input_fnames'] = in_fnames
#now we can create the job file
condor_job_file = NamedTemporaryFile()
write_condor_job_file(condor_job_file, parameters=parameters)
return condor_job_file
def _update_retcode(self):
'It updates the retcode looking at the log file, it returns the retcode'
for line in open(self._log_file.name):
if 'return value' in line:
ret = line.split('return value')[1].strip().strip(')')
self._retcode = int(ret)
return self._retcode
def poll(self):
'It checks if condor has run our condor cluster'
cluster_number = self._cluster_number
cmd = ['condor_q', cluster_number,
'-format', '"%d.\n"', 'ClusterId']
stdout, stderr, retcode = call(cmd)
if retcode:
msg = 'There was a problem with condor_q: ' + stderr
raise RuntimeError(msg)
if cluster_number not in stdout:
#the job is finished
return self._update_retcode()
return self._retcode
def wait(self):
'It waits until the condor job is finished'
try:
stderr, retcode = call(['condor_wait', self._log_file.name])[1:]
except OSError:
raise OSError('condor_wait not found in your path')
if retcode:
msg = 'There was a problem with condor_wait: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def kill(self):
'It runs condor_rm for the condor job'
try:
stderr, retcode = call(['condor_rm', self.pid])[1:]
except OSError:
raise OSError('condor_rm not found in your path')
if retcode:
msg = 'There was a problem with condor_rm: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def terminate(self):
'It runs condor_rm for the condor job'
self.kill()
def get_default_splits():
'It returns a suggested number of splits for this Popen runner'
try:
stdout, stderr, retcode = call(['condor_status', '-total'])
except OSError:
raise OSError('condor_status not found in your path')
if retcode:
msg = 'There was a problem with condor_status: ' + stderr
raise RuntimeError(msg)
for line in stdout.splitlines():
line = line.strip().lower()
if line.startswith('total') and 'owner' not in line:
return int(line.split()[1]) * 2
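For orientation, a minimal usage sketch of this condor runner; it is only a
sketch under assumptions: a working condor pool, and 'mytool' plus the file
names are hypothetical.

from psubprocess.condor_runner import Popen

#stdout and stderr must be real file handles, PIPE is not supported
stdout = open('job.out', 'w')
stderr = open('job.err', 'w')
#-i marks an input file so condor can transfer it to the execute node
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'}]
popen = Popen(['mytool', '-i', 'data.txt'], cmd_def=cmd_def,
              runner_conf={'transfer_executable': True},
              stdout=stdout, stderr=stderr)
retcode = popen.wait()  #blocks via condor_wait and reads the condor log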
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index ca180e1..8125dfe 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,480 +1,479 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
This module is useful when we have a non-parallel command to run in a
multiprocessor computer or in a multinode cluster. It will take the input files,
it will split them and it will run a subjob for every one of the splits. It will
wait for the subjobs to finish and it will join the output files generated
by all subjobs. At the end of the process we will get the same output files as if
the command wasn't run in parallel.
This approach will work with commands that process a lot of items. This module
divides the items into several sets and it assigns each of these sets to one new
subjob. These are the subjobs that will be run in parallel.
To do this it requires the parameters used by Popen: cmd, stdin, stdout, stderr and
some extra information: runner, splits and cmd_def.
runner is optional and it should be a subprocess.Popen like class. If it's not
given, subprocess.Popen will be used. This Popen will be the class used to run the
subjobs. If subprocess.Popen is used the subjobs will run in the processors of
the local node on several independent processes. If the Condor Popen is used
the subjobs will run in a condor cluster.
splits is the number of subjobs that we want to generate. If it's not given the
runner will provide a suitable number.
cmd_def is a list that defines how the cmd specifies the input and output files.
We need to tell Popen which are the input and output files in order to split
them and join them. The syntax for cmd_def is explained in the streams.py module
'''
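#For example (illustrative): Popen(cmd, cmd_def=cmd_def, splits=8) runs eight
#local subprocesses, while passing runner=CondorPopen instead would queue the
#subjobs on a condor cluster.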
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
import os, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
from psubprocess.condor_runner import call
from psubprocess import condor_runner
from psubprocess.splitters import (create_file_splitter_with_re,
create_non_splitter_splitter)
from psubprocess.utils import NamedTemporaryDir, copy_file_mode
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
class Popen(object):
'''It parallelizes the given processes dividing them into subprocesses.
The interface is similar to subprocess.Popen to ease the use of this class,
although the functionality of this class is much more limited.
When an instance of this class is created a series of subjobs is launched.
When all subjobs are finished returncode will have an int, if they're still
running returncode will be None.
We can wait for all subjobs to finish using the wait method or we can
kill or terminate them using kill and terminate.
'''
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
'''It inits a Popen instance; it creates and runs the subjobs.
Like the subprocess.Popen it accepts stdin, stdout, stderr, but in this
case all of them should be files, PIPE will not work.
In the cmd_def list we have to tell this Popen how to locate the
input and output files in the cmd and how to split and join them. Look
for the cmd_def format in the streams.py file.
keyword arguments:
cmd -- a list with the cmd to parallelize
cmd_def -- the cmd definition list (default [])
runner -- which runner to use (default subprocess.Popen)
runner_conf -- extra parameters for the runner (default {})
stdout -- a fhand to store the stdout (default None)
stderr -- a fhand to store the stderr (default None)
stdin -- a fhand with the stdin (default None)
splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
if cmd_def is None:
if stdin is not None:
raise ValueError('No cmd_def given but stdin present')
cmd_def = []
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
copy_file_mode('.', self._work_dir.name)
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
#launch every subjob
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
#we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
'''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
'''Given a base cmd and a streams list it creates one modified cmd for
every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
#is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname and no path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream in the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
dir_ = NamedTemporaryDir(dir=work_dir)
work_dirs.append(dir_)
copy_file_mode('.', dir_.name)
#we have to do the input files first because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def do_we_have_to_split(stream_index):
'If the stream has to split a file it will return True'
split = None
stream = streams[stream_index]
#maybe they shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
split = False
#maybe there is no file to split
if (('fhand' in stream and stream['fhand'] is None) or
('fname' in stream and stream['fname'] is None) or
('fname' not in stream and 'fhand' not in stream)):
split = False
elif (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
split = True
return split
def to_be_split_first(stream1, stream2):
'It sorts the streams, the ones to be split go first'
split1 = do_we_have_to_split(stream1)
split2 = do_we_have_to_split(stream2)
return int(split2) - int(split1)
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
splitter = None
if 'special' in stream and 'no_split' in stream['special']:
splitter = create_non_splitter_splitter(copy_files=True)
elif 'splitter' not in stream:
msg = 'A splitter should be provided for every input stream, '
msg += 'missing for: ' + str(stream)
raise ValueError(msg)
else:
splitter = stream['splitter']
#the splitter can be a re, in that case we create the function
if '__call__' not in dir(splitter):
splitter = create_file_splitter_with_re(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
#the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different from splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
#we split the output stream files into several splits
output_splitter = create_non_splitter_splitter(copy_files=False)
for index in output_stream_indexes:
stream = streams[index]
#for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every out stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
-
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
joiner = None
if 'joiner' in stream:
joiner = stream['joiner']
else:
joiner = default_cat_joiner
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
#if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support kill
if 'kill' in dir(popen):
popen.kill()
else:
pid = popen.pid
call(['kill', '-9', str(pid)])
def terminate(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support terminate
if 'terminate' in dir(popen):
popen.terminate()
else:
pid = popen.pid
call(['kill', '-6', str(pid)])
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
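For reference, a minimal sketch of driving this parallel Popen; the command
and file names are hypothetical, the pattern mirrors test/prunner_test.py.

from psubprocess import Popen

stdout = open('all.out', 'w')
stderr = open('all.err', 'w')
#an empty regexp splitter treats every line as one item; the partial outputs
#are joined back by concatenation (default_cat_joiner) after wait()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter': ''},
           {'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(['mytool', '-i', 'items.txt', '-t', 'result.txt'],
              cmd_def=cmd_def, splits=4, stdout=stdout, stderr=stderr)
assert popen.wait() == 0  #waits for every subjob and joins result.txt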
diff --git a/psubprocess/splitters.py b/psubprocess/splitters.py
index 5d9632c..1cef47a 100644
--- a/psubprocess/splitters.py
+++ b/psubprocess/splitters.py
@@ -1,224 +1,225 @@
'''
Created on 03/12/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import re, os, shutil
from psubprocess.utils import NamedTemporaryFile, copy_file_mode
from Bio.SeqIO.QualityIO import FastqGeneralIterator
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
This is the best way to create splits whose sizes are as similar as
possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
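#A worked example (illustrative): _calculate_divisions(10, 4) returns
#((2, 3), (2, 2)), that is, 2 splits with 3 items plus 2 splits with 2 items,
#which spreads the 10 items over 4 splits as evenly as possible.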
def _items_in_file(fhand, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if expression.search(line):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
def _re_item_counter(fhand, expression):
'It counts how many times the expression is found in the file'
nitems = 0
for line in fhand:
if expression.search(line):
nitems += 1
return nitems
def _items_in_fastq(fhand, expression=None):
'It returns the fastq items'
for item in FastqGeneralIterator(fhand):
yield '@%s\n%s\n+\n%s\n' % (item)
def _fastq_items_counter(fhand, expression=None):
nitems = 0
for item in FastqGeneralIterator(fhand):
nitems += 1
return nitems
def _create_file_splitter(kind, expression=None):
'''Given an expression it creates a file splitter.
The expression can be a regex or an str.
The item in the file will be defined every time a line matches the
expression.
'''
item_counters = {'re': _re_item_counter,
'fastq': _fastq_items_counter}
item_splitters = {'re':_items_in_file,
'fastq':_items_in_fastq}
item_counter = item_counters[kind]
item_splitter = item_splitters[kind]
if expression is not None and isinstance(expression, str):
expression = re.compile(expression)
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
guaranteed to create as many splits as work dirs. If the file has
fewer items than work_dirs some work_dirs will be left empty.
It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
fhand = open(fname, 'r')
nitems = item_counter(fhand, expression)
#how many splits are we going to create? and how many items will be in
#every split?
#if there are fewer items than splits we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = item_splitter(fhand, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
copy_file_mode(fhand.name, ofh.name)
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
#we have to close the files otherwise we can run out of files
#in the os filesystem
- ofh.close()
if file_is_str:
new_files.append(ofh.name)
else:
new_files.append(ofh)
+ ofh.close()
splits_made += 1
return new_files
return splitter
fastq_splitter = _create_file_splitter(kind='fastq')
def create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or an str.
The item in the file will be defined every time a line matches the
expression.
'''
return _create_file_splitter(kind='re', expression=expression)
def create_non_splitter_splitter(copy_files=False):
'''It creates a splitter function that will not split the given file.
The created splitter will create one file for every work_dir given. This
file can be empty (useful for the output streams), or a copy of the given
file (useful for the no_split input streams).
'''
def splitter(file_, work_dirs):
'''It creates one output file for every split.
Every split will be located in one of the work_dirs.
It returns a list with the fpaths for the new files.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
new_fpaths = []
#we have to create nsplits
suffix = os.path.splitext(fname)[-1]
for split_index in range(nsplits):
work_dir = work_dirs[split_index]
#we use delete=False because this temp file is in a temp dir that
#will be completely deleted. If we use delete=True we get an error
#because the file might be already deleted when its __del__ method
#is called
ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
delete=False)
+ os.remove(ofh.name)
+ ofh_name = ofh.name
+ #we have to close the files otherwise we can run out of files
+ #in the os filesystem
+ ofh.close()
+
if copy_files:
- os.remove(ofh.name)
- ofh_name = ofh.name
- #we have to close the files otherwise we can run out of files
- #in the os filesystem
- ofh.close()
#i've tried with os.symlink but condor does not like it
shutil.copyfile(fname, ofh_name)
#the file will be deleted
#what do we need, the fname or the fhand?
if file_is_str:
new_fpaths.append(ofh.name)
else:
new_fpaths.append(ofh)
return new_fpaths
return splitter
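As a quick illustration of the splitter contract; the fasta file name is
hypothetical and NamedTemporaryDir comes from psubprocess.utils.

from psubprocess.splitters import create_file_splitter_with_re
from psubprocess.utils import NamedTemporaryDir

#one work dir per desired split
work_dirs = [NamedTemporaryDir() for split in range(4)]
#a new item starts on every line that matches '>'
splitter = create_file_splitter_with_re('>')
split_fnames = splitter('seqs.fasta', work_dirs)
#split_fnames holds at most four paths, one per work dir that received items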
diff --git a/test/condor_runner_test.py b/test/condor_runner_test.py
index 5c705fb..2c65cc2 100644
--- a/test/condor_runner_test.py
+++ b/test/condor_runner_test.py
@@ -1,195 +1,196 @@
'''
Created on 14/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
-from tempfile import NamedTemporaryFile
-from StringIO import StringIO
+from tempfile import NamedTemporaryFile, mkstemp
import os
from psubprocess.condor_runner import (write_condor_job_file, Popen,
get_default_splits, call)
from test_utils import create_test_binary
class CondorRunnerTest(unittest.TestCase):
'It tests the condor runner'
@staticmethod
def test_write_condor_job_file():
'It tests that we can write a condor job file with the right parameters'
fhand1 = NamedTemporaryFile()
fhand2 = NamedTemporaryFile()
flog = NamedTemporaryFile()
stderr_ = NamedTemporaryFile()
stdout_ = NamedTemporaryFile()
stdin_ = NamedTemporaryFile()
expected = '''Executable = /bin/ls
Arguments = "-i %s -j %s"
Universe = vanilla
Log = %s
When_to_transfer_output = ON_EXIT
Getenv = True
Transfer_executable = True
Transfer_input_files = %s,%s
Should_transfer_files = IF_NEEDED
Output = %s
Error = %s
Input = %s
Queue
''' % (fhand1.name, fhand2.name, flog.name, fhand1.name, fhand2.name,
stdout_.name, stderr_.name, stdin_.name)
- fhand = StringIO()
+ fhand = open(mkstemp()[1], 'w')
+
parameters = {'executable':'/bin/ls', 'log_file':flog,
'input_fnames':[fhand1.name, fhand2.name],
'arguments':'-i %s -j %s' % (fhand1.name, fhand2.name),
'transfer_executable':True, 'transfer_files':True,
'stdout':stdout_, 'stderr':stderr_, 'stdin':stdin_}
write_condor_job_file(fhand, parameters=parameters)
- condor = fhand.getvalue()
+ condor = open(fhand.name).read()
assert condor == expected
+ os.remove(fhand.name)
@staticmethod
def test_run_condor_stdout():
'It tests that we can run a condor job and retrieve stdout and stderr'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
assert open(stderr.name).read() == 'caracola'
os.remove(bin)
@staticmethod
def test_run_condor_stdin():
'It tests that we can run a condor job with stdin'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-s'])
stdin = NamedTemporaryFile()
stdout = NamedTemporaryFile()
stdin.write('hola')
stdin.flush()
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stdin=stdin)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
os.remove(bin)
@staticmethod
def test_run_condor_retcode():
'It tests that we can run a condor job and get the retcode'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-r', '10'])
popen = Popen(cmd, runner_conf={'transfer_executable':True})
assert popen.wait() == 10 #waits till it finishes and checks the retcode
os.remove(bin)
@staticmethod
def test_run_condor_in_file():
'It tests that we can run a condor job with an input file'
bin = create_test_binary()
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'}]
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
os.remove(bin)
def test_run_condor_in_out_file(self):
'It tests that we can run a condor job with an output file'
bin = create_test_binary()
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
out_file = open('output.txt', 'w')
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr, cmd_def=cmd_def)
popen.wait()
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(out_file.name).read() == 'hola'
os.remove(out_file.name)
#an output file with a path won't be allowed unless the transfer file
#mechanism is disabled
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'},
{'options': ('-t', '--output'), 'io': 'out'}]
try:
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr, cmd_def=cmd_def)
self.fail('ValueError expected')
#pylint: disable-msg=W0704
except ValueError:
pass
os.remove(bin)
@staticmethod
def test_default_splits():
'It tests that we can get a suggested number of splits'
assert get_default_splits() > 0
assert isinstance(get_default_splits(), int)
@staticmethod
def test_run_condor_kill():
'It tests that we can kill a condor job'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-w'])
popen = Popen(cmd, runner_conf={'transfer_executable':True})
pid = str(popen.pid)
popen.kill()
stdout = call(['condor_q', pid])[0]
assert pid not in stdout
os.remove(bin)
if __name__ == "__main__":
- #import sys;sys.argv = ['', 'Test.testName']
+ #import sys;sys.argv = ['', 'CondorRunnerTest.test_write_condor_job_file']
unittest.main()
\ No newline at end of file
diff --git a/test/prunner_test.py b/test/prunner_test.py
index 82eb6d6..bb5c647 100644
--- a/test/prunner_test.py
+++ b/test/prunner_test.py
@@ -1,268 +1,305 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
import os
from psubprocess import Popen
from psubprocess.streams import STDIN
from test_utils import create_test_binary
class PRunnerTest(unittest.TestCase):
'It tests that we can parallelize processes'
@staticmethod
def test_file_in():
'It tests the most basic behaviour'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
in_file.close()
os.remove(bin)
@staticmethod
def test_job_no_in_stream():
'It tests that a job with no in stream is run splits times'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola' * splits
assert open(stderr.name).read() == 'caracola' * splits
os.remove(bin)
@staticmethod
def test_stdin():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_infile_outfile():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_retcode():
'It tests that we get the correct returncode'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-r', '20'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
assert popen.wait() == 20 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def xtest_infile_outfile_condor():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
from psubprocess import CondorPopen
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
runner=CondorPopen,
runner_conf={'transfer_executable':True})
- assert popen.wait() == 0 #waits till finishes and looks to the retcod
+ assert popen.wait() == 0 #waits till finishes and looks to the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_stdin_real_splitter():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = '>hola1\nhola2\n>hola3\nhola4\n>hola5\nhola6\n>hola7\nhola8\n'
content += '>hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':'>'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
- assert popen.wait() == 0 #waits till finishes and looks to the retcod
+ assert popen.wait() == 0 #waits till finishes and looks to the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_2_infile_outfile():
'It tests that we can set 2 input files and an output file'
bin = create_test_binary()
#with infile
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file1 = NamedTemporaryFile()
in_file1.write(content)
in_file1.flush()
in_file2 = NamedTemporaryFile()
in_file2.write(content)
in_file2.flush()
out_file1 = NamedTemporaryFile()
out_file2 = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file1.name, '-t', out_file1.name])
cmd.extend(['-x', in_file2.name, '-z', out_file2.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-x', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'},
{'options': ('-z', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file1.name).read() == content
assert open(out_file2.name).read() == content
in_file1.close()
in_file2.close()
os.remove(bin)
@staticmethod
def test_kill_subjobs():
'It tests that we can kill the subjobs'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-w'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
assert popen.returncode is None
popen.kill()
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def test_nosplit():
'It tests that we can set some input files to be not split'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in',
'special':['no_split']},
{'options': ('-t', '--output'), 'io': 'out'}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content * splits
in_file.close()
os.remove(bin)
+ @staticmethod
+ def test_lots_splits_outfile():
+ 'It tests that we can run with a large number of splits'
+ bin = create_test_binary()
+
+ splits = 15
+ content = ['hola%d\n' % split for split in range(splits)]
+ content = ''.join(content)
+ in_file1 = NamedTemporaryFile()
+ in_file1.write(content)
+ in_file1.flush()
+ in_file2 = NamedTemporaryFile()
+ in_file2.write(content)
+ in_file2.flush()
+ out_file1 = NamedTemporaryFile()
+ out_file2 = NamedTemporaryFile()
+
+ cmd = [bin]
+ cmd.extend(['-i', in_file1.name, '-t', out_file1.name])
+ cmd.extend(['-x', in_file2.name, '-z', out_file2.name])
+ stdout = NamedTemporaryFile()
+ stderr = NamedTemporaryFile()
+ cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
+ {'options': ('-x', '--input'), 'io': 'in', 'splitter':''},
+ {'options': ('-t', '--output'), 'io': 'out'},
+ {'options': ('-z', '--output'), 'io': 'out'}]
+ popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
+ splits=splits)
+ assert popen.wait() == 0 #waits till it finishes and checks the retcode
+ assert not open(stdout.name).read()
+ assert not open(stderr.name).read()
+ assert open(out_file1.name).read() == content
+ assert open(out_file2.name).read() == content
+ in_file1.close()
+ in_file2.close()
+ os.remove(bin)
+
if __name__ == "__main__":
- #import sys;sys.argv = ['', 'Test.testName']
+ import sys;sys.argv = ['', 'PRunnerTest.test_lots_splits_outfile']
unittest.main()
|
JoseBlanca/psubprocess
|
95b7f40fed3d181c5f45e5312545b69ca6923fa7
|
some minor cleaning in the condor job file writer
|
diff --git a/psubprocess/condor_runner.py b/psubprocess/condor_runner.py
index 2a280f2..d3212fe 100644
--- a/psubprocess/condor_runner.py
+++ b/psubprocess/condor_runner.py
@@ -1,340 +1,339 @@
'''The main aim of this module is to provide an easy way to launch condor jobs.
Condor is a specialized workload management system for compute-intensive jobs.
Like other full-featured batch systems, Condor provides a job queueing
mechanism, scheduling policy, priority scheme, resource monitoring, and
resource management. More on condor on its web site:
http://www.cs.wisc.edu/condor/
The interface used is similar to the subprocess.Popen one.
Besides the standard parameters like cmd, stdout, stderr, and stdin, this condor
Popen takes a couple of extra parameters, cmd_def and runner_conf. The cmd_def
syntax is explained in the streams.py file. Condor Popen needs the cmd_def to
be able to tell from the cmd which are the input and output files. The input
files should be specified in the condor job file when we want
to transfer them to the computing nodes. Besides, the input and output files
in the cmd should have no paths, otherwise the command would fail on the other
machines. That's why we need cmd_def.
Created on 14/07/2009
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from tempfile import NamedTemporaryFile
import subprocess, signal, os.path
from psubprocess.streams import get_streams_from_cmd
def call(cmd):
'It calls a command and it returns stdout, stderr and retcode'
def subprocess_setup():
''' Python installs a SIGPIPE handler by default. This is usually not
what non-Python subprocesses expect. Taken from this url:
http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009/07/02#
2009-07-02-python-sigpipe'''
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
# if stdin is None:
# pstdin = None
# else:
# pstdin = subprocess.PIPE
process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=subprocess_setup)
# if stdin is None:
# stdout, stderr = process.communicate()
# else:
# a = stdin.read()
# print a
# stdout, stderr = subprocess.Popen.stdin = stdin
# print stdin.read()
# stdout, stderr = process.communicate(stdin)
stdout, stderr = process.communicate()
retcode = process.returncode
return stdout, stderr, retcode
def write_condor_job_file(fhand, parameters):
'It writes a condor job file using the given fhand'
to_print = 'Executable = %s\nArguments = "%s"\nUniverse = vanilla\n' % \
(parameters['executable'], parameters['arguments'])
- fhand.write(to_print)
- to_print = 'Log = %s\n' % parameters['log_file'].name
- fhand.write(to_print)
+
+ to_print += 'Log = %s\n' % parameters['log_file'].name
+
if parameters['transfer_files']:
- to_print = 'When_to_transfer_output = ON_EXIT\n'
- fhand.write(to_print)
- to_print = 'Getenv = True\n'
- fhand.write(to_print)
+ to_print += 'When_to_transfer_output = ON_EXIT\n'
+
+ to_print += 'Getenv = True\n'
+
if ('transfer_executable' in parameters and
parameters['transfer_executable']):
- to_print = 'Transfer_executable = %s\n' % \
+ to_print += 'Transfer_executable = %s\n' % \
parameters['transfer_executable']
- fhand.write(to_print)
+
if 'input_fnames' in parameters and parameters['input_fnames']:
ins = ','.join(parameters['input_fnames'])
- to_print = 'Transfer_input_files = %s\n' % ins
- fhand.write(to_print)
+ to_print += 'Transfer_input_files = %s\n' % ins
+
if parameters['transfer_files']:
- to_print = 'Should_transfer_files = IF_NEEDED\n'
- fhand.write(to_print)
+ to_print += 'Should_transfer_files = IF_NEEDED\n'
+
if 'requirements' in parameters:
- to_print = "Requirements = %s\n" % parameters['requirements']
- fhand.write(to_print)
+ to_print += "Requirements = %s\n" % parameters['requirements']
+
if 'stdout' in parameters:
- to_print = 'Output = %s\n' % parameters['stdout'].name
- fhand.write(to_print)
+ to_print += 'Output = %s\n' % parameters['stdout'].name
+
if 'stderr' in parameters:
- to_print = 'Error = %s\n' % parameters['stderr'].name
- fhand.write(to_print)
+ to_print += 'Error = %s\n' % parameters['stderr'].name
+
if 'stdin' in parameters:
- to_print = 'Input = %s\n' % parameters['stdin'].name
- fhand.write(to_print)
- to_print = 'Queue\n'
- fhand.write(to_print)
+ to_print += 'Input = %s\n' % parameters['stdin'].name
+ to_print += 'Queue\n'
+ fhand.write(to_print)
fhand.flush()
class Popen(object):
'''It launches and controls a condor job.
The job is launched when an instance is created. After that we can get the
cluster id with the pid property. The rest of the interface is very similar
to the subprocess.Popen one. There's no communicate method because there's
no support for PIPE.
'''
def __init__(self, cmd, cmd_def=None, runner_conf=None, stdout=None,
stderr=None, stdin=None):
'''It launches a condor job.
The interface is similar to the subprocess.Popen one, although there are
some differences.
stdout, stdin and stderr should be file handlers, there's no support for
PIPEs. The extra parameter cmd_def is required if we need to transfer
the input and output files to the computing nodes of the cluster using
the condor file transfer mechanism. The cmd_def syntax is explained in
the streams.py file.
runner_conf is a dict that admits several parameters that control how
condor is run:
- transfer_files: do we want to transfer the files using the condor
transfer file mechanism? (default True)
- condor_log: the condor log file. If it's not given Popen will
create a condor log file in the tempdir.
- transfer_executable: do we want to transfer the executable?
(default False)
- requirements: The requirements line for the condor job file.
(default None)
'''
#we use the same parameters as subprocess.Popen
#pylint: disable-msg=R0913
if cmd_def is None:
cmd_def = []
#runner conf
if runner_conf is None:
runner_conf = {}
#some defaults
if 'transfer_files' not in runner_conf:
runner_conf['transfer_files'] = True
if 'condor_log' not in runner_conf:
self._log_file = NamedTemporaryFile(suffix='.log')
else:
self._log_file = runner_conf['condor_log']
#create condor job file
condor_job_file = self._create_condor_job_file(cmd, cmd_def,
self._log_file,
runner_conf,
stdout, stderr, stdin)
self._condor_job_file = condor_job_file
#print open(condor_job_file.name).read()
#launch condor
self._retcode = None
self._cluster_number = None
self._launch_condor(condor_job_file)
def _launch_condor(self, condor_job_file):
'Given the condor_job_file it launches the condor job'
try:
stdout, stderr, retcode = call(['condor_submit',
condor_job_file.name])
except OSError, msg:
raise OSError('condor_submit not found in your path.' + str(msg))
if retcode:
msg = 'There was a problem with condor_submit: ' + stderr
raise RuntimeError(msg)
#the condor cluster number is given by condor_submit
#1 job(s) submitted to cluster 15.
for line in stdout.splitlines():
if 'submitted to cluster' in line:
self._cluster_number = line.strip().strip('.').split()[-1]
def _get_pid(self):
'It returns the condor cluster number'
return self._cluster_number
pid = property(_get_pid)
def _get_returncode(self):
'It returns the return code'
return self._retcode
returncode = property(_get_returncode)
@staticmethod
def _remove_paths_from_cmd(cmd, streams, conf):
'''It removes the absolute and relative paths from the cmd,
it returns the modified cmd'''
cmd_mod = cmd[:]
for stream in streams:
if 'fname' not in stream:
continue
fpath = stream['fname']
#for the output files we can't deal with transferring files with
#paths. Condor will deliver those files into the initialdir, not
#where we expected.
if (stream['io'] != 'in' and conf['transfer_files']
and os.path.split(fpath)[-1] != fpath):
msg = 'output files with paths are not transferable'
raise ValueError(msg)
index = cmd_mod.index(fpath)
fpath = os.path.split(fpath)[-1]
cmd_mod[index] = fpath
return cmd_mod
def _create_condor_job_file(self, cmd, cmd_def, log_file, runner_conf,
stdout, stderr, stdin):
'Given a cmd and the cmd_def it returns the condor job file'
#streams
streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
#we need some parameters to write the condor file
parameters = {}
#the executable
binary = cmd[0]
#the binary should be an absolute path
if not os.path.isabs(binary):
#the path to the binary could be relative
if os.sep in binary:
#we make the path absolute
binary = os.path.abspath(binary)
else:
#we have to look in the system $PATH
binary = call(['which', binary])[0].strip()
parameters['executable'] = binary
parameters['log_file'] = log_file
#the cmd shouldn't have absolute path in the files because they will be
#transferred to another node in the condor working dir and they wouldn't
#be found with an absolute path
cmd_no_path = self._remove_paths_from_cmd(cmd, streams, runner_conf)
parameters['arguments'] = ' '.join(cmd_no_path[1:])
if stdout is not None:
parameters['stdout'] = stdout
if stderr is not None:
parameters['stderr'] = stderr
if stdin is not None:
parameters['stdin'] = stdin
transfer_bin = False
if 'transfer_executable' in runner_conf:
transfer_bin = runner_conf['transfer_executable']
parameters['transfer_executable'] = transfer_bin
transfer_files = runner_conf['transfer_files']
parameters['transfer_files'] = str(transfer_files)
if 'requirements' in runner_conf:
parameters['requirements'] = runner_conf['requirements']
in_fnames = []
for stream in streams:
if stream['io'] == 'in':
fname = None
if 'fname' in stream:
fname = stream['fname']
else:
fname = stream['fhand'].name
in_fnames.append(fname)
parameters['input_fnames'] = in_fnames
#now we can create the job file
condor_job_file = NamedTemporaryFile()
write_condor_job_file(condor_job_file, parameters=parameters)
return condor_job_file
def _update_retcode(self):
'It updates the retcode looking at the log file, it returns the retcode'
for line in open(self._log_file.name):
if 'return value' in line:
ret = line.split('return value')[1].strip().strip(')')
self._retcode = int(ret)
return self._retcode
def poll(self):
'It checks if condor has run our condor cluster'
cluster_number = self._cluster_number
cmd = ['condor_q', cluster_number,
'-format', '"%d.\n"', 'ClusterId']
stdout, stderr, retcode = call(cmd)
if retcode:
msg = 'There was a problem with condor_q: ' + stderr
raise RuntimeError(msg)
if cluster_number not in stdout:
#the job is finished
return self._update_retcode()
return self._retcode
def wait(self):
'It waits until the condor job is finished'
try:
stderr, retcode = call(['condor_wait', self._log_file.name])[1:]
except OSError:
raise OSError('condor_wait not found in your path')
if retcode:
msg = 'There was a problem with condor_wait: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def kill(self):
'It runs condor_rm for the condor job'
try:
stderr, retcode = call(['condor_rm', self.pid])[1:]
except OSError:
raise OSError('condor_rm not found in your path')
if retcode:
msg = 'There was a problem with condor_rm: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def terminate(self):
'It runs condor_rm for the condor job'
self.kill()
def get_default_splits():
'It returns a suggested number of splits for this Popen runner'
try:
stdout, stderr, retcode = call(['condor_status', '-total'])
except OSError:
raise OSError('condor_status not found in your path')
if retcode:
msg = 'There was a problem with condor_status: ' + stderr
raise RuntimeError(msg)
for line in stdout.splitlines():
line = line.strip().lower()
if line.startswith('total') and 'owner' not in line:
return int(line.split()[1]) * 2
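To put the module together, here is a minimal, hypothetical sketch of running a command through this condor Popen; it assumes a working condor pool, the file names are illustrative and the import mirrors the one used in the tests:

from psubprocess import CondorPopen

out = open('job.out', 'w')   # file handlers only, PIPE is not supported
err = open('job.err', 'w')
popen = CondorPopen(['/bin/echo', 'hello'], stdout=out, stderr=err,
                    runner_conf={'transfer_files': True})
print popen.pid         # the condor cluster number parsed from condor_submit
retcode = popen.wait()  # blocks on condor_wait, then reads the condor log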
|
JoseBlanca/psubprocess
|
60c304c54d8e64a24e74c87044336c3c6f0c8586
|
refactoring of the splitter functions and adding of a new fastq splitter
|
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index 5601d52..ca180e1 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,674 +1,480 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
This module is useful when we have a non-parallel command to run in a
multiprocessor computer or in a multinode cluster. It will take the input files,
it will split them and it will run a subjob for everyone of the splits. It will
wait for the subjobs to finnish and it will join the output files generated
by all subjobs. At the end of the process will get the same output files as if
the command wasn't run in parallel.
This approach will work with commands that process a lot of items. This module
divides the items into several sets and it assigns each of these sets to one new
subjob. These are the subjobs that will be run in parallel.
To do this it requires the parameters used by Popen: cmd, stdin, stdout, stderr and
some extra information: runner, splits and cmd_def.
runner is optional and it should be a subprocess.Popen like class. If it's not
given, subprocess.Popen will be used. This Popen will be the class used to run the
subjobs. If subprocess.Popen is used the subjobs will run in the processors of
the local node on several independent processes. If the Condor Popen is used
the subjobs will run in a condor cluster.
splits is the number of subjobs that we want to generate. If it's not given the
runner will provide a suitable number.
cmd_def is a dict that defines how the cmd defines the input and output files.
We need to tell Popen which are the input and output files in order to split
them and join them. The syntax for cmd_def is explained in the streams.py module
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
-import os, tempfile, shutil, copy
+import os, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
from psubprocess.condor_runner import call
from psubprocess import condor_runner
+from psubprocess.splitters import (create_file_splitter_with_re,
+ create_non_splitter_splitter)
+from psubprocess.utils import NamedTemporaryDir, copy_file_mode
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
-class NamedTemporaryDir(object):
- '''This class creates temporary directories '''
- #pylint: disable-msg=W0622
- #we redefine the built-in dir because tempfile uses that interface
- def __init__(self, dir=None):
- '''It initiates the class.'''
- self._name = tempfile.mkdtemp(dir=dir)
- def get_name(self):
- 'Returns the path to the dir'
- return self._name
- name = property(get_name)
- def close(self):
- '''It removes the temp dir'''
- if os.path.exists(self._name):
- shutil.rmtree(self._name)
- def __del__(self):
- '''It removes the temp dir when instance is removed and the garbage
- collector decides it'''
- self.close()
-
-def NamedTemporaryFile(dir=None, delete=False, suffix=''):
- '''It creates a temporary file that won't be deleted when closed
-
- This behaviour can be done with tempfile.NamedTemporaryFile in python > 2.6
- '''
- #pylint: disable-msg=W0613
- #delete is not being used, it's there as a reminder, once we start to use
- #python 2.6 this function should be removed
- #pylint: disable-msg=C0103
- #pylint: disable-msg=W0622
- #We want to mimic tempfile.NamedTemporaryFile
- fpath = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
- return open(fpath, 'w')
-
-def copy_file_mode(fpath1, fpath2):
- 'It copies the os.stats mode from file1 to file2'
- mode = os.stat(fpath1)[0]
- os.chmod(fpath2, mode)
class Popen(object):
'''It parallelizes the given processes dividing them into subprocesses.
The interface is similar to subprocess.Popen to ease the use of this class,
although the functionality of this class is much more limited.
When an instance of this class is created a series of subjobs is launched.
When all subjobs are finished returncode will have an int, if they're still
running returncode will be None.
We can wait for all subjobs to finish using the wait method or we can
kill or terminate them using kill and terminate.
'''
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
'''It inits a Popen instance; it creates and runs the subjobs.
Like the subprocess.Popen it accepts stdin, stdout, stderr, but in this
case all of them should be files, PIPE will not work.
In the cmd_def list we have to tell this Popen how to locate the
input and output files in the cmd and how to split and join them. Look
for the cmd_format in the streams.py file.
keyword arguments:
cmd -- a list with the cmd to parallelize
cmd_def -- the cmd definition list (default [])
runner -- which runner to use (default subprocess.Popen)
runner_conf -- extra parameters for the runner (default {})
stdout -- a fhand to store the stdout (default None)
stderr -- a fhand to store the stderr (default None)
stdin -- a fhand with the stdin (default None)
splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
if cmd_def is None:
if stdin is not None:
raise ValueError('No cmd_def given but stdin present')
cmd_def = []
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
copy_file_mode('.', self._work_dir.name)
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
#launch every subjobs
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
#we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
'''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
'''Given a base cmd and a streams list it creates one modified cmd for
every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
#is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname with no path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream in the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
dir_ = NamedTemporaryDir(dir=work_dir)
work_dirs.append(dir_)
copy_file_mode('.', dir_.name)
#we have to do first the input files because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def do_we_have_to_split(stream_index):
'If the stream has to split a file it will return True'
split = None
stream = streams[stream_index]
#maybe they shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
split = False
#maybe there is no file to split
if (('fhand' in stream and stream['fhand'] is None) or
('fname' in stream and stream['fname'] is None) or
('fname' not in stream and 'fhand' not in stream)):
split = False
elif (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
split = True
return split
def to_be_split_first(stream1, stream2):
'It sorts the streams, the ones to be split go first'
split1 = do_we_have_to_split(stream1)
split2 = do_we_have_to_split(stream2)
return int(split1) - int(split2)
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
splitter = None
if 'special' in stream and 'no_split' in stream['special']:
- splitter = _create_non_splitter_splitter(copy_files=True)
+ splitter = create_non_splitter_splitter(copy_files=True)
elif 'splitter' not in stream:
msg = 'A splitter should be provided for every input stream, '
msg += 'missing for: ' + str(stream)
raise ValueError(msg)
else:
splitter = stream['splitter']
#the splitter can be a re, in that case we create the function
if '__call__' not in dir(splitter):
- splitter = _create_file_splitter_with_re(splitter)
+ splitter = create_file_splitter_with_re(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
#the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the number of files can be different from splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
#we split the output stream files into several splits
- output_splitter = _create_non_splitter_splitter(copy_files=False)
+ output_splitter = create_non_splitter_splitter(copy_files=False)
for index in output_stream_indexes:
stream = streams[index]
#for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every in stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
joiner = None
if 'joiner' in stream:
joiner = stream['joiner']
else:
joiner = default_cat_joiner
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
#if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support kill
if 'kill' in dir(popen):
popen.kill()
else:
pid = popen.pid
call(['kill', '-9', str(pid)])
def terminate(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
#until 2.6 subprocess.Popen does not support terminate
if 'terminate' in dir(popen):
popen.terminate()
else:
pid = popen.pid
call(['kill', '-6', str(pid)])
-def _calculate_divisions(num_items, splits):
- '''It calculates how many items should be in every split to divide
- the num_items into splits.
- Not all splits will have an equal number of items, it will return a tuple
- with two tuples inside:
- ((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
- splits = num_fragments_1 + num_fragments_2
- num_items_1 = num_items_2 + 1
- num_fragments_1 could be equal to 0.
- This is the best way to create as many splits as possible, as similar in
- size as possible.
- '''
- if splits >= num_items:
- return ((0, 1), (splits, 1))
- num_fragments1 = num_items % splits
- num_fragments2 = splits - num_fragments1
- num_items2 = num_items // splits
- num_items1 = num_items2 + 1
- res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
- return res
-
-def _items_in_file(fhand, expression_kind, expression):
- '''Given an fhand and an expression it yields the items cutting where the
- line matches the expression'''
- sofar = fhand.readline()
- for line in fhand:
- if ((expression_kind == 'str' and expression in line) or
- (expression_kind != 'str' and expression.search(line))):
- yield sofar
- sofar = line
- else:
- sofar += line
- else:
- #the last item
- yield sofar
-
-def _create_file_splitter_with_re(expression):
- '''Given an expression it creates a file splitter.
-
- The expression can be a regex or an str.
- The item in the file will be defined every time a line matches the
- expression.
- '''
- expression_kind = None
- if isinstance(expression, str):
- expression_kind = 'str'
- else:
- expression_kind = 're'
- def splitter(file_, work_dirs):
- '''It splits the given file into several splits.
-
- Every split will be located in one of the work_dirs, although it is not
- guaranteed to create as many splits as work dirs. If in the file there
- are fewer items than work_dirs some work_dirs will be left empty.
- It returns a list with the fpaths or fhands for the split files.
- file_ can be an fhand or an fname.
- '''
- #the file_ can be an fname or an fhand. which one is it?
- file_is_str = None
- if isinstance(file_, str):
- fname = file_
- file_is_str = True
- else:
- fname = file_.name
- file_is_str = False
-
- #how many splits do we want?
- nsplits = len(work_dirs)
- #how many items are in the file? We assume that all files have the same
- #number of items
- nitems = 0
- for line in open(fname, 'r'):
- if ((expression_kind == 'str' and expression in line) or
- (expression_kind != 'str' and expression.search(line))):
- nitems += 1
-
- #how many splits are we going to create? and how many items will be in
- #every split
- #if there are more splits than items we create as many splits as items
- if nsplits > nitems:
- nsplits = nitems
- (nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
- nsplits)
- #we have to create nsplits1 files with nitems1 in it and nsplits2 files
- #with nitems2 items in it
- new_files = []
- fhand = open(fname, 'r')
- items = _items_in_file(fhand, expression_kind, expression)
- splits_made = 0
- for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
- #we have to create nsplits files with nitems in it
- #we don't need the split_index for anything
- #pylint: disable-msg=W0612
- for split_index in range(nsplits):
- suffix = os.path.splitext(fname)[-1]
- work_dir = work_dirs[splits_made]
- ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
- suffix=suffix)
- copy_file_mode(fhand.name, ofh.name)
- for item_index in range(nitems):
- ofh.write(items.next())
- ofh.flush()
- if file_is_str:
- new_files.append(ofh.name)
- ofh.close()
- else:
- new_files.append(ofh)
- splits_made += 1
- return new_files
- return splitter
-
-def _create_non_splitter_splitter(copy_files=False):
- '''It creates a splitter function that will not split the given file.
-
- The created splitter will create one file for every work_dir given. This
- file can be empty (useful for the output streams) or a copy of the given
- file (useful for the no_split input streams).
- '''
-
- def splitter(file_, work_dirs):
- '''It creates one output file for every splits.
-
- Every split will be located in one of the work_dirs.
- It returns a list with the fpaths for the new files.
- '''
- #the file_ can be an fname or an fhand. which one is it?
- file_is_str = None
- if isinstance(file_, str):
- fname = file_
- file_is_str = True
- else:
- fname = file_.name
- file_is_str = False
- #how many splits do we want?
- nsplits = len(work_dirs)
-
- new_fpaths = []
- #we have to create nsplits
- suffix = os.path.splitext(fname)[-1]
- for split_index in range(nsplits):
- work_dir = work_dirs[split_index]
- #we use delete=False because this temp file is in a temp dir that
- #will be completely deleted. If we use delete=True we get an error
- #because the file might be already deleted when its __del__ method
- #is called
- ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
- delete=False)
- if copy_files:
- os.remove(ofh.name)
- #i've tried with os.symlink but condor does not like it
- shutil.copyfile(fname, ofh.name)
- #the file will be deleted
- #what do we need the fname or the fhand?
- if file_is_str:
- new_fpaths.append(ofh.name)
- else:
- new_fpaths.append(ofh)
- return new_fpaths
- return splitter
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
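default_cat_joiner can also be exercised on its own; a small sketch with illustrative file names:

# create two part files and join them in order
for fname, text in (('part1.txt', 'a\n'), ('part2.txt', 'b\n')):
    part = open(fname, 'w')
    part.write(text)
    part.close()
default_cat_joiner('out.txt', ['part1.txt', 'part2.txt'])
assert open('out.txt').read() == 'a\nb\n'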
diff --git a/psubprocess/splitters.py b/psubprocess/splitters.py
new file mode 100644
index 0000000..5d9632c
--- /dev/null
+++ b/psubprocess/splitters.py
@@ -0,0 +1,224 @@
+'''
+Created on 03/12/2009
+
+@author: jose
+'''
+
+# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
+# This file is part of psubprocess.
+# psubprocess is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+
+# psubprocess is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
+
+import re, os, shutil
+from psubprocess.utils import NamedTemporaryFile, copy_file_mode
+from Bio.SeqIO.QualityIO import FastqGeneralIterator
+
+def _calculate_divisions(num_items, splits):
+ '''It calculates how many items should be in every split to divide
+ the num_items into splits.
+ Not all splits will have an equal number of items, it will return a tuple
+ with two tuples inside:
+ ((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
+ splits = num_fragments_1 + num_fragments_2
+ num_items_1 = num_items_2 + 1
+ num_fragments_1 could be equal to 0.
+ This is the best way to create as many splits as possible, as similar in
+ size as possible.
+ '''
+ if splits >= num_items:
+ return ((0, 1), (splits, 1))
+ num_fragments1 = num_items % splits
+ num_fragments2 = splits - num_fragments1
+ num_items2 = num_items // splits
+ num_items1 = num_items2 + 1
+ res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
+ return res
+
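+# A worked example of the arithmetic above (the numbers are illustrative):
+# with num_items=10 and splits=4, 10 % 4 = 2 splits get 10 // 4 + 1 = 3 items
+# each and the other 2 splits get 2 items each, so it returns ((2, 3), (2, 2)).
+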
+def _items_in_file(fhand, expression):
+ '''Given an fhand and an expression it yields the items cutting where the
+ line matches the expression'''
+ sofar = fhand.readline()
+ for line in fhand:
+ if expression.search(line):
+ yield sofar
+ sofar = line
+ else:
+ sofar += line
+ else:
+ #the last item
+ yield sofar
+
+def _re_item_counter(fhand, expression):
+ 'It counts how many times the expression is found in the file'
+ nitems = 0
+ for line in fhand:
+ if expression.search(line):
+ nitems += 1
+ return nitems
+
+def _items_in_fastq(fhand, expression=None):
+ 'It returns the fastq items'
+ for item in FastqGeneralIterator(fhand):
+ yield '@%s\n%s\n+\n%s\n' % (item)
+
+def _fastq_items_counter(fhand, expression=None):
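+ 'It counts the items in a fastq file'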
+ nitems = 0
+ for item in FastqGeneralIterator(fhand):
+ nitems += 1
+ return nitems
+
+def _create_file_splitter(kind, expression=None):
+ '''Given a kind ('re' or 'fastq') and an optional expression it creates a file splitter.
+
+ The expression can be a regex or an str.
+ The item in the file will be defined every time a line matches the
+ expression.
+ '''
+ item_counters = {'re': _re_item_counter,
+ 'fastq': _fastq_items_counter}
+ item_splitters = {'re':_items_in_file,
+ 'fastq':_items_in_fastq}
+
+ item_counter = item_counters[kind]
+ item_splitter = item_splitters[kind]
+
+ if expression is not None and isinstance(expression, str):
+ expression = re.compile(expression)
+
+ def splitter(file_, work_dirs):
+ '''It splits the given file into several splits.
+
+ Every split will be located in one of the work_dirs, although it is not
+ guaranteed to create as many splits as work dirs. If in the file there
+ are fewer items than work_dirs some work_dirs will be left empty.
+ It returns a list with the fpaths or fhands for the split files.
+ file_ can be an fhand or an fname.
+ '''
+ #the file_ can be an fname or an fhand. which one is it?
+ file_is_str = None
+ if isinstance(file_, str):
+ fname = file_
+ file_is_str = True
+ else:
+ fname = file_.name
+ file_is_str = False
+
+ #how many splits do we want?
+ nsplits = len(work_dirs)
+ #how many items are in the file? We assume that all files have the same
+ #number of items
+
+ fhand = open(fname, 'r')
+ nitems = item_counter(fhand, expression)
+
+ #how many splits are we going to create? and how many items will be in
+ #every split
+ #if there are more splits than items we create as many splits as items
+ if nsplits > nitems:
+ nsplits = nitems
+ (nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
+ nsplits)
+ #we have to create nsplits1 files with nitems1 in it and nsplits2 files
+ #with nitems2 items in it
+ new_files = []
+ fhand = open(fname, 'r')
+ items = item_splitter(fhand, expression)
+ splits_made = 0
+ for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
+ #we have to create nsplits files with nitems in it
+ #we don't need the split_index for anything
+ #pylint: disable-msg=W0612
+ for split_index in range(nsplits):
+ suffix = os.path.splitext(fname)[-1]
+ work_dir = work_dirs[splits_made]
+ ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
+ suffix=suffix)
+ copy_file_mode(fhand.name, ofh.name)
+ for item_index in range(nitems):
+ ofh.write(items.next())
+ ofh.flush()
+ #we have to close the files otherwise we can run out of file
+ #descriptors in the os
+ ofh.close()
+ if file_is_str:
+ new_files.append(ofh.name)
+ else:
+ new_files.append(ofh)
+ splits_made += 1
+ return new_files
+ return splitter
+
+fastq_splitter = _create_file_splitter(kind='fastq')
+
+def create_file_splitter_with_re(expression):
+ '''Given an expression it creates a file splitter.
+
+ The expression can be a regex or an str.
+ The item in the file will be defined everytime a line matches the
+ expression.
+ '''
+ return _create_file_splitter(kind='re', expression=expression)
+
+def create_non_splitter_splitter(copy_files=False):
+ '''It creates a splitter function that will not split the given file.
+
+ The created splitter will create one file for every work_dir given. This
+ file can be empty (useful for the output streams) or a copy of the given
+ file (useful for the no_split input streams).
+ '''
+
+ def splitter(file_, work_dirs):
+ '''It creates one output file for every split.
+
+ Every split will be located in one of the work_dirs.
+ It returns a list with the fpaths for the new files.
+ '''
+ #the file_ can be an fname or an fhand. which one is it?
+ file_is_str = None
+ if isinstance(file_, str):
+ fname = file_
+ file_is_str = True
+ else:
+ fname = file_.name
+ file_is_str = False
+ #how many splits do we want?
+ nsplits = len(work_dirs)
+
+ new_fpaths = []
+ #we have to create nsplits
+ suffix = os.path.splitext(fname)[-1]
+ for split_index in range(nsplits):
+ work_dir = work_dirs[split_index]
+ #we use delete=False because this temp file is in a temp dir that
+ #will be completely deleted. If we use delete=True we get an error
+ #because the file might be already deleted when its __del__ method
+ #is called
+ ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
+ delete=False)
+ if copy_files:
+ os.remove(ofh.name)
+ ofh_name = ofh.name
+ #we have to close the files otherwise we can run out of file
+ #descriptors in the os
+ ofh.close()
+ #i've tried with os.symlink but condor does not like it
+ shutil.copyfile(fname, ofh_name)
+
+ #the file will be deleted
+ #what do we need the fname or the fhand?
+ if file_is_str:
+ new_fpaths.append(ofh.name)
+ else:
+ new_fpaths.append(ofh)
+ return new_fpaths
+ return splitter
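As a quick illustration of the new splitter factory, a hedged sketch (the file name and its contents are illustrative):

from psubprocess.splitters import create_file_splitter_with_re
from psubprocess.utils import NamedTemporaryDir

fhand = open('seqs.fasta', 'w')
fhand.write('>a\nACTG\n>b\nGTCA\n')
fhand.close()
splitter = create_file_splitter_with_re(expression='^>')
work_dirs = [NamedTemporaryDir(), NamedTemporaryDir()]
parts = splitter('seqs.fasta', work_dirs)  # one new fpath per non-empty dir
assert len(parts) == 2
assert open(parts[0]).read() == '>a\nACTG\n'

fastq_splitter works the same way on fastq files, with the items delimited by Biopython's FastqGeneralIterator instead of a regular expression.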
diff --git a/psubprocess/utils.py b/psubprocess/utils.py
new file mode 100644
index 0000000..5058a4f
--- /dev/null
+++ b/psubprocess/utils.py
@@ -0,0 +1,61 @@
+'''
+Created on 03/12/2009
+
+@author: jose
+'''
+
+# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
+# This file is part of psubprocess.
+# psubprocess is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+
+# psubprocess is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
+
+import tempfile, os, shutil
+
+class NamedTemporaryDir(object):
+ '''This class creates temporary directories '''
+ #pylint: disable-msg=W0622
+ #we redefine the built-in dir because tempfile uses that interface
+ def __init__(self, dir=None):
+ '''It initiates the class.'''
+ self._name = tempfile.mkdtemp(dir=dir)
+ def get_name(self):
+ 'Returns the path to the dir'
+ return self._name
+ name = property(get_name)
+ def close(self):
+ '''It removes the temp dir'''
+ if os.path.exists(self._name):
+ shutil.rmtree(self._name)
+ def __del__(self):
+ '''It removes the temp dir when instance is removed and the garbage
+ collector decides it'''
+ self.close()
+
+def NamedTemporaryFile(dir=None, delete=False, suffix=''):
+ '''It creates a temporary file that won't be deleted when closed
+
+ This behaviour can be done with tempfile.NamedTemporaryFile in python > 2.6
+ '''
+ #pylint: disable-msg=W0613
+ #delete is not being used, it's there as a reminder, once we start to use
+ #python 2.6 this function should be removed
+ #pylint: disable-msg=C0103
+ #pylint: disable-msg=W0622
+ #We want to mimic tempfile.NamedTemporaryFile
+ fpath = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
+ return open(fpath, 'w')
+
+def copy_file_mode(fpath1, fpath2):
+ 'It copies the os.stats mode from file1 to file2'
+ mode = os.stat(fpath1)[0]
+ os.chmod(fpath2, mode)
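The helpers in utils.py are also usable on their own; a brief hypothetical sketch:

from psubprocess.utils import (NamedTemporaryDir, NamedTemporaryFile,
                               copy_file_mode)

tmp_dir = NamedTemporaryDir()
fhand = NamedTemporaryFile(dir=tmp_dir.name, suffix='.txt')
fhand.write('hello')
fhand.close()                          # the file survives the close
copy_file_mode('/bin/sh', fhand.name)  # copies /bin/sh's mode bits over
tmp_dir.close()                        # removes the dir and everything in it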
diff --git a/test/prunner_test.py b/test/prunner_test.py
index 6a954ab..82eb6d6 100644
--- a/test/prunner_test.py
+++ b/test/prunner_test.py
@@ -1,268 +1,268 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
import os
from psubprocess import Popen
from psubprocess.streams import STDIN
from test_utils import create_test_binary
class PRunnerTest(unittest.TestCase):
'It tests that we can parallelize processes'
@staticmethod
def test_file_in():
'It tests the most basic behaviour'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and looks at the retcode
assert open(stdout.name).read() == 'hola'
in_file.close()
os.remove(bin)
@staticmethod
def test_job_no_in_stream():
'It tests that a job with no in stream is run splits times'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till it finishes and looks at the retcode
assert open(stdout.name).read() == 'hola' * splits
assert open(stderr.name).read() == 'caracola' * splits
os.remove(bin)
@staticmethod
def test_stdin():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and looks at the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_infile_outfile():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and looks at the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_retcode():
'It tests that we get the correct returncode'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-r', '20'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
assert popen.wait() == 20 #waits till it finishes and looks at the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def xtest_infile_outfile_condor():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
from psubprocess import CondorPopen
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
runner=CondorPopen,
runner_conf={'transfer_executable':True})
assert popen.wait() == 0 #waits till it finishes and looks at the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_stdin_real_splitter():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = '>hola1\nhola2\n>hola3\nhola4\n>hola5\nhola6\n>hola7\nhola8\n'
content += '>hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':'>'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and looks at the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_2_infile_outfile():
'It tests that we can set 2 input files and an output file'
bin = create_test_binary()
#with infile
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file1 = NamedTemporaryFile()
in_file1.write(content)
in_file1.flush()
in_file2 = NamedTemporaryFile()
in_file2.write(content)
in_file2.flush()
out_file1 = NamedTemporaryFile()
out_file2 = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file1.name, '-t', out_file1.name])
cmd.extend(['-x', in_file2.name, '-z', out_file2.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-x', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'},
{'options': ('-z', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and looks at the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file1.name).read() == content
assert open(out_file2.name).read() == content
in_file1.close()
in_file2.close()
os.remove(bin)
@staticmethod
def test_kill_subjobs():
'It tests that we can kill the subjobs'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-w'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
assert popen.returncode is None
popen.kill()
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def test_nosplit():
'It tests that we can set some input files to be not split'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in',
'special':['no_split']},
{'options': ('-t', '--output'), 'io': 'out'}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till it finishes and looks at the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content * splits
in_file.close()
os.remove(bin)
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
- unittest.main()
\ No newline at end of file
+ unittest.main()
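The tests above all share one pattern; distilled into a hedged sketch, where the binary name and file names are hypothetical and the cmd_def keys are the ones used throughout the tests:

from psubprocess import Popen

cmd = ['my_tool', '-i', 'in.txt', '-t', 'out.txt']   # hypothetical binary
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter': ''},
           {'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, cmd_def=cmd_def, splits=4)
assert popen.wait() == 0   # in.txt is split, 4 subjobs run, out.txt is joined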
diff --git a/test/splitter_test.py b/test/splitter_test.py
new file mode 100644
index 0000000..c3ad2a6
--- /dev/null
+++ b/test/splitter_test.py
@@ -0,0 +1,73 @@
+'''
+Created on 03/12/2009
+
+@author: jose
+'''
+
+# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
+# This file is part of psubprocess.
+# psubprocess is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+
+# psubprocess is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
+
+import unittest
+from tempfile import NamedTemporaryFile
+from psubprocess.prunner import NamedTemporaryDir
+from psubprocess.splitters import create_file_splitter_with_re, fastq_splitter
+
+class SplitterTest(unittest.TestCase):
+ 'It tests that we can split the input files'
+
+ @staticmethod
+ def test_re_splitter():
+ 'It tests the general regular expression based splitter'
+ fastq = '@seq1\nACTG\n+\nmoco\n@seq2\nGTCA\n+\nhola\n'
+ file_ = NamedTemporaryFile()
+ file_.write(fastq)
+ file_.flush()
+
+ splitter = create_file_splitter_with_re(expression='^@')
+ dir1 = NamedTemporaryDir()
+ dir2 = NamedTemporaryDir()
+ dir3 = NamedTemporaryDir()
+ new_files = splitter(file_, [dir1, dir2, dir3])
+ assert len(new_files) == 2
+ assert open(new_files[0].name).read() == '@seq1\nACTG\n+\nmoco\n'
+ assert open(new_files[1].name).read() == '@seq2\nGTCA\n+\nhola\n'
+ dir1.close()
+ dir2.close()
+ dir3.close()
+
+ @staticmethod
+ def test_fastq_splitter():
+ 'It tests the fastq splitter'
+ fastq = '@seq1\nACTG\n+\nmoco\n@seq2\nGTCA\n+\nhola\n'
+ file_ = NamedTemporaryFile()
+ file_.write(fastq)
+ file_.flush()
+
+ splitter = fastq_splitter
+ dir1 = NamedTemporaryDir()
+ dir2 = NamedTemporaryDir()
+ dir3 = NamedTemporaryDir()
+ new_files = splitter(file_, [dir1, dir2, dir3])
+ assert len(new_files) == 2
+ assert open(new_files[0].name).read() == '@seq1\nACTG\n+\nmoco\n'
+ assert open(new_files[1].name).read() == '@seq2\nGTCA\n+\nhola\n'
+ dir1.close()
+ dir2.close()
+ dir3.close()
+
+
+if __name__ == "__main__":
+ #import sys;sys.argv = ['', 'Test.testName']
+ unittest.main()
\ No newline at end of file
|
JoseBlanca/psubprocess
|
5330f5004ac023961cf0de40e3dc5abe168b742d
|
simplyfied call function
|
diff --git a/psubprocess/condor_runner.py b/psubprocess/condor_runner.py
index 6c45507..2a280f2 100644
--- a/psubprocess/condor_runner.py
+++ b/psubprocess/condor_runner.py
@@ -1,339 +1,340 @@
'''The main aim of this module is to provide an easy way to launch condor jobs.
Condor is a specialized workload management system for compute-intensive jobs.
Like other full-featured batch systems, Condor provides a job queueing
mechanism, scheduling policy, priority scheme, resource monitoring, and
resource management. More on condor on its web site:
http://www.cs.wisc.edu/condor/
The interface used is similar to the subprocess.Popen one.
Besides the standard parameters like cmd, stdout, stderr, and stdin, this condor
Popen takes a couple of extra parameters cmd_def and runner_conf. The cmd_def
syntax is explained in the streams.py file. Condor Popen needs the cmd_def to
be able to get from the cmd which are the input and output files. The input
files should be specified in the condor job file, in the case that we want
to transfer them to the computing nodes. Besides, the input and output files
in the cmd should have no paths, otherwise the command would fail on the other
machines. That's why we need cmd_def.
Created on 14/07/2009
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from tempfile import NamedTemporaryFile
import subprocess, signal, os.path
from psubprocess.streams import get_streams_from_cmd
-def call(cmd, env=None, stdin=None):
+def call(cmd):
'It calls a command and it returns stdout, stderr and retcode'
def subprocess_setup():
''' Python installs a SIGPIPE handler by default. This is usually not
what non-Python subprocesses expect. Taken from this url:
http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009/07/02#
2009-07-02-python-sigpipe'''
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
- if stdin is None:
- pstdin = None
- else:
- pstdin = subprocess.PIPE
-
+# if stdin is None:
+# pstdin = None
+# else:
+# pstdin = subprocess.PIPE
process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
- stderr=subprocess.PIPE, env=env, stdin=pstdin,
+ stderr=subprocess.PIPE,
preexec_fn=subprocess_setup)
- if stdin is None:
- stdout, stderr = process.communicate()
- else:
+# if stdin is None:
+# stdout, stderr = process.communicate()
+# else:
# a = stdin.read()
# print a
# stdout, stderr = subprocess.Popen.stdin = stdin
# print stdin.read()
- stdout, stderr = process.communicate(stdin)
+# stdout, stderr = process.communicate(stdin)
+ stdout, stderr = process.communicate()
retcode = process.returncode
return stdout, stderr, retcode
def write_condor_job_file(fhand, parameters):
'It writes a condor job file using the given fhand'
to_print = 'Executable = %s\nArguments = "%s"\nUniverse = vanilla\n' % \
(parameters['executable'], parameters['arguments'])
fhand.write(to_print)
to_print = 'Log = %s\n' % parameters['log_file'].name
fhand.write(to_print)
if parameters['transfer_files']:
to_print = 'When_to_transfer_output = ON_EXIT\n'
fhand.write(to_print)
to_print = 'Getenv = True\n'
fhand.write(to_print)
if ('transfer_executable' in parameters and
parameters['transfer_executable']):
to_print = 'Transfer_executable = %s\n' % \
parameters['transfer_executable']
fhand.write(to_print)
if 'input_fnames' in parameters and parameters['input_fnames']:
ins = ','.join(parameters['input_fnames'])
to_print = 'Transfer_input_files = %s\n' % ins
fhand.write(to_print)
if parameters['transfer_files']:
to_print = 'Should_transfer_files = IF_NEEDED\n'
fhand.write(to_print)
if 'requirements' in parameters:
to_print = "Requirements = %s\n" % parameters['requirements']
fhand.write(to_print)
if 'stdout' in parameters:
to_print = 'Output = %s\n' % parameters['stdout'].name
fhand.write(to_print)
if 'stderr' in parameters:
to_print = 'Error = %s\n' % parameters['stderr'].name
fhand.write(to_print)
if 'stdin' in parameters:
to_print = 'Input = %s\n' % parameters['stdin'].name
fhand.write(to_print)
to_print = 'Queue\n'
fhand.write(to_print)
fhand.flush()
class Popen(object):
'''It launches and controls a condor job.
The job is launched when an instance is created. After that we can get the
cluster id with the pid property. The rest of the interface is very similar
to the subprocess.Popen one. There's no communicate method because there's
no support for PIPE.
'''
def __init__(self, cmd, cmd_def=None, runner_conf=None, stdout=None,
stderr=None, stdin=None):
'''It launches a condor job.
The interface is similar to the subprocess.Popen one, although there are
some differences.
stdout, stdin and stderr should be file handlers, there's no support for
PIPEs. The extra parameter cmd_def is required if we need to transfer
the input and output files to the computing nodes of the cluster using
the condor file transfer mechanism. The cmd_def syntax is explained in
the streams.py file.
runner_conf is a dict that admits several parameters that control how
condor is run:
- transfer_files: do we want to transfer the files using the condor
transfer file mechanism? (default True)
- condor_log: the condor log file. If it's not given Popen will
create a condor log file in the tempdir.
- transfer_executable: do we want to transfer the executable?
(default False)
- requirements: The requirements line for the condor job file.
(default None)
'''
#we use the same parameters as subprocess.Popen
#pylint: disable-msg=R0913
if cmd_def is None:
cmd_def = []
#runner conf
if runner_conf is None:
runner_conf = {}
#some defaults
if 'transfer_files' not in runner_conf:
runner_conf['transfer_files'] = True
if 'condor_log' not in runner_conf:
self._log_file = NamedTemporaryFile(suffix='.log')
else:
self._log_file = runner_conf['condor_log']
#create condor job file
condor_job_file = self._create_condor_job_file(cmd, cmd_def,
self._log_file,
runner_conf,
stdout, stderr, stdin)
self._condor_job_file = condor_job_file
#print open(condor_job_file.name).read()
#launch condor
self._retcode = None
self._cluster_number = None
self._launch_condor(condor_job_file)
def _launch_condor(self, condor_job_file):
'Given the condor_job_file it launches the condor job'
try:
- stdout, stderr, retcode = call(['condor_submit', condor_job_file.name])
- except OSError:
- raise OSError('condor_submit not found in your path')
+ stdout, stderr, retcode = call(['condor_submit',
+ condor_job_file.name])
+ except OSError, msg:
+ raise OSError('condor_submit not found in your path.' + str(msg))
if retcode:
msg = 'There was a problem with condor_submit: ' + stderr
raise RuntimeError(msg)
#the condor cluster number is given by condor_submit
#1 job(s) submitted to cluster 15.
for line in stdout.splitlines():
if 'submitted to cluster' in line:
self._cluster_number = line.strip().strip('.').split()[-1]
def _get_pid(self):
'It returns the condor cluster number'
return self._cluster_number
pid = property(_get_pid)
def _get_returncode(self):
'It returns the return code'
return self._retcode
returncode = property(_get_returncode)
@staticmethod
def _remove_paths_from_cmd(cmd, streams, conf):
'''It removes the absolute and relative paths from the cmd,
it returns the modified cmd'''
cmd_mod = cmd[:]
for stream in streams:
if 'fname' not in stream:
continue
fpath = stream['fname']
            #for the output files we can't deal with transferring files with
            #paths. Condor will deliver those files into the initialdir, not
            #where we expect them.
if (stream['io'] != 'in' and conf['transfer_files']
and os.path.split(fpath)[-1] != fpath):
msg = 'output files with paths are not transferable'
raise ValueError(msg)
index = cmd_mod.index(fpath)
fpath = os.path.split(fpath)[-1]
cmd_mod[index] = fpath
return cmd_mod
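    #For instance (a sketch with hypothetical names): the cmd
    #['my_tool', '-i', '/tmp/in.txt'] with an 'in' stream whose fname is
    #'/tmp/in.txt' would become ['my_tool', '-i', 'in.txt']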
def _create_condor_job_file(self, cmd, cmd_def, log_file, runner_conf,
stdout, stderr, stdin):
'Given a cmd and the cmd_def it returns the condor job file'
#streams
streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
#we need some parameters to write the condor file
parameters = {}
#the executable
binary = cmd[0]
#the binary should be an absolute path
if not os.path.isabs(binary):
#the path to the binary could be relative
if os.sep in binary:
#we make the path absolute
binary = os.path.abspath(binary)
else:
#we have to look in the system $PATH
binary = call(['which', binary])[0].strip()
parameters['executable'] = binary
parameters['log_file'] = log_file
        #the cmd shouldn't have absolute paths in the files because they will
        #be transferred to another node in the condor working dir and they
        #wouldn't be found with an absolute path
cmd_no_path = self._remove_paths_from_cmd(cmd, streams, runner_conf)
parameters['arguments'] = ' '.join(cmd_no_path[1:])
if stdout is not None:
parameters['stdout'] = stdout
if stderr is not None:
parameters['stderr'] = stderr
if stdin is not None:
parameters['stdin'] = stdin
transfer_bin = False
if 'transfer_executable' in runner_conf:
transfer_bin = runner_conf['transfer_executable']
parameters['transfer_executable'] = transfer_bin
        transfer_files = runner_conf['transfer_files']
        parameters['transfer_files'] = transfer_files
if 'requirements' in runner_conf:
parameters['requirements'] = runner_conf['requirements']
in_fnames = []
for stream in streams:
if stream['io'] == 'in':
fname = None
if 'fname' in stream:
fname = stream['fname']
else:
fname = stream['fhand'].name
in_fnames.append(fname)
parameters['input_fnames'] = in_fnames
#now we can create the job file
condor_job_file = NamedTemporaryFile()
write_condor_job_file(condor_job_file, parameters=parameters)
return condor_job_file
def _update_retcode(self):
        'It updates the retcode by looking at the log file and returns it'
for line in open(self._log_file.name):
if 'return value' in line:
ret = line.split('return value')[1].strip().strip(')')
self._retcode = int(ret)
return self._retcode
def poll(self):
        'It checks if condor has run our condor cluster'
cluster_number = self._cluster_number
cmd = ['condor_q', cluster_number,
'-format', '"%d.\n"', 'ClusterId']
stdout, stderr, retcode = call(cmd)
if retcode:
msg = 'There was a problem with condor_q: ' + stderr
raise RuntimeError(msg)
if cluster_number not in stdout:
#the job is finished
return self._update_retcode()
return self._retcode
def wait(self):
'It waits until the condor job is finished'
try:
stderr, retcode = call(['condor_wait', self._log_file.name])[1:]
except OSError:
raise OSError('condor_wait not found in your path')
if retcode:
msg = 'There was a problem with condor_wait: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def kill(self):
'It runs condor_rm for the condor job'
try:
stderr, retcode = call(['condor_rm', self.pid])[1:]
except OSError:
raise OSError('condor_rm not found in your path')
if retcode:
msg = 'There was a problem with condor_rm: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def terminate(self):
'It runs condor_rm for the condor job'
self.kill()
def get_default_splits():
'It returns a suggested number of splits for this Popen runner'
try:
stdout, stderr, retcode = call(['condor_status', '-total'])
except OSError:
raise OSError('condor_status not found in your path')
if retcode:
msg = 'There was a problem with condor_status: ' + stderr
raise RuntimeError(msg)
for line in stdout.splitlines():
line = line.strip().lower()
if line.startswith('total') and 'owner' not in line:
return int(line.split()[1]) * 2
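#condor_status -total ends with a summary table; given a (hypothetical) line
#like 'Total  16  0  12  4  0  0  0' this function would suggest 16 * 2 = 32
#splits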
|
JoseBlanca/psubprocess
|
33450c2f9a37b61dca78a31c34a8f052ce5c45ec
|
add #!/usr/bin/env python to the scripts
|
diff --git a/scripts/run_in_parallel.py b/scripts/run_in_parallel.py
index a1aa3ce..ed0e798 100644
--- a/scripts/run_in_parallel.py
+++ b/scripts/run_in_parallel.py
@@ -1,117 +1,118 @@
+#!/usr/bin/env python
'''This script allows the easy parallelization of command line utilities.
If you have a command that processes a file with a set of items it would be
quite easy to run it in a parallel environment using this script. The file
will be divided into equally sized subjobs, these subjobs will be run in
parallel and once completed the output files will be generated as if the
original command had been run.
The subjobs can be run in one node with several processors or in a cluster
with several nodes using condor.
'''
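#An illustrative invocation (the command, file names and cmd_def are
#hypothetical):
#    run_in_parallel.py -n 8 -r condor -o out.txt -e err.txt \
#        -c "./my_tool -i items.txt -t result.txt" \
#        -d "[{'options': ('-i',), 'io': 'in', 'splitter': ''},
#             {'options': ('-t',), 'io': 'out'}]"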
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from optparse import OptionParser
import os.path, sys, signal
from psubprocess import CondorPopen, Popen
POPEN = None
def parse_options():
'It parses the command line arguments'
parser = OptionParser('usage: %prog -c "command"')
parser.add_option('-n', '--nsplits', dest='splits',
help='number of subjobs to create')
parser.add_option('-r', '--runner', dest='runner', default='subprocess',
help='who should run the subjobs (subprocess or condor)')
parser.add_option('-c', '--command', dest='command',
help='The command to run')
parser.add_option('-o', '--stdout', dest='stdout',
help='A file to store the stdout')
parser.add_option('-e', '--stderr', dest='stderr',
help='A file to store the stderr')
parser.add_option('-i', '--stdin', dest='stdin',
help='A file to store the stdin')
parser.add_option('-d', '--cmd_def', dest='cmd_def',
help='The command line definition')
parser.add_option('-q', '--runner_req', dest='runner_req',
help='runner requirements')
return parser
def get_options():
'It returns a dict with the options'
parser = parse_options()
cmd_options = parser.parse_args()[0]
options = {}
if cmd_options.command is None:
        parser.error('The command should be set')
else:
options['cmd'] = cmd_options.command.split()
if cmd_options.stdout is not None:
options['stdout'] = open(cmd_options.stdout, 'w')
if cmd_options.stderr is not None:
options['stderr'] = open(cmd_options.stderr, 'w')
if cmd_options.stdin is not None:
options['stdin'] = open(cmd_options.stdin)
if cmd_options.runner == 'subprocess':
options['runner'] = None
elif cmd_options.runner == 'condor':
runner_conf = {}
runner_conf['transfer_executable'] = False
if cmd_options.runner_req is not None:
runner_conf['requirements'] = cmd_options.runner_req
options['runner_conf'] = runner_conf
options['runner'] = CondorPopen
else:
parser.error('Allowable runners are: subprocess and condor')
if cmd_options.cmd_def is None:
options['cmd_def'] = []
else:
cmd_def = cmd_options.cmd_def
#it can be a file or an str
if os.path.exists(cmd_def):
cmd_def = open(cmd_def).read()
cmd_def = eval(cmd_def)
if not isinstance(cmd_def, list):
msg = 'cmd_def should be a list of dicts, read the documentation'
parser.error(msg)
options['cmd_def'] = cmd_def
return options
def kill_processes():
'It kills the ongoing process'
if POPEN is not None:
POPEN.kill()
sys.exit(-1)
def set_signal_handlers():
'It sets the SIGTERM and SIGKILL signals'
signal.signal(signal.SIGTERM, kill_processes)
signal.signal(signal.SIGABRT, kill_processes)
signal.signal(signal.SIGINT, kill_processes)
def main():
'It runs a command in parallel'
set_signal_handlers()
options = get_options()
global POPEN
POPEN = Popen(**options)
sys.exit(POPEN.wait())
if __name__ == '__main__':
main()
\ No newline at end of file
diff --git a/scripts/run_with_condor.py b/scripts/run_with_condor.py
index 306b680..648486b 100644
--- a/scripts/run_with_condor.py
+++ b/scripts/run_with_condor.py
@@ -1,105 +1,106 @@
+#!/usr/bin/env python
'''This script eases the running of a job in a condor environment.
The condor job file will be created for you.
'''
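#An illustrative invocation (the command, file names and requirements are
#hypothetical):
#    run_with_condor.py -c "./my_tool -i in.txt" -o out.txt -e err.txt \
#        -l condor.log -q 'Memory >= 1024'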
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from optparse import OptionParser
import os.path, sys, signal
from psubprocess import CondorPopen
POPEN = None
def parse_options():
'It parses the command line arguments'
parser = OptionParser('usage: %prog -c "command"')
parser.add_option('-c', '--command', dest='command',
help='The command to run')
parser.add_option('-o', '--stdout', dest='stdout',
help='A file to store the stdout')
parser.add_option('-e', '--stderr', dest='stderr',
help='A file to store the stderr')
parser.add_option('-i', '--stdin', dest='stdin',
help='A file to store the stdin')
parser.add_option('-d', '--cmd_def', dest='cmd_def',
help='The command line definition')
parser.add_option('-l', '--log', dest='condor_log',
help='The log file')
parser.add_option('-q', '--condor_req', dest='runner_req',
                      help='condor requirements for the job')
return parser
def get_options():
'It returns a dict with the options'
parser = parse_options()
cmd_options = parser.parse_args()[0]
options = {}
if cmd_options.command is None:
        parser.error('The command should be set')
else:
options['cmd'] = cmd_options.command.split()
if cmd_options.stdout is not None:
options['stdout'] = open(cmd_options.stdout, 'w')
if cmd_options.stderr is not None:
options['stderr'] = open(cmd_options.stderr, 'w')
if cmd_options.stdin is not None:
options['stdin'] = open(cmd_options.stdin)
if cmd_options.cmd_def is None:
options['cmd_def'] = []
else:
cmd_def = cmd_options.cmd_def
#it can be a file or an str
if os.path.exists(cmd_def):
cmd_def = open(cmd_def).read()
cmd_def = eval(cmd_def)
if not isinstance(cmd_def, list):
msg = 'cmd_def should be a list of dicts, read the documentation'
parser.error(msg)
options['cmd_def'] = cmd_def
runner_conf = {}
if cmd_options.condor_log is not None:
condor_log = open(cmd_options.condor_log, 'w')
runner_conf['condor_log'] = condor_log
runner_conf['transfer_executable'] = False
options['runner_conf'] = runner_conf
return options
def kill_processes():
'It kills the ongoing process'
if POPEN is not None:
POPEN.kill()
sys.exit(-1)
def set_signal_handlers():
'It sets the SIGTERM and SIGKILL signals'
signal.signal(signal.SIGTERM, kill_processes)
signal.signal(signal.SIGABRT, kill_processes)
signal.signal(signal.SIGINT, kill_processes)
def main():
'It runs a command in a condor cluster'
set_signal_handlers()
options = get_options()
global POPEN
POPEN = CondorPopen(**options)
sys.exit(POPEN.wait())
if __name__ == '__main__':
main()
\ No newline at end of file
|
JoseBlanca/psubprocess
|
14b9d61c9050d126e9c23eba7b01bdcb7e7f74ff
|
bug fix. condor does not like os.symlink
|
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index 8ecacbe..5601d52 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -125,549 +125,550 @@ class Popen(object):
stderr -- a fhand to store the stderr (default None)
stdin -- a fhand with the stdin (default None)
splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
if cmd_def is None:
if stdin is not None:
raise ValueError('No cmd_def given but stdin present')
cmd_def = []
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
copy_file_mode('.', self._work_dir.name)
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
        #launch every subjob
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
            #we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
        '''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
        '''Given a base cmd and a streams list it creates one modified cmd for
        every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
                #is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname and no path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream in the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
dir_ = NamedTemporaryDir(dir=work_dir)
work_dirs.append(dir_)
copy_file_mode('.', dir_.name)
        #we have to do the input files first because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def do_we_have_to_split(stream_index):
'If the stream has to split a file it will return True'
split = None
stream = streams[stream_index]
#maybe they shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
split = False
            #maybe there is no file to split
            elif (('fhand' in stream and stream['fhand'] is None) or
('fname' in stream and stream['fname'] is None) or
('fname' not in stream and 'fhand' not in stream)):
split = False
elif (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
split = True
return split
def to_be_split_first(stream1, stream2):
'It sorts the streams, the ones to be split go first'
split1 = do_we_have_to_split(stream1)
split2 = do_we_have_to_split(stream2)
            return int(split2) - int(split1)
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
splitter = None
if 'special' in stream and 'no_split' in stream['special']:
splitter = _create_non_splitter_splitter(copy_files=True)
elif 'splitter' not in stream:
                msg = 'A splitter should be provided for every input stream'
                msg += ', missing for: ' + str(stream)
raise ValueError(msg)
else:
splitter = stream['splitter']
            #the splitter can be a re, in that case we create the function
if '__call__' not in dir(splitter):
splitter = _create_file_splitter_with_re(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
                #the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different than splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
        #we split the output stream files into several splits
output_splitter = _create_non_splitter_splitter(copy_files=False)
for index in output_stream_indexes:
stream = streams[index]
            #for the output we just create the new names, but we don't split
            #any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every in stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
        'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
joiner = None
            if 'joiner' in stream:
joiner = stream['joiner']
else:
joiner = default_cat_joiner
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
            joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
                #if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
            #until 2.6 subprocess.Popen does not support kill
if 'kill' in dir(popen):
popen.kill()
else:
pid = popen.pid
call(['kill', '-9', str(pid)])
def terminate(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
            #until 2.6 subprocess.Popen does not support terminate
if 'terminate' in dir(popen):
popen.terminate()
else:
pid = popen.pid
call(['kill', '-6', str(pid)])
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
    This creates splits whose sizes are as similar as possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
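#A worked example: _calculate_divisions(10, 4) returns ((2, 3), (2, 2)), that
#is, 2 splits with 3 items plus 2 splits with 2 items: 2 * 3 + 2 * 2 == 10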
def _items_in_file(fhand, expression_kind, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if ((expression_kind == 'str' and expression in line) or
(expression_kind != 'str' and expression.search(line))):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
def _create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or an str.
    The item in the file will be defined every time a line matches the
expression.
'''
expression_kind = None
if isinstance(expression, str):
expression_kind = 'str'
else:
expression_kind = 're'
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
guaranteed to create as many splits as work dirs. If in the file there
        are fewer items than work_dirs some work_dirs will be left empty.
        It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
nitems = 0
for line in open(fname, 'r'):
if ((expression_kind == 'str' and expression in line) or
(expression_kind != 'str' and expression.search(line))):
nitems += 1
        #how many splits are we going to create? and how many items will be in
#every split
#if there are more items than splits we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = _items_in_file(fhand, expression_kind, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
copy_file_mode(fhand.name, ofh.name)
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
if file_is_str:
new_files.append(ofh.name)
ofh.close()
else:
new_files.append(ofh)
splits_made += 1
return new_files
return splitter
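#For instance (a sketch with hypothetical names), a fasta-like file can be
#split on its '>' header lines:
#    splitter = _create_file_splitter_with_re('>')
#    new_files = splitter('seqs.fasta', work_dirs)
#With 10 sequences and 4 work_dirs it would create 2 files with 3 sequences
#and 2 files with 2 sequences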
def _create_non_splitter_splitter(copy_files=False):
    '''It creates a splitter function that will not split the given file.
    The created splitter will create one file for every work_dir given. This
    file can be empty (useful for the output streams) or a copy of the given
    file (useful for the no_split input streams).
'''
def splitter(file_, work_dirs):
'''It creates one output file for every splits.
Every split will be located in one of the work_dirs.
It returns a list with the fpaths for the new files.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
new_fpaths = []
#we have to create nsplits
suffix = os.path.splitext(fname)[-1]
for split_index in range(nsplits):
work_dir = work_dirs[split_index]
#we use delete=False because this temp file is in a temp dir that
#will be completely deleted. If we use delete=True we get an error
#because the file might be already deleted when its __del__ method
#is called
ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
delete=False)
if copy_files:
os.remove(ofh.name)
- os.symlink(fname, ofh.name)
+ #i've tried with os.symlink but condor does not like it
+ shutil.copyfile(fname, ofh.name)
#the file will be deleted
#what do we need the fname or the fhand?
if file_is_str:
new_fpaths.append(ofh.name)
else:
new_fpaths.append(ofh)
return new_fpaths
return splitter
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
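#For instance (a sketch with hypothetical names):
#default_cat_joiner('out.txt', ['part1.txt', 'part2.txt']) concatenates both
#parts into out.txt, like cat would do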
|
JoseBlanca/psubprocess
|
f4bc17ccff03a33070022007e616f7dd95c01fa3
|
the streams with no_split are not split now
|
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index 7412f7c..8ecacbe 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,658 +1,673 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
This module is useful when we have a non-parallel command to run in a
multiprocessor computer or in a multinode cluster. It will take the input files,
it will split them and it will run a subjob for everyone of the splits. It will
wait for the subjobs to finish and it will join the output files generated
by all subjobs. At the end of the process we will get the same output files as if
the command wasn't run in parallel.
This approach will work with commands that process a lot of items. This module
divides the items in several sets and it assigns each of these sets to one new
subjob. These are the subjobs that will be run in parallel.
To do it requires the parameters used by popen: cmd, stdin, stdout, stderr and
some extra information: runner, splits and cmd_def.
runner is optional and it should be a subprocess.Popen like class. If it's not
given subprocess.Popen will be used. This Popen will be the class used to run the
subjobs. If subprocess.Popen is used the subjobs will run in the processors of
the local node on several independent processes. If the Condor Popen is used
the subjobs will run in a condor cluster.
splits is the number of subjobs that we want to generate. If it's not given the
runner will provide a suitable number.
cmd_def is a dict that defines how the cmd defines the input and output files.
We need to tell Popen which are the input and output files in order to split
them and join them. The syntax for cmd_def is explained in the streams.py module
'''
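#An illustrative use (the command and file names are hypothetical):
#    cmd = ['./my_tool', '-i', 'items.txt', '-t', 'result.txt']
#    cmd_def = [{'options': ('-i',), 'io': 'in', 'splitter': ''},
#               {'options': ('-t',), 'io': 'out'}]
#    popen = Popen(cmd, cmd_def=cmd_def, splits=4)
#    retcode = popen.wait()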
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
import os, tempfile, shutil, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
from psubprocess.condor_runner import call
from psubprocess import condor_runner
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
class NamedTemporaryDir(object):
'''This class creates temporary directories '''
#pylint: disable-msg=W0622
    #we redefine the built-in dir because tempfile uses that interface
def __init__(self, dir=None):
'''It initiates the class.'''
self._name = tempfile.mkdtemp(dir=dir)
def get_name(self):
        'Returns the path to the dir'
return self._name
name = property(get_name)
def close(self):
'''It removes the temp dir'''
if os.path.exists(self._name):
shutil.rmtree(self._name)
def __del__(self):
        '''It removes the temp dir when the instance is removed and the
        garbage collector decides it'''
self.close()
def NamedTemporaryFile(dir=None, delete=False, suffix=''):
    '''It creates a temporary file that won't be deleted when closed.
This behaviour can be done with tempfile.NamedTemporaryFile in python > 2.6
'''
#pylint: disable-msg=W0613
#delete is not being used, it's there as a reminder, once we start to use
#python 2.6 this function should be removed
#pylint: disable-msg=C0103
#pylint: disable-msg=W0622
    #We want to mimic tempfile.NamedTemporaryFile
fpath = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
return open(fpath, 'w')
def copy_file_mode(fpath1, fpath2):
'It copies the os.stats mode from file1 to file2'
mode = os.stat(fpath1)[0]
os.chmod(fpath2, mode)
class Popen(object):
    '''It parallelizes the given processes dividing them into subprocesses.
    The interface is similar to subprocess.Popen to ease the use of this class,
    although the functionality of this class is much more limited.
When an instance of this class is created a series of subjobs is launched.
When all subjobs are finished returncode will have an int, if they're still
running returncode will be None.
    We can wait for all subjobs to finish using the wait method or we can
kill or terminate them using kill and terminate.
'''
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
        '''It inits a Popen instance, it creates and runs the subjobs.
Like the subprocess.Popen it accepts stdin, stdout, stderr, but in this
case all of them should be files, PIPE will not work.
In the cmd_def list we have to tell this Popen how to locate the
input and output files in the cmd and how to split and join them. Look
for the cmd_format in the streams.py file.
keyword arguments:
cmd -- a list with the cmd to parallelize
cmd_def -- the cmd definition list (default [])
runner -- which runner to use (default subprocess.Popen)
runner_conf -- extra parameters for the runner (default {})
stdout -- a fhand to store the stdout (default None)
stderr -- a fhand to store the stderr (default None)
stdin -- a fhand with the stdin (default None)
splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
if cmd_def is None:
if stdin is not None:
raise ValueError('No cmd_def given but stdin present')
cmd_def = []
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
copy_file_mode('.', self._work_dir.name)
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
        #launch every subjob
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
            #we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
        '''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
        '''Given a base cmd and a streams list it creates one modified cmd for
        every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
                #is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname and no path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream in the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
dir_ = NamedTemporaryDir(dir=work_dir)
work_dirs.append(dir_)
copy_file_mode('.', dir_.name)
        #we have to do the input files first because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def do_we_have_to_split(stream_index):
'If the stream has to split a file it will return True'
split = None
stream = streams[stream_index]
#maybe they shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
split = False
            #maybe there is no file to split
            elif (('fhand' in stream and stream['fhand'] is None) or
('fname' in stream and stream['fname'] is None) or
('fname' not in stream and 'fhand' not in stream)):
split = False
elif (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
split = True
return split
def to_be_split_first(stream1, stream2):
'It sorts the streams, the ones to be split go first'
split1 = do_we_have_to_split(stream1)
split2 = do_we_have_to_split(stream2)
            return int(split2) - int(split1)
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
- if 'splitter' not in stream:
+ splitter = None
+ if 'special' in stream and 'no_split' in stream['special']:
+ splitter = _create_non_splitter_splitter(copy_files=True)
+ elif 'splitter' not in stream:
                msg = 'A splitter should be provided for every input stream'
                msg += ', missing for: ' + str(stream)
raise ValueError(msg)
- splitter = stream['splitter']
+ else:
+ splitter = stream['splitter']
            #the splitter can be a re, in that case we create the function
if '__call__' not in dir(splitter):
splitter = _create_file_splitter_with_re(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
                #the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different than splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
        #we split the output stream files into several splits
+ output_splitter = _create_non_splitter_splitter(copy_files=False)
for index in output_stream_indexes:
stream = streams[index]
            #for the output we just create the new names, but we don't split
            #any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
- files = _output_splitter(fname, work_dirs)
+ files = output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every in stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
'It waits for all the works to finnish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
joiner = None
            if 'joiner' in stream:
joiner = stream['joiner']
else:
joiner = default_cat_joiner
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
            joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
                #if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
            #until 2.6 subprocess.Popen does not support kill
if 'kill' in dir(popen):
popen.kill()
else:
pid = popen.pid
call(['kill', '-9', str(pid)])
def terminate(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
            #until 2.6 subprocess.Popen does not support terminate
if 'terminate' in dir(popen):
popen.terminate()
else:
pid = popen.pid
call(['kill', '-6', str(pid)])
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
    This creates splits whose sizes are as similar as possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
def _items_in_file(fhand, expression_kind, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if ((expression_kind == 'str' and expression in line) or
(expression_kind != 'str' and expression.search(line))):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
def _create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or an str.
    The item in the file will be defined every time a line matches the
expression.
'''
expression_kind = None
if isinstance(expression, str):
expression_kind = 'str'
else:
expression_kind = 're'
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
guaranteed to create as many splits as work dirs. If in the file there
        are fewer items than work_dirs some work_dirs will be left empty.
        It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
nitems = 0
for line in open(fname, 'r'):
if ((expression_kind == 'str' and expression in line) or
(expression_kind != 'str' and expression.search(line))):
nitems += 1
        #how many splits are we going to create? and how many items will be in
#every split
#if there are more items than splits we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = _items_in_file(fhand, expression_kind, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
copy_file_mode(fhand.name, ofh.name)
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
if file_is_str:
new_files.append(ofh.name)
ofh.close()
else:
new_files.append(ofh)
splits_made += 1
return new_files
return splitter
-def _output_splitter(file_, work_dirs):
- '''It creates one output file for every splits.
+def _create_non_splitter_splitter(copy_files=False):
+    '''It creates a splitter function that will not split the given file.
- Every split will be located in one of the work_dirs.
- It returns a list with the fpaths for the new output files.
+ The created splitter will create one file for every work_dir given. This
+    file can be empty (useful for the output streams) or a copy of the given
+ file (useful for the no_split input streams).
'''
- #the file_ can be an fname or an fhand. which one is it?
- file_is_str = None
- if isinstance(file_, str):
- fname = file_
- file_is_str = True
- else:
- fname = file_.name
- file_is_str = False
- #how many splits do we want?
- nsplits = len(work_dirs)
- new_fpaths = []
- #we have to create nsplits
- for split_index in range(nsplits):
- suffix = os.path.splitext(fname)[-1]
- work_dir = work_dirs[split_index]
- #we use delete=False because this temp file is in a temp dir that will
- #be completely deleted. If we use delete=True we get an error because
- #the file might be already deleted when its __del__ method is called
- ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
- delete=False)
- #the file will be deleted
- #what do we need the fname or the fhand?
- if file_is_str:
- #it will be deleted because we just need the name in the temporary
- #directory. tempfile.mktemp would be better for this use, but it is
- #deprecated
- new_fpaths.append(ofh.name)
+ def splitter(file_, work_dirs):
+ '''It creates one output file for every splits.
+
+ Every split will be located in one of the work_dirs.
+ It returns a list with the fpaths for the new files.
+ '''
+ #the file_ can be an fname or an fhand. which one is it?
+ file_is_str = None
+ if isinstance(file_, str):
+ fname = file_
+ file_is_str = True
else:
- new_fpaths.append(ofh)
- return new_fpaths
+ fname = file_.name
+ file_is_str = False
+ #how many splits do we want?
+ nsplits = len(work_dirs)
+
+ new_fpaths = []
+ #we have to create nsplits
+ suffix = os.path.splitext(fname)[-1]
+ for split_index in range(nsplits):
+ work_dir = work_dirs[split_index]
+ #we use delete=False because this temp file is in a temp dir that
+ #will be completely deleted. If we use delete=True we get an error
+ #because the file might be already deleted when its __del__ method
+ #is called
+ ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
+ delete=False)
+ if copy_files:
+ os.remove(ofh.name)
+ os.symlink(fname, ofh.name)
+ #the file will be deleted
+ #what do we need the fname or the fhand?
+ if file_is_str:
+ new_fpaths.append(ofh.name)
+ else:
+ new_fpaths.append(ofh)
+ return new_fpaths
+ return splitter
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
diff --git a/test/prunner_test.py b/test/prunner_test.py
index 502e00d..6a954ab 100644
--- a/test/prunner_test.py
+++ b/test/prunner_test.py
@@ -1,240 +1,268 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
import os
from psubprocess import Popen
from psubprocess.streams import STDIN
from test_utils import create_test_binary
class PRunnerTest(unittest.TestCase):
    'It tests that we can parallelize processes'
@staticmethod
def test_file_in():
'It tests the most basic behaviour'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
in_file.close()
os.remove(bin)
@staticmethod
def test_job_no_in_stream():
        'It tests that a job with no in stream is run splits times'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola' * splits
assert open(stderr.name).read() == 'caracola' * splits
os.remove(bin)
@staticmethod
def test_stdin():
        'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_infile_outfile():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_retcode():
'It tests that we get the correct returncode'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-r', '20'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
        assert popen.wait() == 20 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def xtest_infile_outfile_condor():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10\n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
from psubprocess import CondorPopen
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
runner=CondorPopen,
runner_conf={'transfer_executable':True})
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_stdin_real_splitter():
'It tests that stdin works as input with a real splitter'
bin = create_test_binary()
#with stdin
content = '>hola1\nhola2\n>hola3\nhola4\n>hola5\nhola6\n>hola7\nhola8\n'
content += '>hola9\nhola10\n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':'>'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_2_infile_outfile():
'It tests that we can set 2 input files and 2 output files'
bin = create_test_binary()
#with infile
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10\n'
in_file1 = NamedTemporaryFile()
in_file1.write(content)
in_file1.flush()
in_file2 = NamedTemporaryFile()
in_file2.write(content)
in_file2.flush()
out_file1 = NamedTemporaryFile()
out_file2 = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file1.name, '-t', out_file1.name])
cmd.extend(['-x', in_file2.name, '-z', out_file2.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-x', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'},
{'options': ('-z', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file1.name).read() == content
assert open(out_file2.name).read() == content
in_file1.close()
in_file2.close()
os.remove(bin)
@staticmethod
def test_kill_subjobs():
'It tests that we can kill the subjobs'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-w'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
assert popen.returncode is None
popen.kill()
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
+ @staticmethod
+ def test_nosplit():
+ 'It tests that we can mark some input files as not to be split'
+ bin = create_test_binary()
+ #with infile
+ in_file = NamedTemporaryFile()
+ content = 'hola1\nhola2\n'
+ in_file.write(content)
+ in_file.flush()
+ out_file = NamedTemporaryFile()
+
+ cmd = [bin]
+ cmd.extend(['-i', in_file.name, '-t', out_file.name])
+ stdout = NamedTemporaryFile()
+ stderr = NamedTemporaryFile()
+ cmd_def = [{'options': ('-i', '--input'), 'io': 'in',
+ 'special':['no_split']},
+ {'options': ('-t', '--output'), 'io': 'out'}]
+ splits = 4
+ popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
+ splits=splits)
+ assert popen.wait() == 0 #waits till it finishes and checks the retcode
+ assert not open(stdout.name).read()
+ assert not open(stderr.name).read()
+ assert open(out_file.name).read() == content * splits
+ in_file.close()
+ os.remove(bin)
+
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
\ No newline at end of file
|
JoseBlanca/psubprocess
|
8f7e6dba187ad23c12f471b6f61040643097f30a
|
Some kill bugfixes and tests
|
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index dd2d938..7412f7c 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,649 +1,658 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
This module is useful when we have a non-parallel command to run in a
multiprocessor computer or in a multinode cluster. It will take the input files,
it will split them and it will run a subjob for every one of the splits. It
will wait for the subjobs to finish and it will join the output files
generated by all subjobs. At the end of the process we will get the same
output files as if
the command wasn't run in parallel.
This approach will work with commands that process a lot of items. This module
divides the items into several sets and it assigns each of these sets to one new
subjob. These are the subjobs that will be run in parallel.
To do this it requires the parameters used by Popen: cmd, stdin, stdout, stderr and
some extra information: runner, splits and cmd_def.
runner is optional and it should be a subprocess.Popen like class. If it's not
given, subprocess.Popen will be used. This Popen will be the class used to run
the subjobs. If subprocess.Popen is used the subjobs will run in the processors of
the local node on several independent processes. If the Condor Popen is used
the subjobs will run in a condor cluster.
splits is the number of subjobs that we want to generate. If it's not given the
runner will provide a suitable number.
cmd_def is a list that defines which parts of the cmd are the input and output files.
We need to tell Popen which are the input and output files in order to split
them and join them. The syntax for cmd_def is explained in the streams.py module
'''
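#A minimal usage sketch (the command and file names are hypothetical; the
#cmd_def syntax matches the tests in this repository): split the file given
#after -i by lines, run four local subjobs and join the -t outputs back.
#    from psubprocess import Popen
#    cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter': ''},
#               {'options': ('-t', '--output'), 'io': 'out'}]
#    popen = Popen(['my_cmd', '-i', 'in.txt', '-t', 'out.txt'],
#                  cmd_def=cmd_def, splits=4)
#    assert popen.wait() == 0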
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
import os, tempfile, shutil, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
+from psubprocess.condor_runner import call
from psubprocess import condor_runner
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
class NamedTemporaryDir(object):
'''This class creates temporary directories '''
#pylint: disable-msg=W0622
#we redefine the built-in dir because tempfile uses that interface
def __init__(self, dir=None):
'''It initiates the class.'''
self._name = tempfile.mkdtemp(dir=dir)
def get_name(self):
'Returns the path to the dir'
return self._name
name = property(get_name)
def close(self):
'''It removes the temp dir'''
if os.path.exists(self._name):
shutil.rmtree(self._name)
def __del__(self):
'''It removes the temp dir when the instance is removed and the garbage
collector decides it'''
self.close()
def NamedTemporaryFile(dir=None, delete=False, suffix=''):
'''It creates a temporary file that won't be deleted when closed
This behaviour can be done with tempfile.NamedTemporaryFile in python > 2.6
'''
#pylint: disable-msg=W0613
#delete is not being used, it's there as a reminder, once we start to use
#python 2.6 this function should be removed
#pylint: disable-msg=C0103
#pylint: disable-msg=W0622
#We want to mimic tempfile.NamedTemporaryFile
fpath = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
return open(fpath, 'w')
def copy_file_mode(fpath1, fpath2):
'It copies the os.stat mode from fpath1 to fpath2'
mode = os.stat(fpath1)[0]
os.chmod(fpath2, mode)
class Popen(object):
'''It parallelizes the given processes dividing them into subprocesses.
The interface is similar to subprocess.Popen to ease the use of this class,
although the functionality of this class is much more limited.
When an instance of this class is created a series of subjobs is launched.
When all subjobs are finished returncode will have an int, if they're still
running returncode will be None.
We can wait for all subjobs to finish using the wait method or we can
kill or terminate them using kill and terminate.
'''
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
'''It inits a Popen instance; it creates and runs the subjobs.
Like the subprocess.Popen it accepts stdin, stdout, stderr, but in this
case all of them should be files, PIPE will not work.
In the cmd_def list we have to tell this Popen how to locate the
input and output files in the cmd and how to split and join them. Look
for the cmd_format in the streams.py file.
keyword arguments:
cmd -- a list with the cmd to parallelize
cmd_def -- the cmd definition list (default [])
runner -- which runner to use (default subprocess.Popen)
runner_conf -- extra parameters for the runner (default {})
stdout -- a fhand to store the stdout (default None)
stderr -- a fhand to store the stderr (default None)
stdin -- a fhand with the stdin (default None)
splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
if cmd_def is None:
if stdin is not None:
raise ValueError('No cmd_def given but stdin present')
cmd_def = []
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
copy_file_mode('.', self._work_dir.name)
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
#launch every subjob
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
#we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
'''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
'''Given a base cmd and a streams list it creates one modified cmd for
every stream'''
#streams is really a list of stream lists, one per subjob
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
#is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname and not the path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream in the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
dir_ = NamedTemporaryDir(dir=work_dir)
work_dirs.append(dir_)
copy_file_mode('.', dir_.name)
#we have to do first the input files because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def do_we_have_to_split(stream_index):
'If the stream has to split a file it will return True'
split = None
stream = streams[stream_index]
#maybe they shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
split = False
#maybe there is no file to split
if (('fhand' in stream and stream['fhand'] is None) or
('fname' in stream and stream['fname'] is None) or
('fname' not in stream and 'fhand' not in stream)):
split = False
elif (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
split = True
return split
def to_be_split_first(stream1, stream2):
'It sorts the streams, the ones to be split go first'
split1 = do_we_have_to_split(stream1)
split2 = do_we_have_to_split(stream2)
return int(split1) - int(split2)
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
if 'splitter' not in stream:
msg = 'A splitter should be provided for every input stream, '
msg += 'missing for: ' + str(stream)
raise ValueError(msg)
splitter = stream['splitter']
#the splitter can be a re, in that case we create the function
if '__call__' not in dir(splitter):
splitter = _create_file_splitter_with_re(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
#the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different from splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
#we split the output stream files into several splits
for index in output_stream_indexes:
stream = streams[index]
#for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = _output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every in stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
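#For instance (a hypothetical node): with the default subprocess.Popen
#runner on an 8-core machine os.sysconf('SC_NPROCESSORS_ONLN') returns 8,
#so 8 subjobs are created; the condor runner module supplies its own value.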
def wait(self):
'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
#choose the joiner for this stream, defaulting to plain concatenation
if 'joiner' in stream:
joiner = stream['joiner']
else:
joiner = default_cat_joiner
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
#if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
- popen.kill()
- del self._jobs['popens']
+ #before Python 2.6 subprocess.Popen does not support kill
+ if 'kill' in dir(popen):
+ popen.kill()
+ else:
+ pid = popen.pid
+ call(['kill', '-9', str(pid)])
def terminate(self):
'It terminates all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
- popen.terminate()
- del self._jobs['popens']
+ #before Python 2.6 subprocess.Popen does not support terminate
+ if 'terminate' in dir(popen):
+ popen.terminate()
+ else:
+ pid = popen.pid
+ call(['kill', '-6', str(pid)])
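#A usage sketch of the two methods above, mirroring test_kill_subjobs in
#test/prunner_test.py: launch a long-running cmd and stop every subjob.
#    popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
#    popen.kill()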
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
This is the best way to create splits that are as similar in size as
possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
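#A worked example of the division above: _calculate_divisions(10, 4)
#returns ((2, 3), (2, 2)), i.e. 2 splits with 3 items and 2 splits with
#2 items, so 2 * 3 + 2 * 2 == 10 items spread over 4 splits.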
def _items_in_file(fhand, expression_kind, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if ((expression_kind == 'str' and expression in line) or
(expression_kind != 'str' and expression.search(line))):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
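#For instance, with the str expression '>' a file holding the lines
#'>a', '1', '>b', '2' yields the items '>a\n1\n' and '>b\n2\n': every
#matching line starts a new item and the last item is yielded at the end.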
def _create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or an str.
An item in the file will be defined every time a line matches the
expression.
'''
expression_kind = None
if isinstance(expression, str):
expression_kind = 'str'
else:
expression_kind = 're'
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
guaranteed to create as many splits as work dirs. If in the file there
are fewer items than work_dirs some work_dirs will be left empty.
It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
nitems = 0
for line in open(fname, 'r'):
if ((expression_kind == 'str' and expression in line) or
(expression_kind != 'str' and expression.search(line))):
nitems += 1
#how many splits are we going to create? and how many items will be in
#every split
#if there are more splits than items we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = _items_in_file(fhand, expression_kind, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
copy_file_mode(fhand.name, ofh.name)
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
if file_is_str:
new_files.append(ofh.name)
ofh.close()
else:
new_files.append(ofh)
splits_made += 1
return new_files
return splitter
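#A short sketch of how this factory is used (names are illustrative):
#    splitter = _create_file_splitter_with_re('>')
#    part_files = splitter('seqs.fasta', work_dirs)
#The test suite exercises the same mechanism through cmd_def entries like
#{'options': STDIN, 'io': 'in', 'splitter': '>'}.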
def _output_splitter(file_, work_dirs):
'''It creates one output file for every splits.
Every split will be located in one of the work_dirs.
It returns a list with the fpaths for the new output files.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
new_fpaths = []
#we have to create nsplits
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[split_index]
#we use delete=False because this temp file is in a temp dir that will
#be completely deleted. If we use delete=True we get an error because
#the file might be already deleted when its __del__ method is called
ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
delete=False)
#the file will be deleted
#what do we need the fname or the fhand?
if file_is_str:
#it will be deleted because we just need the name in the temporary
#directory. tempfile.mktemp would be better for this use, but it is
#deprecated
new_fpaths.append(ofh.name)
else:
new_fpaths.append(ofh)
return new_fpaths
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
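#A small sketch (hypothetical fnames): default_cat_joiner('out.txt',
#['part1.txt', 'part2.txt']) leaves out.txt holding the contents of
#part1.txt followed by part2.txt, which is why output streams need no
#explicit 'joiner' unless plain concatenation is not enough.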
diff --git a/test/condor_runner_test.py b/test/condor_runner_test.py
index 1bcfb76..5c705fb 100644
--- a/test/condor_runner_test.py
+++ b/test/condor_runner_test.py
@@ -1,181 +1,195 @@
'''
Created on 14/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
from StringIO import StringIO
import os
from psubprocess.condor_runner import (write_condor_job_file, Popen,
- get_default_splits)
+ get_default_splits, call)
from test_utils import create_test_binary
class CondorRunnerTest(unittest.TestCase):
'It tests the condor runner'
@staticmethod
def test_write_condor_job_file():
'It tests that we can write a condor job file with the right parameters'
fhand1 = NamedTemporaryFile()
fhand2 = NamedTemporaryFile()
flog = NamedTemporaryFile()
stderr_ = NamedTemporaryFile()
stdout_ = NamedTemporaryFile()
stdin_ = NamedTemporaryFile()
expected = '''Executable = /bin/ls
Arguments = "-i %s -j %s"
Universe = vanilla
Log = %s
When_to_transfer_output = ON_EXIT
Getenv = True
Transfer_executable = True
Transfer_input_files = %s,%s
Should_transfer_files = IF_NEEDED
Output = %s
Error = %s
Input = %s
Queue
''' % (fhand1.name, fhand2.name, flog.name, fhand1.name, fhand2.name,
stdout_.name, stderr_.name, stdin_.name)
fhand = StringIO()
parameters = {'executable':'/bin/ls', 'log_file':flog,
'input_fnames':[fhand1.name, fhand2.name],
'arguments':'-i %s -j %s' % (fhand1.name, fhand2.name),
'transfer_executable':True, 'transfer_files':True,
'stdout':stdout_, 'stderr':stderr_, 'stdin':stdin_}
write_condor_job_file(fhand, parameters=parameters)
condor = fhand.getvalue()
assert condor == expected
@staticmethod
def test_run_condor_stdout():
'It tests that we can run a condor job and retrieve stdout and stderr'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
assert open(stderr.name).read() == 'caracola'
os.remove(bin)
@staticmethod
def test_run_condor_stdin():
'It tests that we can run a condor job with stdin'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-s'])
stdin = NamedTemporaryFile()
stdout = NamedTemporaryFile()
stdin.write('hola')
stdin.flush()
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stdin=stdin)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
os.remove(bin)
@staticmethod
def test_run_condor_retcode():
'It tests that we can run a condor job and get the retcode'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-r', '10'])
popen = Popen(cmd, runner_conf={'transfer_executable':True})
assert popen.wait() == 10 #waits till it finishes and checks the retcode
os.remove(bin)
@staticmethod
def test_run_condor_in_file():
'It tests that we can run a condor job with an input file'
bin = create_test_binary()
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'}]
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
os.remove(bin)
def test_run_condor_in_out_file(self):
'It tests that we can run a condor job with an output file'
bin = create_test_binary()
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
out_file = open('output.txt', 'w')
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr, cmd_def=cmd_def)
popen.wait()
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(out_file.name).read() == 'hola'
os.remove(out_file.name)
#an output file with a path won't be allowed when the transfer file
#mechanism is used
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'},
{'options': ('-t', '--output'), 'io': 'out'}]
try:
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr, cmd_def=cmd_def)
self.fail('ValueError expected')
#pylint: disable-msg=W0704
except ValueError:
pass
os.remove(bin)
@staticmethod
def test_default_splits():
'It tests that we can get a suggested number of splits'
assert get_default_splits() > 0
assert isinstance(get_default_splits(), int)
+ @staticmethod
+ def test_run_condor_kill():
+ 'It tests that we can kill a condor job'
+ bin = create_test_binary()
+ #a simple job
+ cmd = [bin]
+ cmd.extend(['-w'])
+ popen = Popen(cmd, runner_conf={'transfer_executable':True})
+ pid = str(popen.pid)
+ popen.kill()
+ stdout = call(['condor_q', pid])[0]
+ assert pid not in stdout
+ os.remove(bin)
+
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
\ No newline at end of file
diff --git a/test/prunner_test.py b/test/prunner_test.py
index e81a866..502e00d 100644
--- a/test/prunner_test.py
+++ b/test/prunner_test.py
@@ -1,226 +1,240 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
import os
from psubprocess import Popen
from psubprocess.streams import STDIN
from test_utils import create_test_binary
class PRunnerTest(unittest.TestCase):
'It tests that we can parallelize processes'
@staticmethod
def test_file_in():
'It tests the most basic behaviour'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
in_file.close()
os.remove(bin)
@staticmethod
def test_job_no_in_stream():
'It tests that a job with no input stream is run splits times'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola' * splits
assert open(stderr.name).read() == 'caracola' * splits
os.remove(bin)
@staticmethod
def test_stdin():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10\n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_infile_outfile():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10\n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_retcode():
'It tests that we get the correct returncode'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-r', '20'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
assert popen.wait() == 20 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def xtest_infile_outfile_condor():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10\n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
from psubprocess import CondorPopen
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
runner=CondorPopen,
runner_conf={'transfer_executable':True})
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_stdin_real_splitter():
'It tests that stdin works as input with a real splitter'
bin = create_test_binary()
#with stdin
content = '>hola1\nhola2\n>hola3\nhola4\n>hola5\nhola6\n>hola7\nhola8\n'
content += '>hola9\nhola10\n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':'>'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_2_infile_outfile():
'It tests that we can set 2 input files and 2 output files'
bin = create_test_binary()
#with infile
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10\n'
in_file1 = NamedTemporaryFile()
in_file1.write(content)
in_file1.flush()
in_file2 = NamedTemporaryFile()
in_file2.write(content)
in_file2.flush()
out_file1 = NamedTemporaryFile()
out_file2 = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file1.name, '-t', out_file1.name])
cmd.extend(['-x', in_file2.name, '-z', out_file2.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-x', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'},
{'options': ('-z', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file1.name).read() == content
assert open(out_file2.name).read() == content
in_file1.close()
in_file2.close()
os.remove(bin)
+ @staticmethod
+ def test_kill_subjobs():
+ 'It tests that we can kill the subjobs'
+ bin = create_test_binary()
+ cmd = [bin]
+ cmd.extend(['-w'])
+ stdout = NamedTemporaryFile()
+ stderr = NamedTemporaryFile()
+ popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
+ assert popen.returncode is None
+ popen.kill()
+ assert not open(stdout.name).read()
+ assert not open(stderr.name).read()
+ os.remove(bin)
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
\ No newline at end of file
diff --git a/test/test_utils.py b/test/test_utils.py
index 23c02e0..3c05a1b 100644
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -1,82 +1,85 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from tempfile import NamedTemporaryFile
import os, stat, shutil
TEST_BINARY = '''#!/usr/bin/env python
-import sys, shutil, os
+import sys, shutil, os, time
args = sys.argv
#-o something send something to stdout
#-e something send something to stderr
#-i some_file send the file content to stdout
#-t some_file copy the -i file to -t file
#-x some_file
#-z some_file copy the -x file to -z file
#-s read stdin and write it to stdout
#-r a number return this retcode
#are the commands in the argv?
arg_indexes = {}
-for param in ('-o', '-e', '-i', '-t', '-s', '-r', '-x', '-z'):
+for param in ('-o', '-e', '-i', '-t', '-s', '-r', '-x', '-z', '-w'):
try:
arg_indexes[param] = args.index(param)
except ValueError:
arg_indexes[param] = None
#stdout, stderr
if arg_indexes['-o']:
sys.stdout.write(args[arg_indexes['-o'] + 1])
if arg_indexes['-e']:
sys.stderr.write(args[arg_indexes['-e'] + 1])
#-i -t
if arg_indexes['-i'] and not arg_indexes['-t']:
sys.stdout.write(open(args[arg_indexes['-i'] + 1]).read())
elif arg_indexes['-i'] and arg_indexes['-t']:
shutil.copy(args[arg_indexes['-i'] + 1], args[arg_indexes['-t'] + 1])
if arg_indexes['-x'] and arg_indexes['-z']:
shutil.copy(args[arg_indexes['-x'] + 1], args[arg_indexes['-z'] + 1])
#stdin
if arg_indexes['-s']:
stdin = sys.stdin.read()
sys.stdout.write(stdin)
#retcode
if arg_indexes['-r']:
retcode = int(args[arg_indexes['-r'] + 1])
else:
retcode = 0
+#wait
+if arg_indexes['-w']:
+ time.sleep(50)
sys.exit(retcode)
'''
def create_test_binary():
'It creates a file with a test python binary in it'
fhand = NamedTemporaryFile(suffix='.py')
fhand.write(TEST_BINARY)
fhand.flush()
os.chmod(fhand.name, stat.S_IXOTH | stat.S_IRWXU)
fname = '/tmp/test_cmd.py'
shutil.copy(fhand.name, fname)
fhand.close()
#it should be executable
return fname
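#A minimal sketch of how this helper is driven by the tests above (stdout
#and stderr are assumed to be NamedTemporaryFile handles, Popen is
#psubprocess.Popen):
#    bin = create_test_binary()
#    popen = Popen([bin, '-o', 'hola'], stdout=stdout, stderr=stderr,
#                  cmd_def=[])
#    assert popen.wait() == 0
#    os.remove(bin)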
|
JoseBlanca/psubprocess
|
f465f1b38724e40e18be7410a216610b214d3268
|
More documentation written
|
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index efd6a26..dd2d938 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,536 +1,542 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
This module is useful when we have a non-parallel command to run in a
multiprocessor computer or in a multinode cluster. It will take the input files,
it will split them and it will run a subjob for every one of the splits. It
will wait for the subjobs to finish and it will join the output files
generated by all subjobs. At the end of the process we will get the same
output files as if
the command wasn't run in parallel.
+This approach will work with commands that process a lot of items. This module
+divides the items into several sets and it assigns each of these sets to one new
+subjob. These are the subjobs that will be run in parallel.
+
To do this it requires the parameters used by Popen: cmd, stdin, stdout, stderr and
some extra information: runner, splits and cmd_def.
runner is optional and it should be a subprocess.Popen like class. If it's not
-given subprocess.Popen will be used. This will be the class used to run the
-subjobs. In that case the subjobs will run in the processors of the local node.
+given, subprocess.Popen will be used. This Popen will be the class used to run
+the subjobs. If subprocess.Popen is used the subjobs will run in the processors of
+the local node on several independent processes. If the Condor Popen is used
+the subjobs will run in a condor cluster.
splits is the number of subjobs that we want to generate. If it's not given the
runner will provide a suitable number.
cmd_def is a list that defines which parts of the cmd are the input and output files.
We need to tell Popen which are the input and output files in order to split
-them and join them.
+them and join them. The syntax for cmd_def is explained in the streams.py module
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
import os, tempfile, shutil, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
from psubprocess import condor_runner
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
class NamedTemporaryDir(object):
'''This class creates temporary directories '''
#pylint: disable-msg=W0622
#we redefine the built-in dir because tempfile uses that interface
def __init__(self, dir=None):
'''It initiates the class.'''
self._name = tempfile.mkdtemp(dir=dir)
def get_name(self):
'Returns the path to the dir'
return self._name
name = property(get_name)
def close(self):
'''It removes the temp dir'''
if os.path.exists(self._name):
shutil.rmtree(self._name)
def __del__(self):
'''It removes the temp dir when the instance is removed and the garbage
collector decides it'''
self.close()
def NamedTemporaryFile(dir=None, delete=False, suffix=''):
'''It creates a temporary file that won't be deleted when closed
This behaviour can be done with tempfile.NamedTemporaryFile in python > 2.6
'''
#pylint: disable-msg=W0613
#delete is not being used, it's there as a reminder, once we start to use
#python 2.6 this function should be removed
#pylint: disable-msg=C0103
#pylint: disable-msg=W0622
#We want to mimic tempfile.NamedTemporaryFile
fpath = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
return open(fpath, 'w')
def copy_file_mode(fpath1, fpath2):
'It copies the os.stat mode from fpath1 to fpath2'
mode = os.stat(fpath1)[0]
os.chmod(fpath2, mode)
class Popen(object):
'''It parallelizes the given processes dividing them into subprocesses.
The interface is similar to subprocess.Popen to ease the use of this class,
although the functionality of this class is much more limited.
When an instance of this class is created a series of subjobs is launched.
When all subjobs are finished returncode will have an int, if they're still
running returncode will be None.
We can wait for all subjobs to finish using the wait method or we can
kill or terminate them using kill and terminate.
'''
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
'''It inits a Popen instance; it creates and runs the subjobs.
Like the subprocess.Popen it accepts stdin, stdout, stderr, but in this
case all of them should be files, PIPE will not work.
In the cmd_def list we have to tell this Popen how to locate the
input and output files in the cmd and how to split and join them. Look
for the cmd_format in the streams.py file.
keyword arguments:
cmd -- a list with the cmd to parallelize
cmd_def -- the cmd definition list (default [])
runner -- which runner to use (default subprocess.Popen)
runner_conf -- extra parameters for the runner (default {})
stdout -- a fhand to store the stdout (default None)
stderr -- a fhand to store the stderr (default None)
stdin -- a fhand with the stdin (default None)
splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
if cmd_def is None:
if stdin is not None:
raise ValueError('No cmd_def given but stdin present')
cmd_def = []
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
copy_file_mode('.', self._work_dir.name)
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
#launch every subjob
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
#we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
'''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
'''Given a base cmd and a streams list it creates one modified cmd for
every stream'''
#streams is really a list of stream lists, one per subjob
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
#is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname and not the path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream in the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
dir_ = NamedTemporaryDir(dir=work_dir)
work_dirs.append(dir_)
copy_file_mode('.', dir_.name)
#we have to do first the input files because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def do_we_have_to_split(stream_index):
'If the stream has to split a file it will return True'
split = None
stream = streams[stream_index]
#maybe they shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
split = False
#maybe there is no file to split
if (('fhand' in stream and stream['fhand'] is None) or
('fname' in stream and stream['fname'] is None) or
('fname' not in stream and 'fhand' not in stream)):
split = False
elif (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
split = True
return split
def to_be_split_first(stream1, stream2):
'It sorts the streams, the ones to be split go first'
split1 = do_we_have_to_split(stream1)
split2 = do_we_have_to_split(stream2)
return int(split1) - int(split2)
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
if 'splitter' not in stream:
msg = 'A splitter should be provided for every input stream, '
msg += 'missing for: ' + str(stream)
raise ValueError(msg)
splitter = stream['splitter']
#the splitter can be a re, in that case we create the function
if '__call__' not in dir(splitter):
splitter = _create_file_splitter_with_re(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
#the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different from splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
#we split the output stream files into several splits
for index in output_stream_indexes:
stream = streams[index]
#for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = _output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every in stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
#choose the joiner for this stream, defaulting to plain concatenation
if 'joiner' in stream:
joiner = stream['joiner']
else:
joiner = default_cat_joiner
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
            joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
                #if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
popen.kill()
del self._jobs['popens']
def terminate(self):
        'It terminates all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
popen.terminate()
del self._jobs['popens']
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
    This is the best way to create the requested number of splits with
    sizes as similar as possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
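#A worked example of the arithmetic above, added for illustration:
#_calculate_divisions(10, 4) == ((2, 3), (2, 2))
#because num_fragments1 = 10 % 4 = 2 fragments carry 10 // 4 + 1 = 3 items
#and num_fragments2 = 4 - 2 = 2 fragments carry 10 // 4 = 2 items,
#so 2 * 3 + 2 * 2 = 10 items in total.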
def _items_in_file(fhand, expression_kind, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if ((expression_kind == 'str' and expression in line) or
(expression_kind != 'str' and expression.search(line))):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
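#An illustration with a hypothetical fasta-like file split on '>':
#a file with the lines '>seq1', 'ACTG', '>seq2', 'GGTT' yields two items,
#'>seq1\nACTG\n' and '>seq2\nGGTT\n'.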
def _create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or an str.
    A new item in the file starts every time a line matches the
    expression.
'''
expression_kind = None
if isinstance(expression, str):
expression_kind = 'str'
else:
expression_kind = 're'
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
        guaranteed to create as many splits as work dirs. If in the file there
        are fewer items than work_dirs some work_dirs will be left empty.
        It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
diff --git a/scripts/run_in_parallel.py b/scripts/run_in_parallel.py
index bfa2c51..a1aa3ce 100644
--- a/scripts/run_in_parallel.py
+++ b/scripts/run_in_parallel.py
@@ -1,112 +1,117 @@
-'''
-Created on 21/07/2009
+'''This script allows the easy parallelization of command line utilities.
-@author: jose
+If you have a command that processes a file with a set of items, it is
+quite easy to run it in a parallel environment using this script. The file
+will be divided into equally sized subjobs, these subjobs will be run in
+parallel, and once completed the output files will be generated as if the
+original command had run.
+The subjobs can be run in one node with several processors or in a cluster
+with several nodes using condor.
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from optparse import OptionParser
import os.path, sys, signal
from psubprocess import CondorPopen, Popen
POPEN = None
def parse_options():
'It parses the command line arguments'
parser = OptionParser('usage: %prog -c "command"')
parser.add_option('-n', '--nsplits', dest='splits',
help='number of subjobs to create')
parser.add_option('-r', '--runner', dest='runner', default='subprocess',
help='who should run the subjobs (subprocess or condor)')
parser.add_option('-c', '--command', dest='command',
help='The command to run')
parser.add_option('-o', '--stdout', dest='stdout',
help='A file to store the stdout')
parser.add_option('-e', '--stderr', dest='stderr',
help='A file to store the stderr')
parser.add_option('-i', '--stdin', dest='stdin',
help='A file to store the stdin')
parser.add_option('-d', '--cmd_def', dest='cmd_def',
help='The command line definition')
parser.add_option('-q', '--runner_req', dest='runner_req',
help='runner requirements')
return parser
def get_options():
'It returns a dict with the options'
parser = parse_options()
cmd_options = parser.parse_args()[0]
options = {}
if cmd_options.command is None:
raise parser.error('The command should be set')
else:
options['cmd'] = cmd_options.command.split()
if cmd_options.stdout is not None:
options['stdout'] = open(cmd_options.stdout, 'w')
if cmd_options.stderr is not None:
options['stderr'] = open(cmd_options.stderr, 'w')
if cmd_options.stdin is not None:
options['stdin'] = open(cmd_options.stdin)
if cmd_options.runner == 'subprocess':
options['runner'] = None
elif cmd_options.runner == 'condor':
runner_conf = {}
runner_conf['transfer_executable'] = False
if cmd_options.runner_req is not None:
runner_conf['requirements'] = cmd_options.runner_req
options['runner_conf'] = runner_conf
options['runner'] = CondorPopen
else:
parser.error('Allowable runners are: subprocess and condor')
if cmd_options.cmd_def is None:
options['cmd_def'] = []
else:
cmd_def = cmd_options.cmd_def
#it can be a file or an str
if os.path.exists(cmd_def):
cmd_def = open(cmd_def).read()
cmd_def = eval(cmd_def)
if not isinstance(cmd_def, list):
msg = 'cmd_def should be a list of dicts, read the documentation'
parser.error(msg)
options['cmd_def'] = cmd_def
return options
def kill_processes():
'It kills the ongoing process'
if POPEN is not None:
POPEN.kill()
sys.exit(-1)
def set_signal_handlers():
    'It sets handlers for the SIGTERM, SIGABRT and SIGINT signals'
signal.signal(signal.SIGTERM, kill_processes)
signal.signal(signal.SIGABRT, kill_processes)
signal.signal(signal.SIGINT, kill_processes)
def main():
'It runs a command in parallel'
set_signal_handlers()
options = get_options()
global POPEN
POPEN = Popen(**options)
sys.exit(POPEN.wait())
if __name__ == '__main__':
main()
\ No newline at end of file
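For reference, a possible invocation of the script above; this is a sketch, and the -d literal is illustrative only, since the real cmd_def syntax is documented in streams.py (not shown here):

    run_in_parallel.py -n 8 -r condor -o stdout.txt -e stderr.txt \
        -c "my_tool seqs.fasta out.txt" \
        -d "[{'options': 1, 'io': 'in', 'splitter': '>'}, {'options': 2, 'io': 'out'}]"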
diff --git a/scripts/run_with_condor.py b/scripts/run_with_condor.py
index c1d70bc..306b680 100644
--- a/scripts/run_with_condor.py
+++ b/scripts/run_with_condor.py
@@ -1,106 +1,105 @@
-'''
-Created on 21/07/2009
+'''This script eases the running of a job in a condor environment.
-@author: jose
+The condor job file will be created for you.
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from optparse import OptionParser
import os.path, sys, signal
from psubprocess import CondorPopen
POPEN = None
def parse_options():
'It parses the command line arguments'
parser = OptionParser('usage: %prog -c "command"')
parser.add_option('-c', '--command', dest='command',
help='The command to run')
parser.add_option('-o', '--stdout', dest='stdout',
help='A file to store the stdout')
parser.add_option('-e', '--stderr', dest='stderr',
help='A file to store the stderr')
parser.add_option('-i', '--stdin', dest='stdin',
help='A file to store the stdin')
parser.add_option('-d', '--cmd_def', dest='cmd_def',
help='The command line definition')
parser.add_option('-l', '--log', dest='condor_log',
help='The log file')
parser.add_option('-q', '--condor_req', dest='runner_req',
                      help='condor requirements for the job')
return parser
def get_options():
'It returns a dict with the options'
parser = parse_options()
cmd_options = parser.parse_args()[0]
options = {}
if cmd_options.command is None:
raise parser.error('The command should be set')
else:
options['cmd'] = cmd_options.command.split()
if cmd_options.stdout is not None:
options['stdout'] = open(cmd_options.stdout, 'w')
if cmd_options.stderr is not None:
options['stderr'] = open(cmd_options.stderr, 'w')
if cmd_options.stdin is not None:
options['stdin'] = open(cmd_options.stdin)
if cmd_options.cmd_def is None:
options['cmd_def'] = []
else:
cmd_def = cmd_options.cmd_def
#it can be a file or an str
if os.path.exists(cmd_def):
cmd_def = open(cmd_def).read()
cmd_def = eval(cmd_def)
if not isinstance(cmd_def, list):
msg = 'cmd_def should be a list of dicts, read the documentation'
parser.error(msg)
options['cmd_def'] = cmd_def
runner_conf = {}
if cmd_options.condor_log is not None:
condor_log = open(cmd_options.condor_log, 'w')
runner_conf['condor_log'] = condor_log
runner_conf['transfer_executable'] = False
options['runner_conf'] = runner_conf
return options
def kill_processes():
'It kills the ongoing process'
if POPEN is not None:
POPEN.kill()
sys.exit(-1)
def set_signal_handlers():
    'It sets handlers for the SIGTERM, SIGABRT and SIGINT signals'
signal.signal(signal.SIGTERM, kill_processes)
signal.signal(signal.SIGABRT, kill_processes)
signal.signal(signal.SIGINT, kill_processes)
def main():
'It runs a command in a condor cluster'
set_signal_handlers()
options = get_options()
global POPEN
POPEN = CondorPopen(**options)
sys.exit(POPEN.wait())
if __name__ == '__main__':
main()
\ No newline at end of file
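For reference, a possible invocation of the script above (a sketch; the file names and the requirements string are made up):

    run_with_condor.py -c "my_tool seqs.fasta out.txt" \
        -o stdout.txt -e stderr.txt -l condor.log -q "Memory > 1024"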
|
JoseBlanca/psubprocess
|
f7235dc3729093dfb1c5bf7e018e53f19301d66f
|
Documentation added to the condor_runner module
|
diff --git a/psubprocess/condor_runner.py b/psubprocess/condor_runner.py
index 4882050..6c45507 100644
--- a/psubprocess/condor_runner.py
+++ b/psubprocess/condor_runner.py
@@ -1,298 +1,339 @@
-'''It launches processes using Condor with an interface similar to Popen
+'''The main aim of this module is to provide an easy way to launch condor jobs.
+
+Condor is a specialized workload management system for compute-intensive jobs.
+Like other full-featured batch systems, Condor provides a job queueing
+mechanism, scheduling policy, priority scheme, resource monitoring, and
+resource management. More on condor on its web site:
+http://www.cs.wisc.edu/condor/
+
+The interface used is similar to the subprocess.Popen one.
+Besides the standard parameters like cmd, stdout, stderr, and stdin, this condor
+Popen takes a couple of extra parameters, cmd_def and runner_conf. The cmd_def
+syntax is explained in the streams.py file. Condor Popen needs the cmd_def to
+be able to get from the cmd which are the input and output files. The input
+files should be specified in the condor job file in case we want
+to transfer them to the computing nodes. Also, the input and output files
+in the cmd should have no paths, otherwise the command would fail on the
+other machines. That's why we need cmd_def.
Created on 14/07/2009
-
-@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from tempfile import NamedTemporaryFile
import subprocess, signal, os.path
from psubprocess.streams import get_streams_from_cmd
def call(cmd, env=None, stdin=None):
'It calls a command and it returns stdout, stderr and retcode'
def subprocess_setup():
''' Python installs a SIGPIPE handler by default. This is usually not
what non-Python subprocesses expect. Taken from this url:
http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009/07/02#
2009-07-02-python-sigpipe'''
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
if stdin is None:
pstdin = None
else:
pstdin = subprocess.PIPE
process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, env=env, stdin=pstdin,
preexec_fn=subprocess_setup)
if stdin is None:
stdout, stderr = process.communicate()
else:
# a = stdin.read()
# print a
# stdout, stderr = subprocess.Popen.stdin = stdin
# print stdin.read()
stdout, stderr = process.communicate(stdin)
retcode = process.returncode
return stdout, stderr, retcode
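#typical use of the helper above (for illustration):
#stdout, stderr, retcode = call(['condor_submit', 'job.condor'])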
def write_condor_job_file(fhand, parameters):
'It writes a condor job file using the given fhand'
to_print = 'Executable = %s\nArguments = "%s"\nUniverse = vanilla\n' % \
(parameters['executable'], parameters['arguments'])
fhand.write(to_print)
to_print = 'Log = %s\n' % parameters['log_file'].name
fhand.write(to_print)
if parameters['transfer_files']:
to_print = 'When_to_transfer_output = ON_EXIT\n'
fhand.write(to_print)
to_print = 'Getenv = True\n'
fhand.write(to_print)
if ('transfer_executable' in parameters and
parameters['transfer_executable']):
to_print = 'Transfer_executable = %s\n' % \
parameters['transfer_executable']
fhand.write(to_print)
if 'input_fnames' in parameters and parameters['input_fnames']:
ins = ','.join(parameters['input_fnames'])
to_print = 'Transfer_input_files = %s\n' % ins
fhand.write(to_print)
if parameters['transfer_files']:
to_print = 'Should_transfer_files = IF_NEEDED\n'
fhand.write(to_print)
if 'requirements' in parameters:
to_print = "Requirements = %s\n" % parameters['requirements']
fhand.write(to_print)
if 'stdout' in parameters:
to_print = 'Output = %s\n' % parameters['stdout'].name
fhand.write(to_print)
if 'stderr' in parameters:
to_print = 'Error = %s\n' % parameters['stderr'].name
fhand.write(to_print)
if 'stdin' in parameters:
to_print = 'Input = %s\n' % parameters['stdin'].name
fhand.write(to_print)
to_print = 'Queue\n'
fhand.write(to_print)
fhand.flush()
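#For illustration, the function above writes a submit file roughly like this
#(paths and file names are made up):
#  Executable = /usr/bin/my_tool
#  Arguments = "seqs.fasta out.txt"
#  Universe = vanilla
#  Log = /tmp/tmpXYZ.log
#  When_to_transfer_output = ON_EXIT
#  Getenv = True
#  Transfer_input_files = seqs.fasta
#  Should_transfer_files = IF_NEEDED
#  Queue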
class Popen(object):
- 'It launches and controls a condor job'
- def __init__(self, cmd, cmd_def=None, runner_conf=None,
- stdout=None, stderr=None, stdin=None):
- 'It launches a condor job'
+ '''It launches and controls a condor job.
+
+ The job is launched when an instance is created. After that we can get the
+    cluster id with the pid property. The rest of the interface is very similar
+ to the subprocess.Popen one. There's no communicate method because there's
+ no support for PIPE.
+ '''
+ def __init__(self, cmd, cmd_def=None, runner_conf=None, stdout=None,
+ stderr=None, stdin=None):
+ '''It launches a condor job.
+
+ The interface is similar to the subprocess.Popen one, although there are
+ some differences.
+ stdout, stdin and stderr should be file handlers, there's no support for
+ PIPEs. The extra parameter cmd_def is required if we need to transfer
+ the input and output files to the computing nodes of the cluster using
+ the condor file transfer mechanism. The cmd_def syntax is explained in
+ the streams.py file.
+ runner_conf is a dict that admits several parameters that control how
+ condor is run:
+ - transfer_files: do we want to transfer the files using the condor
+ transfer file mechanism? (default True)
+ - condor_log: the condor log file. If it's not given Popen will
+ create a condor log file in the tempdir.
+ - transfer_executable: do we want to transfer the executable?
+ (default False)
+ - requirements: The requirements line for the condor job file.
+ (default None)
+ '''
+ #we use the same parameters as subprocess.Popen
+ #pylint: disable-msg=R0913
if cmd_def is None:
cmd_def = []
#runner conf
if runner_conf is None:
runner_conf = {}
#some defaults
if 'transfer_files' not in runner_conf:
runner_conf['transfer_files'] = True
if 'condor_log' not in runner_conf:
self._log_file = NamedTemporaryFile(suffix='.log')
else:
self._log_file = runner_conf['condor_log']
#create condor job file
condor_job_file = self._create_condor_job_file(cmd, cmd_def,
self._log_file,
runner_conf,
stdout, stderr, stdin)
self._condor_job_file = condor_job_file
#print open(condor_job_file.name).read()
#launch condor
self._retcode = None
self._cluster_number = None
self._launch_condor(condor_job_file)
def _launch_condor(self, condor_job_file):
'Given the condor_job_file it launches the condor job'
try:
stdout, stderr, retcode = call(['condor_submit', condor_job_file.name])
except OSError:
raise OSError('condor_submit not found in your path')
if retcode:
msg = 'There was a problem with condor_submit: ' + stderr
raise RuntimeError(msg)
#the condor cluster number is given by condor_submit
#1 job(s) submitted to cluster 15.
for line in stdout.splitlines():
if 'submitted to cluster' in line:
self._cluster_number = line.strip().strip('.').split()[-1]
def _get_pid(self):
'It returns the condor cluster number'
return self._cluster_number
pid = property(_get_pid)
def _get_returncode(self):
'It returns the return code'
return self._retcode
returncode = property(_get_returncode)
@staticmethod
def _remove_paths_from_cmd(cmd, streams, conf):
'''It removes the absolute and relative paths from the cmd,
it returns the modified cmd'''
cmd_mod = cmd[:]
for stream in streams:
if 'fname' not in stream:
continue
fpath = stream['fname']
#for the output files we can't deal with transfering files with
#paths. Condor will deliver those files into the initialdir, not
#where we expected.
if (stream['io'] != 'in' and conf['transfer_files']
and os.path.split(fpath)[-1] != fpath):
msg = 'output files with paths are not transferable'
raise ValueError(msg)
index = cmd_mod.index(fpath)
fpath = os.path.split(fpath)[-1]
cmd_mod[index] = fpath
return cmd_mod
def _create_condor_job_file(self, cmd, cmd_def, log_file, runner_conf,
stdout, stderr, stdin):
'Given a cmd and the cmd_def it returns the condor job file'
#streams
streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
#we need some parameters to write the condor file
parameters = {}
#the executable
binary = cmd[0]
#the binary should be an absolute path
if not os.path.isabs(binary):
#the path to the binary could be relative
if os.sep in binary:
#we make the path absolute
binary = os.path.abspath(binary)
else:
#we have to look in the system $PATH
binary = call(['which', binary])[0].strip()
parameters['executable'] = binary
parameters['log_file'] = log_file
#the cmd shouldn't have absolute path in the files because they will be
        #transferred to another node in the condor working dir and they wouldn't
#be found with an absolute path
cmd_no_path = self._remove_paths_from_cmd(cmd, streams, runner_conf)
parameters['arguments'] = ' '.join(cmd_no_path[1:])
if stdout is not None:
parameters['stdout'] = stdout
if stderr is not None:
parameters['stderr'] = stderr
if stdin is not None:
parameters['stdin'] = stdin
transfer_bin = False
if 'transfer_executable' in runner_conf:
transfer_bin = runner_conf['transfer_executable']
parameters['transfer_executable'] = transfer_bin
        transfer_files = runner_conf['transfer_files']
        parameters['transfer_files'] = transfer_files
if 'requirements' in runner_conf:
parameters['requirements'] = runner_conf['requirements']
in_fnames = []
for stream in streams:
if stream['io'] == 'in':
fname = None
if 'fname' in stream:
fname = stream['fname']
else:
fname = stream['fhand'].name
in_fnames.append(fname)
parameters['input_fnames'] = in_fnames
#now we can create the job file
condor_job_file = NamedTemporaryFile()
write_condor_job_file(condor_job_file, parameters=parameters)
return condor_job_file
def _update_retcode(self):
'It updates the retcode looking at the log file, it returns the retcode'
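        #the condor log contains lines like 'Normal termination (return value 0)'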
for line in open(self._log_file.name):
if 'return value' in line:
ret = line.split('return value')[1].strip().strip(')')
self._retcode = int(ret)
return self._retcode
def poll(self):
        'It checks if condor has run our condor cluster'
cluster_number = self._cluster_number
cmd = ['condor_q', cluster_number,
'-format', '"%d.\n"', 'ClusterId']
stdout, stderr, retcode = call(cmd)
if retcode:
msg = 'There was a problem with condor_q: ' + stderr
raise RuntimeError(msg)
if cluster_number not in stdout:
#the job is finished
return self._update_retcode()
return self._retcode
def wait(self):
'It waits until the condor job is finished'
try:
stderr, retcode = call(['condor_wait', self._log_file.name])[1:]
except OSError:
raise OSError('condor_wait not found in your path')
if retcode:
msg = 'There was a problem with condor_wait: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def kill(self):
'It runs condor_rm for the condor job'
try:
stderr, retcode = call(['condor_rm', self.pid])[1:]
except OSError:
raise OSError('condor_rm not found in your path')
if retcode:
msg = 'There was a problem with condor_rm: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def terminate(self):
'It runs condor_rm for the condor job'
self.kill()
def get_default_splits():
'It returns a suggested number of splits for this Popen runner'
try:
stdout, stderr, retcode = call(['condor_status', '-total'])
except OSError:
raise OSError('condor_status not found in your path')
if retcode:
msg = 'There was a problem with condor_status: ' + stderr
raise RuntimeError(msg)
for line in stdout.splitlines():
line = line.strip().lower()
if line.startswith('total') and 'owner' not in line:
return int(line.split()[1]) * 2
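A minimal usage sketch of this condor Popen (it assumes a working condor pool; the command, file name and requirements string are made up):

    from psubprocess import CondorPopen

    stdout = open('stdout.txt', 'w')
    job = CondorPopen(['hostname'], stdout=stdout,
                      runner_conf={'transfer_files': False,
                                   'requirements': 'Memory > 1024'})
    print job.pid          #the condor cluster id
    retcode = job.wait()   #blocks until condor_wait returns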
|
JoseBlanca/psubprocess
|
1a45c629502ffacd0c0b9c968a03838a6747b34d
|
Several bugs in the parallel splitter fixed
|
diff --git a/psubprocess/condor_runner.py b/psubprocess/condor_runner.py
index 5baebeb..4882050 100644
--- a/psubprocess/condor_runner.py
+++ b/psubprocess/condor_runner.py
@@ -1,284 +1,298 @@
'''It launches processes using Condor with an interface similar to Popen
Created on 14/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from tempfile import NamedTemporaryFile
import subprocess, signal, os.path
from psubprocess.streams import get_streams_from_cmd
def call(cmd, env=None, stdin=None):
'It calls a command and it returns stdout, stderr and retcode'
def subprocess_setup():
''' Python installs a SIGPIPE handler by default. This is usually not
what non-Python subprocesses expect. Taken from this url:
http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009/07/02#
2009-07-02-python-sigpipe'''
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
if stdin is None:
pstdin = None
else:
pstdin = subprocess.PIPE
process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, env=env, stdin=pstdin,
preexec_fn=subprocess_setup)
if stdin is None:
stdout, stderr = process.communicate()
else:
# a = stdin.read()
# print a
# stdout, stderr = subprocess.Popen.stdin = stdin
# print stdin.read()
stdout, stderr = process.communicate(stdin)
retcode = process.returncode
return stdout, stderr, retcode
def write_condor_job_file(fhand, parameters):
'It writes a condor job file using the given fhand'
to_print = 'Executable = %s\nArguments = "%s"\nUniverse = vanilla\n' % \
(parameters['executable'], parameters['arguments'])
fhand.write(to_print)
to_print = 'Log = %s\n' % parameters['log_file'].name
fhand.write(to_print)
if parameters['transfer_files']:
to_print = 'When_to_transfer_output = ON_EXIT\n'
fhand.write(to_print)
to_print = 'Getenv = True\n'
fhand.write(to_print)
if ('transfer_executable' in parameters and
parameters['transfer_executable']):
to_print = 'Transfer_executable = %s\n' % \
parameters['transfer_executable']
fhand.write(to_print)
if 'input_fnames' in parameters and parameters['input_fnames']:
ins = ','.join(parameters['input_fnames'])
to_print = 'Transfer_input_files = %s\n' % ins
fhand.write(to_print)
if parameters['transfer_files']:
to_print = 'Should_transfer_files = IF_NEEDED\n'
fhand.write(to_print)
if 'requirements' in parameters:
- to_print = "requeriments = '%s'\n" % parameters['requirements']
+ to_print = "Requirements = %s\n" % parameters['requirements']
fhand.write(to_print)
if 'stdout' in parameters:
to_print = 'Output = %s\n' % parameters['stdout'].name
fhand.write(to_print)
if 'stderr' in parameters:
to_print = 'Error = %s\n' % parameters['stderr'].name
fhand.write(to_print)
if 'stdin' in parameters:
to_print = 'Input = %s\n' % parameters['stdin'].name
fhand.write(to_print)
to_print = 'Queue\n'
fhand.write(to_print)
fhand.flush()
class Popen(object):
'It launches and controls a condor job'
def __init__(self, cmd, cmd_def=None, runner_conf=None,
stdout=None, stderr=None, stdin=None):
'It launches a condor job'
if cmd_def is None:
cmd_def = []
#runner conf
if runner_conf is None:
runner_conf = {}
#some defaults
if 'transfer_files' not in runner_conf:
runner_conf['transfer_files'] = True
if 'condor_log' not in runner_conf:
self._log_file = NamedTemporaryFile(suffix='.log')
else:
self._log_file = runner_conf['condor_log']
#create condor job file
condor_job_file = self._create_condor_job_file(cmd, cmd_def,
self._log_file,
runner_conf,
stdout, stderr, stdin)
self._condor_job_file = condor_job_file
+ #print open(condor_job_file.name).read()
#launch condor
self._retcode = None
self._cluster_number = None
self._launch_condor(condor_job_file)
def _launch_condor(self, condor_job_file):
'Given the condor_job_file it launches the condor job'
- stdout, stderr, retcode = call(['condor_submit', condor_job_file.name])
+ try:
+ stdout, stderr, retcode = call(['condor_submit', condor_job_file.name])
+ except OSError:
+ raise OSError('condor_submit not found in your path')
if retcode:
msg = 'There was a problem with condor_submit: ' + stderr
raise RuntimeError(msg)
#the condor cluster number is given by condor_submit
#1 job(s) submitted to cluster 15.
for line in stdout.splitlines():
if 'submitted to cluster' in line:
self._cluster_number = line.strip().strip('.').split()[-1]
def _get_pid(self):
'It returns the condor cluster number'
return self._cluster_number
pid = property(_get_pid)
def _get_returncode(self):
'It returns the return code'
return self._retcode
returncode = property(_get_returncode)
@staticmethod
def _remove_paths_from_cmd(cmd, streams, conf):
'''It removes the absolute and relative paths from the cmd,
it returns the modified cmd'''
cmd_mod = cmd[:]
for stream in streams:
if 'fname' not in stream:
continue
fpath = stream['fname']
            #for the output files we can't deal with transferring files with
#paths. Condor will deliver those files into the initialdir, not
#where we expected.
if (stream['io'] != 'in' and conf['transfer_files']
and os.path.split(fpath)[-1] != fpath):
msg = 'output files with paths are not transferable'
raise ValueError(msg)
index = cmd_mod.index(fpath)
fpath = os.path.split(fpath)[-1]
cmd_mod[index] = fpath
return cmd_mod
def _create_condor_job_file(self, cmd, cmd_def, log_file, runner_conf,
stdout, stderr, stdin):
'Given a cmd and the cmd_def it returns the condor job file'
#streams
streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
#we need some parameters to write the condor file
parameters = {}
#the executable
binary = cmd[0]
#the binary should be an absolute path
if not os.path.isabs(binary):
#the path to the binary could be relative
if os.sep in binary:
#we make the path absolute
binary = os.path.abspath(binary)
else:
#we have to look in the system $PATH
binary = call(['which', binary])[0].strip()
parameters['executable'] = binary
parameters['log_file'] = log_file
#the cmd shouldn't have absolute path in the files because they will be
        #transferred to another node in the condor working dir and they wouldn't
#be found with an absolute path
cmd_no_path = self._remove_paths_from_cmd(cmd, streams, runner_conf)
parameters['arguments'] = ' '.join(cmd_no_path[1:])
if stdout is not None:
parameters['stdout'] = stdout
if stderr is not None:
parameters['stderr'] = stderr
if stdin is not None:
parameters['stdin'] = stdin
transfer_bin = False
if 'transfer_executable' in runner_conf:
transfer_bin = runner_conf['transfer_executable']
parameters['transfer_executable'] = transfer_bin
        transfer_files = runner_conf['transfer_files']
        parameters['transfer_files'] = transfer_files
if 'requirements' in runner_conf:
parameters['requirements'] = runner_conf['requirements']
in_fnames = []
for stream in streams:
if stream['io'] == 'in':
fname = None
if 'fname' in stream:
fname = stream['fname']
else:
fname = stream['fhand'].name
in_fnames.append(fname)
parameters['input_fnames'] = in_fnames
#now we can create the job file
condor_job_file = NamedTemporaryFile()
write_condor_job_file(condor_job_file, parameters=parameters)
return condor_job_file
def _update_retcode(self):
'It updates the retcode looking at the log file, it returns the retcode'
for line in open(self._log_file.name):
if 'return value' in line:
ret = line.split('return value')[1].strip().strip(')')
self._retcode = int(ret)
return self._retcode
def poll(self):
        'It checks if condor has run our condor cluster'
cluster_number = self._cluster_number
cmd = ['condor_q', cluster_number,
'-format', '"%d.\n"', 'ClusterId']
stdout, stderr, retcode = call(cmd)
if retcode:
msg = 'There was a problem with condor_q: ' + stderr
raise RuntimeError(msg)
if cluster_number not in stdout:
#the job is finished
return self._update_retcode()
return self._retcode
def wait(self):
'It waits until the condor job is finished'
- stderr, retcode = call(['condor_wait', self._log_file.name])[1:]
+ try:
+ stderr, retcode = call(['condor_wait', self._log_file.name])[1:]
+ except OSError:
+ raise OSError('condor_wait not found in your path')
if retcode:
msg = 'There was a problem with condor_wait: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def kill(self):
'It runs condor_rm for the condor job'
- stderr, retcode = call(['condor_rm', self.pid])[1:]
+ try:
+ stderr, retcode = call(['condor_rm', self.pid])[1:]
+ except OSError:
+ raise OSError('condor_rm not found in your path')
+
if retcode:
msg = 'There was a problem with condor_rm: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def terminate(self):
'It runs condor_rm for the condor job'
self.kill()
def get_default_splits():
'It returns a suggested number of splits for this Popen runner'
- stdout, stderr, retcode = call(['condor_status', '-total'])
+ try:
+ stdout, stderr, retcode = call(['condor_status', '-total'])
+ except OSError:
+ raise OSError('condor_status not found in your path')
if retcode:
msg = 'There was a problem with condor_status: ' + stderr
raise RuntimeError(msg)
for line in stdout.splitlines():
line = line.strip().lower()
if line.startswith('total') and 'owner' not in line:
return int(line.split()[1]) * 2
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index 9b3d6c1..efd6a26 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,639 +1,643 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
This module is useful when we have a non-parallel command to run in a
multiprocessor computer or in a multinode cluster. It will take the input files,
it will split them and it will run a subjob for every one of the splits. It will
wait for the subjobs to finish and it will join the output files generated
by all subjobs. At the end of the process we will get the same output files as if
the command wasn't run in parallel.
To do this it requires the parameters used by popen: cmd, stdin, stdout, stderr and
some extra information: runner, splits and cmd_def.
runner is optional and it should be a subprocess.Popen like class. If it's not
given subprocess.Popen will be used. This will be the class used to run the
subjobs. In that case the subjobs will run in the processors of the local node.
splits is the number of subjobs that we want to generate. If it's not given the
runner will provide a suitable number.
cmd_def is a dict that defines how the cmd defines the input and output files.
We need to tell Popen which are the input and output files in order to split
them and join them.
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
import os, tempfile, shutil, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
from psubprocess import condor_runner
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
class NamedTemporaryDir(object):
'''This class creates temporary directories '''
#pylint: disable-msg=W0622
    #we redefine the built-in dir because tempfile uses that interface
def __init__(self, dir=None):
'''It initiates the class.'''
self._name = tempfile.mkdtemp(dir=dir)
def get_name(self):
        'Returns the path to the dir'
return self._name
name = property(get_name)
def close(self):
'''It removes the temp dir'''
if os.path.exists(self._name):
shutil.rmtree(self._name)
def __del__(self):
        '''It removes the temp dir when the instance is removed and the
        garbage collector decides it'''
self.close()
def NamedTemporaryFile(dir=None, delete=False, suffix=''):
'''It creates a temporary file that won't be deleted when close
This behaviour can be done with tempfile.NamedTemporaryFile in python > 2.6
'''
#pylint: disable-msg=W0613
#delete is not being used, it's there as a reminder, once we start to use
#python 2.6 this function should be removed
#pylint: disable-msg=C0103
#pylint: disable-msg=W0622
    #We want to mimic tempfile.NamedTemporaryFile
fpath = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
return open(fpath, 'w')
def copy_file_mode(fpath1, fpath2):
'It copies the os.stats mode from file1 to file2'
mode = os.stat(fpath1)[0]
os.chmod(fpath2, mode)
class Popen(object):
'''It paralellizes the given processes dividing them into subprocesses.
The interface is similar to subprocess.Popen to ease the use of this class,
    although the functionality of this class is much more limited.
When an instance of this class is created a series of subjobs is launched.
When all subjobs are finished returncode will have an int, if they're still
running returncode will be None.
    We can wait for all subjobs to finish using the wait method or we can
kill or terminate them using kill and terminate.
'''
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
        '''It inits a Popen instance, it creates and runs the subjobs.
Like the subprocess.Popen it accepts stdin, stdout, stderr, but in this
case all of them should be files, PIPE will not work.
In the cmd_def list we have to tell this Popen how to locate the
input and output files in the cmd and how to split and join them. Look
for the cmd_format in the streams.py file.
keyword arguments:
cmd -- a list with the cmd to parallelize
cmd_def -- the cmd definition list (default [])
runner -- which runner to use (default subprocess.Popen)
runner_conf -- extra parameters for the runner (default {})
stdout -- a fhand to store the stdout (default None)
stderr -- a fhand to store the stderr (default None)
stdin -- a fhand with the stdin (default None)
splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
if cmd_def is None:
if stdin is not None:
raise ValueError('No cmd_def given but stdin present')
cmd_def = []
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
copy_file_mode('.', self._work_dir.name)
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
#launch every subjobs
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
            #we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
        '''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
        '''Given a base cmd and a streams list it creates one modified cmd for
        every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
                #is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
                    #we use the fname with no path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream in the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
dir_ = NamedTemporaryDir(dir=work_dir)
work_dirs.append(dir_)
copy_file_mode('.', dir_.name)
#we have to do first the input files because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
+ def do_we_have_to_split(stream_index):
+ 'If the stream has to split a file it will return True'
+ split = None
+ stream = streams[stream_index]
+ #maybe they shouldn't be split
+ if 'special' in stream and 'no_split' in stream['special']:
+ split = False
+ #maybe there is no file to split
+ if (('fhand' in stream and stream['fhand'] is None) or
+ ('fname' in stream and stream['fname'] is None) or
+ ('fname' not in stream and 'fhand' not in stream)):
+ split = False
+ elif (('fhand' in stream and stream['fhand'] is not None) or
+ ('fname' in stream and stream['fname'] is not None)):
+ split = True
+ return split
def to_be_split_first(stream1, stream2):
'It sorts the streams, the ones to be split go first'
- split1 = None
- split2 = None
- for split, stream in ((split1, stream1), (split2, stream2)):
- #maybe they shouldn't be split
- if 'special' in stream and 'no_split' in stream['special']:
- split = False
- #maybe the have no file to split
- if (('fhand' in stream and stream['fhand'] is None) or
- ('fname' in stream and stream['fname'] is None) or
- ('fname' not in stream and 'fhand' not in stream)):
- split = False
- elif (('fhand' in stream and stream['fhand'] is not None) or
- ('fname' in stream and stream['fname'] is not None)):
- split = True
+ split1 = do_we_have_to_split(stream1)
+ split2 = do_we_have_to_split(stream2)
return int(split1) - int(split2)
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
if 'splitter' not in stream:
                msg = 'A splitter should be provided for every input stream,'
                msg += ' missing for: ' + str(stream)
raise ValueError(msg)
splitter = stream['splitter']
            #the splitter can be a re, in that case we create the function
if '__call__' not in dir(splitter):
splitter = _create_file_splitter_with_re(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
                #the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different than splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
        #we split the output stream files into several splits
for index in output_stream_indexes:
stream = streams[index]
            #for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = _output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every in stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
        'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
            #we need a function to join this stream
            if 'joiner' in stream:
                joiner = stream['joiner']
            else:
                joiner = default_cat_joiner
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
            joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
                #if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
popen.kill()
del self._jobs['popens']
def terminate(self):
        'It terminates all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
popen.terminate()
del self._jobs['popens']
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
    This is the best way to create the requested number of splits with
    sizes as similar as possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
def _items_in_file(fhand, expression_kind, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if ((expression_kind == 'str' and expression in line) or
(expression_kind != 'str' and expression.search(line))):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
def _create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or an str.
    A new item in the file starts every time a line matches the
    expression.
'''
expression_kind = None
if isinstance(expression, str):
expression_kind = 'str'
else:
expression_kind = 're'
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
        guaranteed to create as many splits as work dirs. If in the file there
        are fewer items than work_dirs some work_dirs will be left empty.
        It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
nitems = 0
for line in open(fname, 'r'):
if ((expression_kind == 'str' and expression in line) or
(expression_kind != 'str' and expression.search(line))):
nitems += 1
        #how many splits are we going to create? and how many items will be in
#every split
        #if there are more splits than items we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = _items_in_file(fhand, expression_kind, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
copy_file_mode(fhand.name, ofh.name)
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
if file_is_str:
new_files.append(ofh.name)
ofh.close()
else:
new_files.append(ofh)
splits_made += 1
return new_files
return splitter
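#typical use of the factory above (a sketch, splitting on fasta-like headers):
#  splitter = _create_file_splitter_with_re('>')
#  split_files = splitter('seqs.fasta', work_dirs)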
def _output_splitter(file_, work_dirs):
'''It creates one output file for every splits.
Every split will be located in one of the work_dirs.
It returns a list with the fpaths for the new output files.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
new_fpaths = []
#we have to create nsplits
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[split_index]
#we use delete=False because this temp file is in a temp dir that will
#be completely deleted. If we use delete=True we get an error because
#the file might be already deleted when its __del__ method is called
ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
delete=False)
#the file will be deleted
#what do we need the fname or the fhand?
if file_is_str:
#it will be deleted because we just need the name in the temporary
#directory. tempfile.mktemp would be better for this use, but it is
#deprecated
new_fpaths.append(ofh.name)
else:
new_fpaths.append(ofh)
return new_fpaths
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
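A minimal usage sketch of the parallel Popen above; the command is made up and the cmd_def keys are assumptions, since the real cmd_def syntax is documented in streams.py (not shown here):

    from psubprocess import Popen

    cmd = ['my_tool', 'seqs.fasta', 'out.txt']
    cmd_def = [{'options': 1, 'io': 'in', 'splitter': '>'},   #hypothetical keys
               {'options': 2, 'io': 'out'}]
    popen = Popen(cmd, cmd_def=cmd_def, splits=4)
    retcode = popen.wait()   #splits the input, runs the subjobs, joins out.txt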
diff --git a/scripts/run_in_parallel.py b/scripts/run_in_parallel.py
index 839f5f3..bfa2c51 100644
--- a/scripts/run_in_parallel.py
+++ b/scripts/run_in_parallel.py
@@ -1,108 +1,112 @@
'''
Created on 21/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from optparse import OptionParser
import os.path, sys, signal
from psubprocess import CondorPopen, Popen
POPEN = None
def parse_options():
'It parses the command line arguments'
parser = OptionParser('usage: %prog -c "command"')
parser.add_option('-n', '--nsplits', dest='splits',
help='number of subjobs to create')
parser.add_option('-r', '--runner', dest='runner', default='subprocess',
help='who should run the subjobs (subprocess or condor)')
parser.add_option('-c', '--command', dest='command',
help='The command to run')
parser.add_option('-o', '--stdout', dest='stdout',
help='A file to store the stdout')
parser.add_option('-e', '--stderr', dest='stderr',
help='A file to store the stderr')
parser.add_option('-i', '--stdin', dest='stdin',
help='A file to store the stdin')
parser.add_option('-d', '--cmd_def', dest='cmd_def',
help='The command line definition')
parser.add_option('-q', '--runner_req', dest='runner_req',
help='runner requirements')
return parser
def get_options():
'It returns a dict with the options'
parser = parse_options()
cmd_options = parser.parse_args()[0]
options = {}
if cmd_options.command is None:
raise parser.error('The command should be set')
else:
options['cmd'] = cmd_options.command.split()
if cmd_options.stdout is not None:
options['stdout'] = open(cmd_options.stdout, 'w')
if cmd_options.stderr is not None:
options['stderr'] = open(cmd_options.stderr, 'w')
if cmd_options.stdin is not None:
options['stdin'] = open(cmd_options.stdin)
if cmd_options.runner == 'subprocess':
options['runner'] = None
elif cmd_options.runner == 'condor':
runner_conf = {}
runner_conf['transfer_executable'] = False
if cmd_options.runner_req is not None:
runner_conf['requirements'] = cmd_options.runner_req
options['runner_conf'] = runner_conf
options['runner'] = CondorPopen
else:
parser.error('Allowable runners are: subprocess and condor')
if cmd_options.cmd_def is None:
options['cmd_def'] = []
else:
cmd_def = cmd_options.cmd_def
#it can be a file or an str
if os.path.exists(cmd_def):
cmd_def = open(cmd_def).read()
- options['cmd_def'] = eval(cmd_def)
+ cmd_def = eval(cmd_def)
+ if not isinstance(cmd_def, list):
+ msg = 'cmd_def should be a list of dicts, read the documentation'
+ parser.error(msg)
+ options['cmd_def'] = cmd_def
return options
-def kill_process():
+def kill_processes(signum=None, frame=None): #signal handlers get (signum, frame)
'It kills the ongoing process'
if POPEN is not None:
POPEN.kill()
sys.exit(-1)
def set_signal_handlers():
    'It sets handlers for the SIGTERM, SIGABRT and SIGINT signals'
- signal.signal(signal.SIGTERM, kill_process)
- signal.signal(signal.SIGABRT, kill_process)
- signal.signal(signal.SIGINT, kill_process)
+ signal.signal(signal.SIGTERM, kill_processes)
+ signal.signal(signal.SIGABRT, kill_processes)
+ signal.signal(signal.SIGINT, kill_processes)
def main():
'It runs a command in parallel'
set_signal_handlers()
options = get_options()
global POPEN
POPEN = Popen(**options)
sys.exit(POPEN.wait())
if __name__ == '__main__':
main()
\ No newline at end of file
diff --git a/scripts/run_with_condor.py b/scripts/run_with_condor.py
index 1d7209f..c1d70bc 100644
--- a/scripts/run_with_condor.py
+++ b/scripts/run_with_condor.py
@@ -1,102 +1,106 @@
'''
Created on 21/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from optparse import OptionParser
import os.path, sys, signal
from psubprocess import CondorPopen
POPEN = None
def parse_options():
'It parses the command line arguments'
parser = OptionParser('usage: %prog -c "command"')
parser.add_option('-c', '--command', dest='command',
help='The command to run')
parser.add_option('-o', '--stdout', dest='stdout',
help='A file to store the stdout')
parser.add_option('-e', '--stderr', dest='stderr',
help='A file to store the stderr')
parser.add_option('-i', '--stdin', dest='stdin',
help='A file to store the stdin')
parser.add_option('-d', '--cmd_def', dest='cmd_def',
help='The command line definition')
parser.add_option('-l', '--log', dest='condor_log',
help='The log file')
parser.add_option('-q', '--condor_req', dest='runner_req',
                      help='condor requirements for the job')
return parser
def get_options():
'It returns a dict with the options'
parser = parse_options()
cmd_options = parser.parse_args()[0]
options = {}
if cmd_options.command is None:
        parser.error('The command should be set') #error() exits by itself
else:
options['cmd'] = cmd_options.command.split()
if cmd_options.stdout is not None:
options['stdout'] = open(cmd_options.stdout, 'w')
if cmd_options.stderr is not None:
options['stderr'] = open(cmd_options.stderr, 'w')
if cmd_options.stdin is not None:
options['stdin'] = open(cmd_options.stdin)
if cmd_options.cmd_def is None:
options['cmd_def'] = []
else:
cmd_def = cmd_options.cmd_def
#it can be a file or an str
if os.path.exists(cmd_def):
cmd_def = open(cmd_def).read()
- options['cmd_def'] = eval(cmd_def)
+ cmd_def = eval(cmd_def)
+ if not isinstance(cmd_def, list):
+ msg = 'cmd_def should be a list of dicts, read the documentation'
+ parser.error(msg)
+ options['cmd_def'] = cmd_def
runner_conf = {}
if cmd_options.condor_log is not None:
condor_log = open(cmd_options.condor_log, 'w')
runner_conf['condor_log'] = condor_log
runner_conf['transfer_executable'] = False
options['runner_conf'] = runner_conf
return options
-def kill_process():
+def kill_processes(signum=None, frame=None): #signal handlers get (signum, frame)
'It kills the ongoing process'
if POPEN is not None:
POPEN.kill()
sys.exit(-1)
def set_signal_handlers():
    'It sets handlers for the SIGTERM, SIGABRT and SIGINT signals'
- signal.signal(signal.SIGTERM, kill_process)
- signal.signal(signal.SIGABRT, kill_process)
- signal.signal(signal.SIGINT, kill_process)
+ signal.signal(signal.SIGTERM, kill_processes)
+ signal.signal(signal.SIGABRT, kill_processes)
+ signal.signal(signal.SIGINT, kill_processes)
def main():
'It runs a command in a condor cluster'
set_signal_handlers()
options = get_options()
global POPEN
POPEN = CondorPopen(**options)
sys.exit(POPEN.wait())
if __name__ == '__main__':
main()
\ No newline at end of file
diff --git a/test/prunner_test.py b/test/prunner_test.py
index f8cba0d..e81a866 100644
--- a/test/prunner_test.py
+++ b/test/prunner_test.py
@@ -1,191 +1,226 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
import os
from psubprocess import Popen
from psubprocess.streams import STDIN
from test_utils import create_test_binary
class PRunnerTest(unittest.TestCase):
    'It tests that we can parallelize processes'
@staticmethod
def test_file_in():
'It tests the most basic behaviour'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
in_file.close()
os.remove(bin)
@staticmethod
def test_job_no_in_stream():
        'It tests that a job with no input stream is run splits times'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola' * splits
assert open(stderr.name).read() == 'caracola' * splits
os.remove(bin)
@staticmethod
def test_stdin():
        'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_infile_outfile():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_retcode():
'It tests that we get the correct returncode'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-r', '20'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
        assert popen.wait() == 20 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def xtest_infile_outfile_condor():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
from psubprocess import CondorPopen
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
runner=CondorPopen,
runner_conf={'transfer_executable':True})
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_stdin_real_splitter():
        'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = '>hola1\nhola2\n>hola3\nhola4\n>hola5\nhola6\n>hola7\nhola8\n'
content += '>hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':'>'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
+ @staticmethod
+ def test_2_infile_outfile():
+ 'It tests that we can set 2 input files and an output file'
+ bin = create_test_binary()
+ #with infile
+ content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
+ content += 'hola9\nhola10|n'
+ in_file1 = NamedTemporaryFile()
+ in_file1.write(content)
+ in_file1.flush()
+ in_file2 = NamedTemporaryFile()
+ in_file2.write(content)
+ in_file2.flush()
+ out_file1 = NamedTemporaryFile()
+ out_file2 = NamedTemporaryFile()
+
+ cmd = [bin]
+ cmd.extend(['-i', in_file1.name, '-t', out_file1.name])
+ cmd.extend(['-x', in_file2.name, '-z', out_file2.name])
+ stdout = NamedTemporaryFile()
+ stderr = NamedTemporaryFile()
+ cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
+ {'options': ('-x', '--input'), 'io': 'in', 'splitter':''},
+ {'options': ('-t', '--output'), 'io': 'out'},
+ {'options': ('-z', '--output'), 'io': 'out'}]
+ popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
+        assert popen.wait() == 0 #waits till it finishes and checks the retcode
+ assert not open(stdout.name).read()
+ assert not open(stderr.name).read()
+ assert open(out_file1.name).read() == content
+ assert open(out_file2.name).read() == content
+ in_file1.close()
+ in_file2.close()
+ os.remove(bin)
+
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
\ No newline at end of file
diff --git a/test/test_utils.py b/test/test_utils.py
index f0f87d2..23c02e0 100644
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -1,77 +1,82 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from tempfile import NamedTemporaryFile
import os, stat, shutil
TEST_BINARY = '''#!/usr/bin/env python
import sys, shutil, os
args = sys.argv
#-o something send something to stdout
#-e something send something to stderr
#-i some_file send the file content to sdout
#-t some_file copy the -i file to -t file
+#-x some_file second input file, used together with -z
+#-z some_file copy the -x file to the -z file
#-s and stdin write stdin to stout
#-r a number return this retcode
#are the commands in the argv?
arg_indexes = {}
-for param in ('-o', '-e', '-i', '-t', '-s', '-r'):
+for param in ('-o', '-e', '-i', '-t', '-s', '-r', '-x', '-z'):
try:
arg_indexes[param] = args.index(param)
except ValueError:
arg_indexes[param] = None
#stdout, stderr
if arg_indexes['-o']:
sys.stdout.write(args[arg_indexes['-o'] + 1])
if arg_indexes['-e']:
sys.stderr.write(args[arg_indexes['-e'] + 1])
#-i -t
if arg_indexes['-i'] and not arg_indexes['-t']:
sys.stdout.write(open(args[arg_indexes['-i'] + 1]).read())
elif arg_indexes['-i'] and arg_indexes['-t']:
shutil.copy(args[arg_indexes['-i'] + 1], args[arg_indexes['-t'] + 1])
+
+if arg_indexes['-x'] and arg_indexes['-z']:
+ shutil.copy(args[arg_indexes['-x'] + 1], args[arg_indexes['-z'] + 1])
#stdin
if arg_indexes['-s']:
stdin = sys.stdin.read()
sys.stdout.write(stdin)
#retcode
if arg_indexes['-r']:
retcode = int(args[arg_indexes['-r'] + 1])
else:
retcode = 0
sys.exit(retcode)
'''
def create_test_binary():
'It creates a file with a test python binary in it'
fhand = NamedTemporaryFile(suffix='.py')
fhand.write(TEST_BINARY)
fhand.flush()
os.chmod(fhand.name, stat.S_IXOTH | stat.S_IRWXU)
fname = '/tmp/test_cmd.py'
shutil.copy(fhand.name, fname)
fhand.close()
#it should be executable
return fname
|
JoseBlanca/psubprocess
|
23767565b5d0a8ad955a32c80f0efe05d5276ceb
|
Now the job requirement options can be set for condor
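A minimal sketch of the new option from the Python side (the requirements
expression 'Memory > 1024' is hypothetical, any valid condor expression
would do):

    from psubprocess import Popen, CondorPopen
    popen = Popen(['ls'], cmd_def=[], runner=CondorPopen,
                  runner_conf={'transfer_executable': False,
                               'requirements': 'Memory > 1024'})
    popen.wait()

On the command line the same thing is exposed through run_in_parallel.py's
new -q/--runner_req option.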
|
diff --git a/psubprocess/condor_runner.py b/psubprocess/condor_runner.py
index 3e050c3..5baebeb 100644
--- a/psubprocess/condor_runner.py
+++ b/psubprocess/condor_runner.py
@@ -1,278 +1,284 @@
'''It launches processes using Condor with an interface similar to Popen
Created on 14/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from tempfile import NamedTemporaryFile
import subprocess, signal, os.path
from psubprocess.streams import get_streams_from_cmd
def call(cmd, env=None, stdin=None):
'It calls a command and it returns stdout, stderr and retcode'
def subprocess_setup():
''' Python installs a SIGPIPE handler by default. This is usually not
what non-Python subprocesses expect. Taken from this url:
http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009/07/02#
2009-07-02-python-sigpipe'''
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
if stdin is None:
pstdin = None
else:
pstdin = subprocess.PIPE
process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, env=env, stdin=pstdin,
preexec_fn=subprocess_setup)
if stdin is None:
stdout, stderr = process.communicate()
else:
stdout, stderr = process.communicate(stdin)
retcode = process.returncode
return stdout, stderr, retcode
def write_condor_job_file(fhand, parameters):
'It writes a condor job file using the given fhand'
to_print = 'Executable = %s\nArguments = "%s"\nUniverse = vanilla\n' % \
(parameters['executable'], parameters['arguments'])
fhand.write(to_print)
to_print = 'Log = %s\n' % parameters['log_file'].name
fhand.write(to_print)
if parameters['transfer_files']:
to_print = 'When_to_transfer_output = ON_EXIT\n'
fhand.write(to_print)
to_print = 'Getenv = True\n'
fhand.write(to_print)
if ('transfer_executable' in parameters and
parameters['transfer_executable']):
to_print = 'Transfer_executable = %s\n' % \
parameters['transfer_executable']
fhand.write(to_print)
if 'input_fnames' in parameters and parameters['input_fnames']:
ins = ','.join(parameters['input_fnames'])
to_print = 'Transfer_input_files = %s\n' % ins
fhand.write(to_print)
if parameters['transfer_files']:
to_print = 'Should_transfer_files = IF_NEEDED\n'
fhand.write(to_print)
+ if 'requirements' in parameters:
+        to_print = 'Requirements = %s\n' % parameters['requirements']
+ fhand.write(to_print)
if 'stdout' in parameters:
to_print = 'Output = %s\n' % parameters['stdout'].name
fhand.write(to_print)
if 'stderr' in parameters:
to_print = 'Error = %s\n' % parameters['stderr'].name
fhand.write(to_print)
if 'stdin' in parameters:
to_print = 'Input = %s\n' % parameters['stdin'].name
fhand.write(to_print)
to_print = 'Queue\n'
fhand.write(to_print)
fhand.flush()
class Popen(object):
'It launches and controls a condor job'
def __init__(self, cmd, cmd_def=None, runner_conf=None,
- stdout=None, stderr=None, stdin=None, condor_log=None):
+ stdout=None, stderr=None, stdin=None):
'It launches a condor job'
if cmd_def is None:
cmd_def = []
#runner conf
if runner_conf is None:
runner_conf = {}
#some defaults
if 'transfer_files' not in runner_conf:
runner_conf['transfer_files'] = True
- if condor_log is None:
+ if 'condor_log' not in runner_conf:
self._log_file = NamedTemporaryFile(suffix='.log')
else:
- self._log_file = condor_log
+ self._log_file = runner_conf['condor_log']
#create condor job file
condor_job_file = self._create_condor_job_file(cmd, cmd_def,
self._log_file,
runner_conf,
stdout, stderr, stdin)
self._condor_job_file = condor_job_file
#launch condor
self._retcode = None
self._cluster_number = None
self._launch_condor(condor_job_file)
def _launch_condor(self, condor_job_file):
'Given the condor_job_file it launches the condor job'
stdout, stderr, retcode = call(['condor_submit', condor_job_file.name])
if retcode:
msg = 'There was a problem with condor_submit: ' + stderr
raise RuntimeError(msg)
#the condor cluster number is given by condor_submit
#1 job(s) submitted to cluster 15.
for line in stdout.splitlines():
if 'submitted to cluster' in line:
self._cluster_number = line.strip().strip('.').split()[-1]
def _get_pid(self):
'It returns the condor cluster number'
return self._cluster_number
pid = property(_get_pid)
def _get_returncode(self):
'It returns the return code'
return self._retcode
returncode = property(_get_returncode)
@staticmethod
def _remove_paths_from_cmd(cmd, streams, conf):
'''It removes the absolute and relative paths from the cmd,
it returns the modified cmd'''
cmd_mod = cmd[:]
for stream in streams:
if 'fname' not in stream:
continue
fpath = stream['fname']
#for the output files we can't deal with transfering files with
#paths. Condor will deliver those files into the initialdir, not
#where we expected.
if (stream['io'] != 'in' and conf['transfer_files']
and os.path.split(fpath)[-1] != fpath):
msg = 'output files with paths are not transferable'
raise ValueError(msg)
index = cmd_mod.index(fpath)
fpath = os.path.split(fpath)[-1]
cmd_mod[index] = fpath
return cmd_mod
def _create_condor_job_file(self, cmd, cmd_def, log_file, runner_conf,
stdout, stderr, stdin):
'Given a cmd and the cmd_def it returns the condor job file'
#streams
streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
#we need some parameters to write the condor file
parameters = {}
#the executable
binary = cmd[0]
#the binary should be an absolute path
if not os.path.isabs(binary):
#the path to the binary could be relative
if os.sep in binary:
#we make the path absolute
binary = os.path.abspath(binary)
else:
#we have to look in the system $PATH
binary = call(['which', binary])[0].strip()
parameters['executable'] = binary
parameters['log_file'] = log_file
#the cmd shouldn't have absolute path in the files because they will be
#transfered to another node in the condor working dir and they wouldn't
#be found with an absolute path
cmd_no_path = self._remove_paths_from_cmd(cmd, streams, runner_conf)
parameters['arguments'] = ' '.join(cmd_no_path[1:])
if stdout is not None:
parameters['stdout'] = stdout
if stderr is not None:
parameters['stderr'] = stderr
if stdin is not None:
parameters['stdin'] = stdin
transfer_bin = False
if 'transfer_executable' in runner_conf:
transfer_bin = runner_conf['transfer_executable']
parameters['transfer_executable'] = transfer_bin
        transfer_files = runner_conf['transfer_files']
        parameters['transfer_files'] = transfer_files
+ if 'requirements' in runner_conf:
+ parameters['requirements'] = runner_conf['requirements']
+
in_fnames = []
for stream in streams:
if stream['io'] == 'in':
fname = None
if 'fname' in stream:
fname = stream['fname']
else:
fname = stream['fhand'].name
in_fnames.append(fname)
parameters['input_fnames'] = in_fnames
#now we can create the job file
condor_job_file = NamedTemporaryFile()
write_condor_job_file(condor_job_file, parameters=parameters)
return condor_job_file
def _update_retcode(self):
        'It updates the retcode looking at the log file and returns it'
for line in open(self._log_file.name):
if 'return value' in line:
ret = line.split('return value')[1].strip().strip(')')
self._retcode = int(ret)
return self._retcode
def poll(self):
        'It checks if condor has run our condor cluster'
cluster_number = self._cluster_number
cmd = ['condor_q', cluster_number,
'-format', '"%d.\n"', 'ClusterId']
stdout, stderr, retcode = call(cmd)
if retcode:
msg = 'There was a problem with condor_q: ' + stderr
raise RuntimeError(msg)
if cluster_number not in stdout:
#the job is finished
return self._update_retcode()
return self._retcode
def wait(self):
'It waits until the condor job is finished'
stderr, retcode = call(['condor_wait', self._log_file.name])[1:]
if retcode:
msg = 'There was a problem with condor_wait: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def kill(self):
'It runs condor_rm for the condor job'
stderr, retcode = call(['condor_rm', self.pid])[1:]
if retcode:
msg = 'There was a problem with condor_rm: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def terminate(self):
'It runs condor_rm for the condor job'
self.kill()
def get_default_splits():
'It returns a suggested number of splits for this Popen runner'
stdout, stderr, retcode = call(['condor_status', '-total'])
if retcode:
msg = 'There was a problem with condor_status: ' + stderr
raise RuntimeError(msg)
for line in stdout.splitlines():
line = line.strip().lower()
if line.startswith('total') and 'owner' not in line:
return int(line.split()[1]) * 2
diff --git a/scripts/run_in_parallel.py b/scripts/run_in_parallel.py
index 04536dd..839f5f3 100644
--- a/scripts/run_in_parallel.py
+++ b/scripts/run_in_parallel.py
@@ -1,102 +1,108 @@
'''
Created on 21/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from optparse import OptionParser
import os.path, sys, signal
from psubprocess import CondorPopen, Popen
POPEN = None
def parse_options():
'It parses the command line arguments'
parser = OptionParser('usage: %prog -c "command"')
parser.add_option('-n', '--nsplits', dest='splits',
help='number of subjobs to create')
parser.add_option('-r', '--runner', dest='runner', default='subprocess',
help='who should run the subjobs (subprocess or condor)')
parser.add_option('-c', '--command', dest='command',
help='The command to run')
parser.add_option('-o', '--stdout', dest='stdout',
help='A file to store the stdout')
parser.add_option('-e', '--stderr', dest='stderr',
help='A file to store the stderr')
parser.add_option('-i', '--stdin', dest='stdin',
help='A file to store the stdin')
parser.add_option('-d', '--cmd_def', dest='cmd_def',
help='The command line definition')
-
+ parser.add_option('-q', '--runner_req', dest='runner_req',
+ help='runner requirements')
return parser
def get_options():
'It returns a dict with the options'
parser = parse_options()
cmd_options = parser.parse_args()[0]
options = {}
if cmd_options.command is None:
        parser.error('The command should be set') #error() exits by itself
else:
options['cmd'] = cmd_options.command.split()
if cmd_options.stdout is not None:
options['stdout'] = open(cmd_options.stdout, 'w')
if cmd_options.stderr is not None:
options['stderr'] = open(cmd_options.stderr, 'w')
if cmd_options.stdin is not None:
options['stdin'] = open(cmd_options.stdin)
if cmd_options.runner == 'subprocess':
options['runner'] = None
elif cmd_options.runner == 'condor':
+ runner_conf = {}
+ runner_conf['transfer_executable'] = False
+ if cmd_options.runner_req is not None:
+ runner_conf['requirements'] = cmd_options.runner_req
+ options['runner_conf'] = runner_conf
options['runner'] = CondorPopen
- options['runner_conf'] = {'transfer_executable':False}
else:
parser.error('Allowable runners are: subprocess and condor')
if cmd_options.cmd_def is None:
options['cmd_def'] = []
else:
cmd_def = cmd_options.cmd_def
#it can be a file or an str
if os.path.exists(cmd_def):
cmd_def = open(cmd_def).read()
options['cmd_def'] = eval(cmd_def)
+
return options
def kill_process():
'It kills the ongoing process'
if POPEN is not None:
POPEN.kill()
sys.exit(-1)
def set_signal_handlers():
    'It sets handlers for the SIGTERM, SIGABRT and SIGINT signals'
signal.signal(signal.SIGTERM, kill_process)
signal.signal(signal.SIGABRT, kill_process)
signal.signal(signal.SIGINT, kill_process)
def main():
'It runs a command in parallel'
set_signal_handlers()
options = get_options()
global POPEN
POPEN = Popen(**options)
sys.exit(POPEN.wait())
if __name__ == '__main__':
main()
\ No newline at end of file
diff --git a/scripts/run_with_condor.py b/scripts/run_with_condor.py
index 9cf3267..1d7209f 100644
--- a/scripts/run_with_condor.py
+++ b/scripts/run_with_condor.py
@@ -1,97 +1,102 @@
'''
Created on 21/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from optparse import OptionParser
import os.path, sys, signal
from psubprocess import CondorPopen
POPEN = None
def parse_options():
'It parses the command line arguments'
parser = OptionParser('usage: %prog -c "command"')
parser.add_option('-c', '--command', dest='command',
help='The command to run')
parser.add_option('-o', '--stdout', dest='stdout',
help='A file to store the stdout')
parser.add_option('-e', '--stderr', dest='stderr',
help='A file to store the stderr')
parser.add_option('-i', '--stdin', dest='stdin',
help='A file to store the stdin')
parser.add_option('-d', '--cmd_def', dest='cmd_def',
help='The command line definition')
- parser.add_option('-l', '--condor_log', dest='condor_log',
- help='The condor log file')
+ parser.add_option('-l', '--log', dest='condor_log',
+ help='The log file')
+ parser.add_option('-q', '--condor_req', dest='runner_req',
+                      help='condor requirements for the job')
return parser
def get_options():
'It returns a dict with the options'
parser = parse_options()
cmd_options = parser.parse_args()[0]
options = {}
if cmd_options.command is None:
        parser.error('The command should be set') #error() exits by itself
else:
options['cmd'] = cmd_options.command.split()
if cmd_options.stdout is not None:
options['stdout'] = open(cmd_options.stdout, 'w')
if cmd_options.stderr is not None:
options['stderr'] = open(cmd_options.stderr, 'w')
if cmd_options.stdin is not None:
options['stdin'] = open(cmd_options.stdin)
- options['runner_conf'] = {'transfer_executable':False}
if cmd_options.cmd_def is None:
options['cmd_def'] = []
else:
cmd_def = cmd_options.cmd_def
#it can be a file or an str
if os.path.exists(cmd_def):
cmd_def = open(cmd_def).read()
options['cmd_def'] = eval(cmd_def)
+ runner_conf = {}
if cmd_options.condor_log is not None:
- options['condor_log'] = open(cmd_options.condor_log, 'w')
+ condor_log = open(cmd_options.condor_log, 'w')
+ runner_conf['condor_log'] = condor_log
+ runner_conf['transfer_executable'] = False
+ options['runner_conf'] = runner_conf
return options
def kill_process():
'It kills the ongoing process'
if POPEN is not None:
POPEN.kill()
sys.exit(-1)
def set_signal_handlers():
    'It sets handlers for the SIGTERM, SIGABRT and SIGINT signals'
signal.signal(signal.SIGTERM, kill_process)
signal.signal(signal.SIGABRT, kill_process)
signal.signal(signal.SIGINT, kill_process)
def main():
'It runs a command in a condor cluster'
set_signal_handlers()
options = get_options()
global POPEN
POPEN = CondorPopen(**options)
sys.exit(POPEN.wait())
if __name__ == '__main__':
main()
\ No newline at end of file
|
JoseBlanca/psubprocess
|
5b7d624ddf056b08bc33101a09af9230a2b18c33
|
The condor_log file can now be set in run_with_condor
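A minimal sketch of the new keyword (the log file name is hypothetical):

    from psubprocess import CondorPopen
    log_fhand = open('condor_job.log', 'w')
    popen = CondorPopen(['ls'], cmd_def=[], condor_log=log_fhand)
    popen.wait()

On the command line the same thing is exposed through run_with_condor.py's
new -l/--condor_log option.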
|
diff --git a/psubprocess/condor_runner.py b/psubprocess/condor_runner.py
index dc2d65a..3e050c3 100644
--- a/psubprocess/condor_runner.py
+++ b/psubprocess/condor_runner.py
@@ -1,275 +1,278 @@
'''It launches processes using Condor with an interface similar to Popen
Created on 14/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from tempfile import NamedTemporaryFile
import subprocess, signal, os.path
from psubprocess.streams import get_streams_from_cmd
def call(cmd, env=None, stdin=None):
'It calls a command and it returns stdout, stderr and retcode'
def subprocess_setup():
''' Python installs a SIGPIPE handler by default. This is usually not
what non-Python subprocesses expect. Taken from this url:
http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009/07/02#
2009-07-02-python-sigpipe'''
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
if stdin is None:
pstdin = None
else:
pstdin = subprocess.PIPE
process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, env=env, stdin=pstdin,
preexec_fn=subprocess_setup)
if stdin is None:
stdout, stderr = process.communicate()
else:
stdout, stderr = process.communicate(stdin)
retcode = process.returncode
return stdout, stderr, retcode
def write_condor_job_file(fhand, parameters):
'It writes a condor job file using the given fhand'
to_print = 'Executable = %s\nArguments = "%s"\nUniverse = vanilla\n' % \
(parameters['executable'], parameters['arguments'])
fhand.write(to_print)
to_print = 'Log = %s\n' % parameters['log_file'].name
fhand.write(to_print)
if parameters['transfer_files']:
to_print = 'When_to_transfer_output = ON_EXIT\n'
fhand.write(to_print)
to_print = 'Getenv = True\n'
fhand.write(to_print)
if ('transfer_executable' in parameters and
parameters['transfer_executable']):
to_print = 'Transfer_executable = %s\n' % \
parameters['transfer_executable']
fhand.write(to_print)
if 'input_fnames' in parameters and parameters['input_fnames']:
ins = ','.join(parameters['input_fnames'])
to_print = 'Transfer_input_files = %s\n' % ins
fhand.write(to_print)
if parameters['transfer_files']:
to_print = 'Should_transfer_files = IF_NEEDED\n'
fhand.write(to_print)
if 'stdout' in parameters:
to_print = 'Output = %s\n' % parameters['stdout'].name
fhand.write(to_print)
if 'stderr' in parameters:
to_print = 'Error = %s\n' % parameters['stderr'].name
fhand.write(to_print)
if 'stdin' in parameters:
to_print = 'Input = %s\n' % parameters['stdin'].name
fhand.write(to_print)
to_print = 'Queue\n'
fhand.write(to_print)
fhand.flush()
class Popen(object):
'It launches and controls a condor job'
def __init__(self, cmd, cmd_def=None, runner_conf=None,
- stdout=None, stderr=None, stdin=None):
+ stdout=None, stderr=None, stdin=None, condor_log=None):
'It launches a condor job'
if cmd_def is None:
cmd_def = []
#runner conf
if runner_conf is None:
runner_conf = {}
#some defaults
if 'transfer_files' not in runner_conf:
runner_conf['transfer_files'] = True
- self._log_file = NamedTemporaryFile(suffix='.log')
+ if condor_log is None:
+ self._log_file = NamedTemporaryFile(suffix='.log')
+ else:
+ self._log_file = condor_log
#create condor job file
condor_job_file = self._create_condor_job_file(cmd, cmd_def,
self._log_file,
runner_conf,
stdout, stderr, stdin)
self._condor_job_file = condor_job_file
#launch condor
self._retcode = None
self._cluster_number = None
self._launch_condor(condor_job_file)
def _launch_condor(self, condor_job_file):
'Given the condor_job_file it launches the condor job'
stdout, stderr, retcode = call(['condor_submit', condor_job_file.name])
if retcode:
msg = 'There was a problem with condor_submit: ' + stderr
raise RuntimeError(msg)
#the condor cluster number is given by condor_submit
#1 job(s) submitted to cluster 15.
for line in stdout.splitlines():
if 'submitted to cluster' in line:
self._cluster_number = line.strip().strip('.').split()[-1]
def _get_pid(self):
'It returns the condor cluster number'
return self._cluster_number
pid = property(_get_pid)
def _get_returncode(self):
'It returns the return code'
return self._retcode
returncode = property(_get_returncode)
@staticmethod
def _remove_paths_from_cmd(cmd, streams, conf):
'''It removes the absolute and relative paths from the cmd,
it returns the modified cmd'''
cmd_mod = cmd[:]
for stream in streams:
if 'fname' not in stream:
continue
fpath = stream['fname']
#for the output files we can't deal with transfering files with
#paths. Condor will deliver those files into the initialdir, not
#where we expected.
if (stream['io'] != 'in' and conf['transfer_files']
and os.path.split(fpath)[-1] != fpath):
msg = 'output files with paths are not transferable'
raise ValueError(msg)
index = cmd_mod.index(fpath)
fpath = os.path.split(fpath)[-1]
cmd_mod[index] = fpath
return cmd_mod
def _create_condor_job_file(self, cmd, cmd_def, log_file, runner_conf,
stdout, stderr, stdin):
'Given a cmd and the cmd_def it returns the condor job file'
#streams
streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
#we need some parameters to write the condor file
parameters = {}
#the executable
binary = cmd[0]
#the binary should be an absolute path
if not os.path.isabs(binary):
#the path to the binary could be relative
if os.sep in binary:
#we make the path absolute
binary = os.path.abspath(binary)
else:
#we have to look in the system $PATH
binary = call(['which', binary])[0].strip()
parameters['executable'] = binary
parameters['log_file'] = log_file
#the cmd shouldn't have absolute path in the files because they will be
#transfered to another node in the condor working dir and they wouldn't
#be found with an absolute path
cmd_no_path = self._remove_paths_from_cmd(cmd, streams, runner_conf)
parameters['arguments'] = ' '.join(cmd_no_path[1:])
if stdout is not None:
parameters['stdout'] = stdout
if stderr is not None:
parameters['stderr'] = stderr
if stdin is not None:
parameters['stdin'] = stdin
transfer_bin = False
if 'transfer_executable' in runner_conf:
transfer_bin = runner_conf['transfer_executable']
parameters['transfer_executable'] = transfer_bin
        transfer_files = runner_conf['transfer_files']
        parameters['transfer_files'] = transfer_files
in_fnames = []
for stream in streams:
if stream['io'] == 'in':
fname = None
if 'fname' in stream:
fname = stream['fname']
else:
fname = stream['fhand'].name
in_fnames.append(fname)
parameters['input_fnames'] = in_fnames
#now we can create the job file
condor_job_file = NamedTemporaryFile()
write_condor_job_file(condor_job_file, parameters=parameters)
return condor_job_file
def _update_retcode(self):
        'It updates the retcode looking at the log file and returns it'
for line in open(self._log_file.name):
if 'return value' in line:
ret = line.split('return value')[1].strip().strip(')')
self._retcode = int(ret)
return self._retcode
def poll(self):
        'It checks if condor has run our condor cluster'
cluster_number = self._cluster_number
cmd = ['condor_q', cluster_number,
'-format', '"%d.\n"', 'ClusterId']
stdout, stderr, retcode = call(cmd)
if retcode:
msg = 'There was a problem with condor_q: ' + stderr
raise RuntimeError(msg)
if cluster_number not in stdout:
#the job is finished
return self._update_retcode()
return self._retcode
def wait(self):
'It waits until the condor job is finished'
stderr, retcode = call(['condor_wait', self._log_file.name])[1:]
if retcode:
msg = 'There was a problem with condor_wait: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def kill(self):
'It runs condor_rm for the condor job'
stderr, retcode = call(['condor_rm', self.pid])[1:]
if retcode:
msg = 'There was a problem with condor_rm: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def terminate(self):
'It runs condor_rm for the condor job'
self.kill()
def get_default_splits():
'It returns a suggested number of splits for this Popen runner'
stdout, stderr, retcode = call(['condor_status', '-total'])
if retcode:
msg = 'There was a problem with condor_status: ' + stderr
raise RuntimeError(msg)
for line in stdout.splitlines():
line = line.strip().lower()
if line.startswith('total') and 'owner' not in line:
return int(line.split()[1]) * 2
diff --git a/scripts/run_with_condor.py b/scripts/run_with_condor.py
index bd0127f..9cf3267 100644
--- a/scripts/run_with_condor.py
+++ b/scripts/run_with_condor.py
@@ -1,91 +1,97 @@
'''
Created on 21/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from optparse import OptionParser
import os.path, sys, signal
from psubprocess import CondorPopen
POPEN = None
def parse_options():
'It parses the command line arguments'
parser = OptionParser('usage: %prog -c "command"')
parser.add_option('-c', '--command', dest='command',
help='The command to run')
parser.add_option('-o', '--stdout', dest='stdout',
help='A file to store the stdout')
parser.add_option('-e', '--stderr', dest='stderr',
help='A file to store the stderr')
parser.add_option('-i', '--stdin', dest='stdin',
help='A file to store the stdin')
parser.add_option('-d', '--cmd_def', dest='cmd_def',
help='The command line definition')
+ parser.add_option('-l', '--condor_log', dest='condor_log',
+ help='The condor log file')
return parser
def get_options():
'It returns a dict with the options'
parser = parse_options()
cmd_options = parser.parse_args()[0]
options = {}
if cmd_options.command is None:
        parser.error('The command should be set') #error() exits by itself
else:
options['cmd'] = cmd_options.command.split()
if cmd_options.stdout is not None:
options['stdout'] = open(cmd_options.stdout, 'w')
if cmd_options.stderr is not None:
options['stderr'] = open(cmd_options.stderr, 'w')
if cmd_options.stdin is not None:
options['stdin'] = open(cmd_options.stdin)
options['runner_conf'] = {'transfer_executable':False}
if cmd_options.cmd_def is None:
options['cmd_def'] = []
else:
cmd_def = cmd_options.cmd_def
#it can be a file or an str
if os.path.exists(cmd_def):
cmd_def = open(cmd_def).read()
options['cmd_def'] = eval(cmd_def)
+
+ if cmd_options.condor_log is not None:
+ options['condor_log'] = open(cmd_options.condor_log, 'w')
+
return options
def kill_process():
'It kills the ongoing process'
if POPEN is not None:
POPEN.kill()
sys.exit(-1)
def set_signal_handlers():
    'It sets handlers for the SIGTERM, SIGABRT and SIGINT signals'
signal.signal(signal.SIGTERM, kill_process)
signal.signal(signal.SIGABRT, kill_process)
signal.signal(signal.SIGINT, kill_process)
def main():
'It runs a command in a condor cluster'
set_signal_handlers()
options = get_options()
global POPEN
POPEN = CondorPopen(**options)
sys.exit(POPEN.wait())
if __name__ == '__main__':
main()
\ No newline at end of file
|
JoseBlanca/psubprocess
|
3b22ca66e59001ddb5ab3dfae73da51d42bdd35e
|
In condor the binary should have an absolute path. That is now fixed.
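The lookup that now fills the Executable entry, as a standalone sketch:

    import os.path
    from psubprocess.condor_runner import call
    binary = 'ls'   #this would be cmd[0]
    if not os.path.isabs(binary):
        if os.sep in binary:
            #the path is relative, we make it absolute
            binary = os.path.abspath(binary)
        else:
            #a bare name, we look it up in the system $PATH
            binary = call(['which', binary])[0].strip()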
|
diff --git a/psubprocess/condor_runner.py b/psubprocess/condor_runner.py
index efdd4d5..dc2d65a 100644
--- a/psubprocess/condor_runner.py
+++ b/psubprocess/condor_runner.py
@@ -1,260 +1,275 @@
'''It launches processes using Condor with an interface similar to Popen
Created on 14/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from tempfile import NamedTemporaryFile
import subprocess, signal, os.path
from psubprocess.streams import get_streams_from_cmd
def call(cmd, env=None, stdin=None):
'It calls a command and it returns stdout, stderr and retcode'
def subprocess_setup():
''' Python installs a SIGPIPE handler by default. This is usually not
what non-Python subprocesses expect. Taken from this url:
http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009/07/02#
2009-07-02-python-sigpipe'''
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
if stdin is None:
pstdin = None
else:
pstdin = subprocess.PIPE
process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, env=env, stdin=pstdin,
preexec_fn=subprocess_setup)
if stdin is None:
stdout, stderr = process.communicate()
else:
stdout, stderr = process.communicate(stdin)
retcode = process.returncode
return stdout, stderr, retcode
def write_condor_job_file(fhand, parameters):
'It writes a condor job file using the given fhand'
to_print = 'Executable = %s\nArguments = "%s"\nUniverse = vanilla\n' % \
(parameters['executable'], parameters['arguments'])
fhand.write(to_print)
to_print = 'Log = %s\n' % parameters['log_file'].name
fhand.write(to_print)
if parameters['transfer_files']:
to_print = 'When_to_transfer_output = ON_EXIT\n'
fhand.write(to_print)
to_print = 'Getenv = True\n'
fhand.write(to_print)
- to_print = 'Transfer_executable = %s\n' % parameters['transfer_executable']
- fhand.write(to_print)
+ if ('transfer_executable' in parameters and
+ parameters['transfer_executable']):
+ to_print = 'Transfer_executable = %s\n' % \
+ parameters['transfer_executable']
+ fhand.write(to_print)
if 'input_fnames' in parameters and parameters['input_fnames']:
ins = ','.join(parameters['input_fnames'])
to_print = 'Transfer_input_files = %s\n' % ins
fhand.write(to_print)
if parameters['transfer_files']:
to_print = 'Should_transfer_files = IF_NEEDED\n'
fhand.write(to_print)
if 'stdout' in parameters:
to_print = 'Output = %s\n' % parameters['stdout'].name
fhand.write(to_print)
if 'stderr' in parameters:
to_print = 'Error = %s\n' % parameters['stderr'].name
fhand.write(to_print)
if 'stdin' in parameters:
to_print = 'Input = %s\n' % parameters['stdin'].name
fhand.write(to_print)
to_print = 'Queue\n'
fhand.write(to_print)
fhand.flush()
class Popen(object):
'It launches and controls a condor job'
def __init__(self, cmd, cmd_def=None, runner_conf=None,
stdout=None, stderr=None, stdin=None):
'It launches a condor job'
if cmd_def is None:
cmd_def = []
#runner conf
if runner_conf is None:
runner_conf = {}
#some defaults
if 'transfer_files' not in runner_conf:
runner_conf['transfer_files'] = True
self._log_file = NamedTemporaryFile(suffix='.log')
#create condor job file
condor_job_file = self._create_condor_job_file(cmd, cmd_def,
self._log_file,
runner_conf,
stdout, stderr, stdin)
self._condor_job_file = condor_job_file
#launch condor
self._retcode = None
self._cluster_number = None
self._launch_condor(condor_job_file)
def _launch_condor(self, condor_job_file):
'Given the condor_job_file it launches the condor job'
stdout, stderr, retcode = call(['condor_submit', condor_job_file.name])
if retcode:
msg = 'There was a problem with condor_submit: ' + stderr
raise RuntimeError(msg)
#the condor cluster number is given by condor_submit
#1 job(s) submitted to cluster 15.
for line in stdout.splitlines():
if 'submitted to cluster' in line:
self._cluster_number = line.strip().strip('.').split()[-1]
def _get_pid(self):
'It returns the condor cluster number'
return self._cluster_number
pid = property(_get_pid)
def _get_returncode(self):
'It returns the return code'
return self._retcode
returncode = property(_get_returncode)
@staticmethod
def _remove_paths_from_cmd(cmd, streams, conf):
'''It removes the absolute and relative paths from the cmd,
it returns the modified cmd'''
cmd_mod = cmd[:]
for stream in streams:
if 'fname' not in stream:
continue
fpath = stream['fname']
#for the output files we can't deal with transfering files with
#paths. Condor will deliver those files into the initialdir, not
#where we expected.
if (stream['io'] != 'in' and conf['transfer_files']
and os.path.split(fpath)[-1] != fpath):
msg = 'output files with paths are not transferable'
raise ValueError(msg)
index = cmd_mod.index(fpath)
fpath = os.path.split(fpath)[-1]
cmd_mod[index] = fpath
return cmd_mod
def _create_condor_job_file(self, cmd, cmd_def, log_file, runner_conf,
stdout, stderr, stdin):
'Given a cmd and the cmd_def it returns the condor job file'
#streams
streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
#we need some parameters to write the condor file
parameters = {}
- parameters['executable'] = cmd[0]
+ #the executable
+ binary = cmd[0]
+ #the binary should be an absolute path
+ if not os.path.isabs(binary):
+ #the path to the binary could be relative
+ if os.sep in binary:
+ #we make the path absolute
+ binary = os.path.abspath(binary)
+ else:
+ #we have to look in the system $PATH
+ binary = call(['which', binary])[0].strip()
+ parameters['executable'] = binary
+
parameters['log_file'] = log_file
#the cmd shouldn't have absolute path in the files because they will be
#transfered to another node in the condor working dir and they wouldn't
#be found with an absolute path
cmd_no_path = self._remove_paths_from_cmd(cmd, streams, runner_conf)
parameters['arguments'] = ' '.join(cmd_no_path[1:])
if stdout is not None:
parameters['stdout'] = stdout
if stderr is not None:
parameters['stderr'] = stderr
if stdin is not None:
parameters['stdin'] = stdin
transfer_bin = False
if 'transfer_executable' in runner_conf:
transfer_bin = runner_conf['transfer_executable']
- parameters['transfer_executable'] = str(transfer_bin)
+ parameters['transfer_executable'] = transfer_bin
        transfer_files = runner_conf['transfer_files']
        parameters['transfer_files'] = transfer_files
in_fnames = []
for stream in streams:
if stream['io'] == 'in':
fname = None
if 'fname' in stream:
fname = stream['fname']
else:
fname = stream['fhand'].name
in_fnames.append(fname)
parameters['input_fnames'] = in_fnames
#now we can create the job file
condor_job_file = NamedTemporaryFile()
write_condor_job_file(condor_job_file, parameters=parameters)
return condor_job_file
def _update_retcode(self):
        'It updates the retcode looking at the log file and returns it'
for line in open(self._log_file.name):
if 'return value' in line:
ret = line.split('return value')[1].strip().strip(')')
self._retcode = int(ret)
return self._retcode
def poll(self):
        'It checks if condor has run our condor cluster'
cluster_number = self._cluster_number
cmd = ['condor_q', cluster_number,
'-format', '"%d.\n"', 'ClusterId']
stdout, stderr, retcode = call(cmd)
if retcode:
msg = 'There was a problem with condor_q: ' + stderr
raise RuntimeError(msg)
if cluster_number not in stdout:
#the job is finished
return self._update_retcode()
return self._retcode
def wait(self):
'It waits until the condor job is finished'
stderr, retcode = call(['condor_wait', self._log_file.name])[1:]
if retcode:
msg = 'There was a problem with condor_wait: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def kill(self):
'It runs condor_rm for the condor job'
stderr, retcode = call(['condor_rm', self.pid])[1:]
if retcode:
msg = 'There was a problem with condor_rm: ' + stderr
raise RuntimeError(msg)
return self._update_retcode()
def terminate(self):
'It runs condor_rm for the condor job'
self.kill()
def get_default_splits():
'It returns a suggested number of splits for this Popen runner'
stdout, stderr, retcode = call(['condor_status', '-total'])
if retcode:
msg = 'There was a problem with condor_status: ' + stderr
raise RuntimeError(msg)
for line in stdout.splitlines():
line = line.strip().lower()
if line.startswith('total') and 'owner' not in line:
return int(line.split()[1]) * 2
diff --git a/psubprocess/streams.py b/psubprocess/streams.py
index 0369836..dd7635b 100644
--- a/psubprocess/streams.py
+++ b/psubprocess/streams.py
@@ -1,177 +1,176 @@
'''
What's a stream
A command takes some input streams and creates some output streams
A stream is a file-like object.
Kinds of streams in a cmd
cmd arg1 arg2 -i opt1 -j opt3 arg3 < stdin > stdout stderr retcode
in this general command there are several types of streams:
- previous arguments: arguments (without options) located before the first
option (like arg1 and arg2)
- options with one value, like opt3
- arguments (aka post_arguments): arguments located after the last option
- stdin, stdout, stderr and retcode. The standard ones.
How to define the streams
To create the streams list we need the cmd and the cmd_def. The stream list
will be created using the cmd_def as a starting point and adding some extra
information based in the cmd given.
The cmd_def is defined by a dict with the following keys: options, io, splitter,
fhand, fname, special, location. All of them are optional except the options.
Options: It defines in which options or arguments the stream is found. It
should be just a value or a tuple.
Options kinds:
- -i the stream will be located after the parameter -i
- (-o, --output) the stream will be after -o or --output
- int where in the cmd is the stream (useful for pre-args and
post-args)
- STDIN
- STDOUT
- STDERR
io: It defines if it's an input or an output stream for the cmd
splitter: It defines how the stream should be split. There are three ways of
defining it:
    - an str that will be used to scan through the in streams; every
      line with the str in it will be considered a token start,
      e.g. '>' for the blast files
    - a re: every line with a match will be considered a token start
    - a function: it should take the stream and return an
      iterator with the tokens
joiner: A function that should take the out streams of all jobs and return
the joined stream. If not given, the output streams will just be concatenated.
fhand or fname: the stream file. This information is not part of the cmd_def.
It will be added to the streams looking at the cmd.
special: It defines some special treatments for the streams.
    - no_split: it shouldn't be split
    - no_transfer: it shouldn't be transferred to all nodes
    - no_support: an error should be raised if used
cmd_location: Where in the cmd the file that corresponds to this stream is
located. This information shouldn't be in the cmd_def; it will be added to the
streams using the cmd and the cmd_def.
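Example: a minimal cmd_def, taken from the tests, for a command that reads an
input file given with -i (split at every line) and writes an output file given
with -t:
    cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter': ''},
               {'options': ('-t', '--output'), 'io': 'out'}]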
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of project.
# project is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# project is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with project. If not, see <http://www.gnu.org/licenses/>.
STDIN = 'stdin'
STDOUT = 'stdout'
STDERR = 'stderr'
def _find_param_def_in_cmd(cmd, param_def):
'''Given a cmd and a parameter definition it returns the index of the param
in the cmd.
If the param is not found in the cmd it will raise a ValueError.
'''
options = param_def['options']
#options could be a list or an item
if not isinstance(options, list) and not isinstance(options, tuple):
options = (options,)
#the standard options with command line options
for index, item in enumerate(cmd):
if item in options:
return index
raise ValueError('Parameter not found in the given cmd')
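#e.g. (editor's note) _find_param_def_in_cmd(['myprog', '-i', 'f.txt'],
#{'options': '-i'}) returns 1, the index of '-i' in the cmd list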
def _positive_int(index, sequence):
'''It returns the same int index, but positive.'''
if index is None:
return None
elif index < 0:
return len(sequence) + index
return index
def _add_std_cmd_defs(cmd_def, stdout, stdin, stderr):
'''It adds the standard streams to the cmd_def.
If a standard stream is already there it is left as it is.
'''
#which std streams are in the cmd_def?
in_cmd_def = {}
for param_def in cmd_def:
option = param_def['options']
if option in (STDOUT, STDIN, STDERR):
in_cmd_def[option] = True
#we create the missing ones
if stdout is not None and STDOUT not in in_cmd_def:
cmd_def.append({'options':STDOUT, 'io':'out'})
if stderr is not None and STDERR not in in_cmd_def:
cmd_def.append({'options':STDERR, 'io':'out'})
if stdin is not None and STDIN not in in_cmd_def:
cmd_def.append({'options':STDIN, 'io':'in'})
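#e.g. (editor's note) if stdout is given and cmd_def lacks an STDOUT entry,
#{'options':STDOUT, 'io':'out'} is appended to cmd_def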
def get_streams_from_cmd(cmd, cmd_def, stdout=None, stdin=None, stderr=None):
'Given a cmd and a cmd definition it returns the streams'
#stdout and stderr might not be in the cmd_def
_add_std_cmd_defs(cmd_def, stdout=stdout, stdin=stdin, stderr=stderr)
-
streams = []
for param_def in cmd_def:
options = param_def['options']
#where is this stream located in the cmd?
location = None
#we have to look for the stream in the cmd
#where is the parameter in the cmd list?
#if the param options is not an int, it's a list of strings
if isinstance(options, int):
#for PRE_ARG (1) and POST_ARG (-1)
#we subtract 1 because the value is later taken from index + 1,
#as if an option were one position to the left of the value
index = _positive_int(options, cmd) - 1
elif options in (STDERR, STDOUT, STDIN):
index = options
else:
#look for param in cmd
try:
index = _find_param_def_in_cmd(cmd, param_def)
except ValueError:
index = None
if index == STDERR:
location = STDERR
fname = stderr
elif index == STDOUT:
location = STDOUT
fname = stdout
elif index == STDIN:
location = STDIN
fname = stdin
elif index is not None:
location = index + 1
fname = cmd[location]
#create the result dict
stream = param_def.copy()
if location is not None:
if location in (STDIN, STDOUT, STDERR):
stream['fhand'] = fname
else:
stream['fname'] = fname
stream['cmd_location'] = location
streams.append(stream)
return streams
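#A usage sketch (editor's illustration; the names are hypothetical): for
#cmd = ['myprog', '-i', 'in.txt'] and
#cmd_def = [{'options': ('-i',), 'io': 'in', 'splitter': '>'}]
#get_streams_from_cmd(cmd, cmd_def) returns one stream with
#'fname' == 'in.txt' and 'cmd_location' == 2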
\ No newline at end of file
diff --git a/scripts/run_with_condor.py b/scripts/run_with_condor.py
index d650b40..bd0127f 100644
--- a/scripts/run_with_condor.py
+++ b/scripts/run_with_condor.py
@@ -1,92 +1,91 @@
'''
Created on 21/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from optparse import OptionParser
import os.path, sys, signal
-from psubprocess import CondorPopen, Popen
+from psubprocess import CondorPopen
POPEN = None
def parse_options():
'It parses the command line arguments'
parser = OptionParser('usage: %prog -c "command"')
parser.add_option('-c', '--command', dest='command',
help='The command to run')
parser.add_option('-o', '--stdout', dest='stdout',
help='A file to store the stdout')
parser.add_option('-e', '--stderr', dest='stderr',
help='A file to store the stderr')
parser.add_option('-i', '--stdin', dest='stdin',
help='A file to store the stdin')
parser.add_option('-d', '--cmd_def', dest='cmd_def',
help='The command line definition')
return parser
def get_options():
'It returns a dict with the options'
parser = parse_options()
cmd_options = parser.parse_args()[0]
options = {}
if cmd_options.command is None:
parser.error('The command should be set')
else:
options['cmd'] = cmd_options.command.split()
if cmd_options.stdout is not None:
options['stdout'] = open(cmd_options.stdout, 'w')
if cmd_options.stderr is not None:
options['stderr'] = open(cmd_options.stderr, 'w')
if cmd_options.stdin is not None:
options['stdin'] = open(cmd_options.stdin)
- options['runner'] = CondorPopen
options['runner_conf'] = {'transfer_executable':False}
if cmd_options.cmd_def is None:
options['cmd_def'] = []
else:
cmd_def = cmd_options.cmd_def
#it can be a file or an str
if os.path.exists(cmd_def):
cmd_def = open(cmd_def).read()
options['cmd_def'] = eval(cmd_def)
return options
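#Example invocation of this script (editor's sketch; paths and cmd_def are
#hypothetical):
#python run_with_condor.py -c "myprog -i seqs.fasta" -o out.txt -e err.txt \
#       -d "[{'options': ('-i',), 'io': 'in', 'splitter': '>'}]"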
def kill_process(signum=None, frame=None):
'It kills the ongoing process when a signal is received'
#signal handlers are called with (signum, frame)
if POPEN is not None:
POPEN.kill()
sys.exit(-1)
def set_signal_handlers():
'It sets handlers for the SIGTERM, SIGABRT and SIGINT signals'
signal.signal(signal.SIGTERM, kill_process)
signal.signal(signal.SIGABRT, kill_process)
signal.signal(signal.SIGINT, kill_process)
def main():
'It runs a command in a condor cluster'
set_signal_handlers()
options = get_options()
global POPEN
- POPEN = Popen(**options)
+ POPEN = CondorPopen(**options)
sys.exit(POPEN.wait())
if __name__ == '__main__':
main()
\ No newline at end of file
diff --git a/test/condor_runner_test.py b/test/condor_runner_test.py
index 3a6e2b9..1bcfb76 100644
--- a/test/condor_runner_test.py
+++ b/test/condor_runner_test.py
@@ -1,181 +1,181 @@
'''
Created on 14/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
from StringIO import StringIO
import os
from psubprocess.condor_runner import (write_condor_job_file, Popen,
get_default_splits)
from test_utils import create_test_binary
class CondorRunnerTest(unittest.TestCase):
'It tests the condor runner'
@staticmethod
def test_write_condor_job_file():
'It tests that we can write a condor job file with the right parameters'
fhand1 = NamedTemporaryFile()
fhand2 = NamedTemporaryFile()
flog = NamedTemporaryFile()
stderr_ = NamedTemporaryFile()
stdout_ = NamedTemporaryFile()
stdin_ = NamedTemporaryFile()
- expected = '''Executable = bin
+ expected = '''Executable = /bin/ls
Arguments = "-i %s -j %s"
Universe = vanilla
Log = %s
When_to_transfer_output = ON_EXIT
Getenv = True
Transfer_executable = True
Transfer_input_files = %s,%s
Should_transfer_files = IF_NEEDED
Output = %s
Error = %s
Input = %s
Queue
''' % (fhand1.name, fhand2.name, flog.name, fhand1.name, fhand2.name,
stdout_.name, stderr_.name, stdin_.name)
fhand = StringIO()
- parameters = {'executable':'bin', 'log_file':flog,
+ parameters = {'executable':'/bin/ls', 'log_file':flog,
'input_fnames':[fhand1.name, fhand2.name],
'arguments':'-i %s -j %s' % (fhand1.name, fhand2.name),
'transfer_executable':True, 'transfer_files':True,
'stdout':stdout_, 'stderr':stderr_, 'stdin':stdin_}
write_condor_job_file(fhand, parameters=parameters)
condor = fhand.getvalue()
assert condor == expected
@staticmethod
def test_run_condor_stdout():
'It tests that we can run a condor job and retrieve stdout and stderr'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
assert open(stderr.name).read() == 'caracola'
os.remove(bin)
@staticmethod
def test_run_condor_stdin():
'It tests that we can run a condor job with stdin'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-s'])
stdin = NamedTemporaryFile()
stdout = NamedTemporaryFile()
stdin.write('hola')
stdin.flush()
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stdin=stdin)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
os.remove(bin)
@staticmethod
def test_run_condor_retcode():
'It tests that we can run a condor job and get the retcode'
bin = create_test_binary()
#a simple job
cmd = [bin]
cmd.extend(['-r', '10'])
popen = Popen(cmd, runner_conf={'transfer_executable':True})
assert popen.wait() == 10 #waits till it finishes and checks the retcode
os.remove(bin)
@staticmethod
def test_run_condor_in_file():
'It tests that we can run a condor job with an input file'
bin = create_test_binary()
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'}]
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
os.remove(bin)
def test_run_condor_in_out_file(self):
'It tests that we can run a condor job with an output file'
bin = create_test_binary()
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
out_file = open('output.txt', 'w')
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr, cmd_def=cmd_def)
popen.wait()
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(out_file.name).read() == 'hola'
os.remove(out_file.name)
#an output file with a path won't be allowed when the transfer file
#mechanism is used
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'},
{'options': ('-t', '--output'), 'io': 'out'}]
try:
popen = Popen(cmd, runner_conf={'transfer_executable':True},
stdout=stdout, stderr=stderr, cmd_def=cmd_def)
self.fail('ValueError expected')
#pylint: disable-msg=W0704
except ValueError:
pass
os.remove(bin)
@staticmethod
def test_default_splits():
'It tests that we can get a suggested number of splits'
assert get_default_splits() > 0
assert isinstance(get_default_splits(), int)
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
\ No newline at end of file
|
JoseBlanca/psubprocess
|
e36b3baf78abccbb2fe7a06baa112160cb19bf4b
|
Added a script to run commands using condor
|
diff --git a/scripts/run_with_condor.py b/scripts/run_with_condor.py
new file mode 100644
index 0000000..d650b40
--- /dev/null
+++ b/scripts/run_with_condor.py
@@ -0,0 +1,92 @@
+'''
+Created on 21/07/2009
+
+@author: jose
+'''
+
+# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
+# This file is part of psubprocess.
+# psubprocess is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+
+# psubprocess is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
+
+from optparse import OptionParser
+import os.path, sys, signal
+
+from psubprocess import CondorPopen, Popen
+
+POPEN = None
+
+def parse_options():
+ 'It parses the command line arguments'
+ parser = OptionParser('usage: %prog -c "command"')
+ parser.add_option('-c', '--command', dest='command',
+ help='The command to run')
+ parser.add_option('-o', '--stdout', dest='stdout',
+ help='A file to store the stdout')
+ parser.add_option('-e', '--stderr', dest='stderr',
+ help='A file to store the stderr')
+ parser.add_option('-i', '--stdin', dest='stdin',
+ help='A file to store the stdin')
+ parser.add_option('-d', '--cmd_def', dest='cmd_def',
+ help='The command line definition')
+ return parser
+
+def get_options():
+ 'It returns a dict with the options'
+ parser = parse_options()
+ cmd_options = parser.parse_args()[0]
+ options = {}
+ if cmd_options.command is None:
+ parser.error('The command should be set')
+ else:
+ options['cmd'] = cmd_options.command.split()
+ if cmd_options.stdout is not None:
+ options['stdout'] = open(cmd_options.stdout, 'w')
+ if cmd_options.stderr is not None:
+ options['stderr'] = open(cmd_options.stderr, 'w')
+ if cmd_options.stdin is not None:
+ options['stdin'] = open(cmd_options.stdin)
+ options['runner'] = CondorPopen
+ options['runner_conf'] = {'transfer_executable':False}
+ if cmd_options.cmd_def is None:
+ options['cmd_def'] = []
+ else:
+ cmd_def = cmd_options.cmd_def
+ #it can be a file or an str
+ if os.path.exists(cmd_def):
+ cmd_def = open(cmd_def).read()
+ options['cmd_def'] = eval(cmd_def)
+ return options
+
+def kill_process(signum=None, frame=None):
+ 'It kills the ongoing process when a signal is received'
+ #signal handlers are called with (signum, frame)
+ if POPEN is not None:
+ POPEN.kill()
+ sys.exit(-1)
+
+def set_signal_handlers():
+ 'It sets handlers for the SIGTERM, SIGABRT and SIGINT signals'
+ signal.signal(signal.SIGTERM, kill_process)
+ signal.signal(signal.SIGABRT, kill_process)
+ signal.signal(signal.SIGINT, kill_process)
+
+def main():
+ 'It runs a command in a condor cluster'
+ set_signal_handlers()
+ options = get_options()
+ global POPEN
+ POPEN = Popen(**options)
+ sys.exit(POPEN.wait())
+
+if __name__ == '__main__':
+ main()
\ No newline at end of file
|
JoseBlanca/psubprocess
|
e29c056bbf0bc322dff1e565d6cedafd5b4a2929
|
The input files created for the subjobs now have the same mode as the input files for the main cmd
|
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index 9ae4304..9b3d6c1 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,631 +1,639 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
This module is useful when we have a non-parallel command to run in a
multiprocessor computer or in a multinode cluster. It will take the input
files, split them and run a subjob for every one of the splits. It will
wait for the subjobs to finish and join the output files generated by all
subjobs. At the end of the process we will get the same output files as if
the command hadn't been run in parallel.
To do it this module requires the parameters used by Popen: cmd, stdin,
stdout, stderr and some extra information: runner, splits and cmd_def.
runner is optional and it should be a subprocess.Popen-like class. If it's
not given subprocess.Popen will be used. This is the class used to run the
subjobs; with the default the subjobs will run in the processors of the
local node.
splits is the number of subjobs that we want to generate. If it's not given
the runner will provide a suitable number.
cmd_def is a list that defines which items in the cmd are the input and
output files. We need to tell Popen which are the input and output files in
order to split and join them.
'''
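#A minimal usage sketch (editor's illustration; the command and file names
#are hypothetical):
#from psubprocess import Popen
#cmd = ['myfilter', '-i', 'seqs.fasta', '-o', 'result.txt']
#cmd_def = [{'options': ('-i',), 'io': 'in', 'splitter': '>'},
#           {'options': ('-o',), 'io': 'out'}]
#popen = Popen(cmd, cmd_def=cmd_def, splits=4)
#retcode = popen.wait()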
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
import os, tempfile, shutil, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
from psubprocess import condor_runner
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
class NamedTemporaryDir(object):
'''This class creates temporary directories '''
#pylint: disable-msg=W0622
#we redefine the built-in dir because tempfile uses that interface
def __init__(self, dir=None):
'''It initiates the class.'''
self._name = tempfile.mkdtemp(dir=dir)
def get_name(self):
'Returns the path to the dir'
return self._name
name = property(get_name)
def close(self):
'''It removes the temp dir'''
if os.path.exists(self._name):
shutil.rmtree(self._name)
def __del__(self):
'''It removes the temp dir when the instance is removed and the garbage
collector decides it'''
self.close()
def NamedTemporaryFile(dir=None, delete=False, suffix=''):
'''It creates a temporary file that won't be deleted when closed.
This behaviour can be done with tempfile.NamedTemporaryFile in python > 2.6
'''
#pylint: disable-msg=W0613
#delete is not being used, it's there as a reminder, once we start to use
#python 2.6 this function should be removed
#pylint: disable-msg=C0103
#pylint: disable-msg=W0622
#We want to mimic tempfile.NamedTemporaryFile
fpath = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
return open(fpath, 'w')
+def copy_file_mode(fpath1, fpath2):
+ 'It copies the os.stat mode from fpath1 to fpath2'
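+ #os.stat()[0] is st_mode, the file type and permission bits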
+ mode = os.stat(fpath1)[0]
+ os.chmod(fpath2, mode)
+
class Popen(object):
'''It parallelizes the given processes dividing them into subprocesses.
The interface is similar to subprocess.Popen to ease the use of this class,
although the functionality of this class is much more limited.
When an instance of this class is created a series of subjobs is launched.
When all subjobs are finished returncode will have an int, if they're still
running returncode will be None.
We can wait for all subjobs to finish using the wait method or we can
kill or terminate them using kill and terminate.
'''
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
'''It inits a Popen instance; it creates and runs the subjobs.
Like the subprocess.Popen it accepts stdin, stdout, stderr, but in this
case all of them should be files, PIPE will not work.
In the cmd_def list we have to tell this Popen how to locate the
input and output files in the cmd and how to split and join them. Look
for the cmd_format in the streams.py file.
keyword arguments:
cmd -- a list with the cmd to parallelize
cmd_def -- the cmd definition list (default [])
runner -- which runner to use (default subprocess.Popen)
runner_conf -- extra parameters for the runner (default {})
stdout -- a fhand to store the stdout (default None)
stderr -- a fhand to store the stderr (default None)
stdin -- a fhand with the stdin (default None)
splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
if cmd_def is None:
if stdin is not None:
raise ValueError('No cmd_def given but stdin present')
cmd_def = []
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
+ copy_file_mode('.', self._work_dir.name)
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
#launch every subjobs
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
#we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
'''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
'''Given a base cmd and a streams list it creates one modified cmd for
every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
#is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname and not the path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream in the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
- work_dirs.append(NamedTemporaryDir(dir=work_dir))
+ dir_ = NamedTemporaryDir(dir=work_dir)
+ work_dirs.append(dir_)
+ copy_file_mode('.', dir_.name)
#we have to do first the input files because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def to_be_split_first(index1, index2):
'It sorts the stream indexes, those with a file to be split go first'
#the sorted() call passes stream indexes, not the streams themselves
def has_file_to_split(stream):
'Does this stream have a file that should be split?'
#maybe it shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
return False
#maybe it has no file to split
if (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
return True
return False
return (int(has_file_to_split(streams[index2])) -
int(has_file_to_split(streams[index1])))
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
if 'splitter' not in stream:
msg = 'A splitter should be provided for every input stream,'
msg += ' missing for: ' + str(stream)
raise ValueError(msg)
splitter = stream['splitter']
#the splitter can be a regex or an str, in that case we create the function
if '__call__' not in dir(splitter):
splitter = _create_file_splitter_with_re(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
#the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different than splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
#we split the ouptut stream files into several splits
for index in output_stream_indexes:
stream = streams[index]
#for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = _output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every in stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
if 'joiner' in stream:
joiner = stream['joiner']
else:
joiner = default_cat_joiner
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
#if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
popen.kill()
del self._jobs['popens']
def terminate(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
popen.terminate()
del self._jobs['popens']
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
This is the way to create splits with sizes as similar as possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
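#Worked example (editor's note): _calculate_divisions(10, 4) returns
#((2, 3), (2, 2)): two splits with 3 items and two with 2 (2*3 + 2*2 == 10)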
def _items_in_file(fhand, expression_kind, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if ((expression_kind == 'str' and expression in line) or
(expression_kind != 'str' and expression.search(line))):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
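#e.g. (editor's note) with the str expression '>' on a fasta-like fhand,
#every yielded item is one '>header' line plus its following lines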
def _create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or an str.
An item in the file will be defined every time a line matches the
expression.
'''
expression_kind = None
if isinstance(expression, str):
expression_kind = 'str'
else:
expression_kind = 're'
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
guaranteed to create as many splits as work dirs. If there are fewer
items in the file than work_dirs some work_dirs will be left empty.
It returns a list with the fpaths or fhands for the splitted files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
nitems = 0
for line in open(fname, 'r'):
if ((expression_kind == 'str' and expression in line) or
(expression_kind != 'str' and expression.search(line))):
nitems += 1
#how many splits are we going to create? and how many items will be in
#every split
#if there are more items than splits we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = _items_in_file(fhand, expression_kind, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
+ copy_file_mode(fhand.name, ofh.name)
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
if file_is_str:
new_files.append(ofh.name)
ofh.close()
else:
new_files.append(ofh)
splits_made += 1
return new_files
return splitter
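#Usage sketch (editor's illustration; the file name is hypothetical):
#splitter = _create_file_splitter_with_re('>')
#split_files = splitter('seqs.fasta', work_dirs)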
def _output_splitter(file_, work_dirs):
'''It creates one output file for every splits.
Every split will be located in one of the work_dirs.
It returns a list with the fpaths for the new output files.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
new_fpaths = []
#we have to create nsplits
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[split_index]
#we use delete=False because this temp file is in a temp dir that will
#be completely deleted. If we use delete=True we get an error because
#the file might be already deleted when its __del__ method is called
ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
delete=False)
#the file will be deleted
#what do we need the fname or the fhand?
if file_is_str:
#it will be deleted because we just need the name in the temporary
#directory. tempfile.mktemp would be better for this use, but it is
#deprecated
new_fpaths.append(ofh.name)
- ofh.close()
else:
new_fpaths.append(ofh)
return new_fpaths
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
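#e.g. (editor's note) default_cat_joiner('joined.txt', ['part0.txt',
#'part1.txt']) concatenates both parts into joined.txt (hypothetical names)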
diff --git a/test/prunner_test.py b/test/prunner_test.py
index 574a584..f8cba0d 100644
--- a/test/prunner_test.py
+++ b/test/prunner_test.py
@@ -1,191 +1,191 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
import os
from psubprocess import Popen
from psubprocess.streams import STDIN
from test_utils import create_test_binary
class PRunnerTest(unittest.TestCase):
'It tests that we can parallelize processes'
@staticmethod
def test_file_in():
'It tests the most basic behaviour'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
in_file.close()
os.remove(bin)
@staticmethod
def test_job_no_in_stream():
'It tests that a job with no in stream is run splits times'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola' * splits
assert open(stderr.name).read() == 'caracola' * splits
os.remove(bin)
@staticmethod
def test_stdin():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_infile_outfile():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_retcode():
'It tests that we get the correct returncode'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-r', '20'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
assert popen.wait() == 20 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
- def test_infile_outfile_condor():
+ def xtest_infile_outfile_condor():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
from psubprocess import CondorPopen
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
runner=CondorPopen,
runner_conf={'transfer_executable':True})
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_stdin_real_splitter():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = '>hola1\nhola2\n>hola3\nhola4\n>hola5\nhola6\n>hola7\nhola8\n'
content += '>hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':'>'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
\ No newline at end of file
|
JoseBlanca/psubprocess
|
fb8417aca8cbfbaa5bbdc6973da4ad23bee025ac
|
Documentation added
|
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index ff9fcb8..9ae4304 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,590 +1,631 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
-Created on 16/07/2009
-
-@author: jose
+This module is useful when we have a non-parallel command to run in a
+multiprocessor computer or in a multinode cluster. It will take the input
+files, split them and run a subjob for every one of the splits. It will
+wait for the subjobs to finish and join the output files generated by all
+subjobs. At the end of the process we will get the same output files as if
+the command hadn't been run in parallel.
+
+To do it this module requires the parameters used by Popen: cmd, stdin,
+stdout, stderr and
+some extra information: runner, splits and cmd_def.
+
+runner is optional and it should be a subprocess.Popen-like class. If it's not
+given subprocess.Popen will be used. This will be the class used to run the
+subjobs. In that case the subjobs will run in the processors of the local node.
+
+splits is the number of subjobs that we want to generate. If it's not given the
+runner will provide a suitable number.
+
+cmd_def is a list that defines which items in the cmd are the input and output files.
+We need to tell Popen which are the input and output files in order to split
+them and join them.
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
import os, tempfile, shutil, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
from psubprocess import condor_runner
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
class NamedTemporaryDir(object):
'''This class creates temporary directories '''
#pylint: disable-msg=W0622
#we redefine the built-in dir because tempfile uses that interface
def __init__(self, dir=None):
'''It initiates the class.'''
self._name = tempfile.mkdtemp(dir=dir)
def get_name(self):
'Returns the path to the dir'
return self._name
name = property(get_name)
def close(self):
'''It removes the temp dir'''
if os.path.exists(self._name):
shutil.rmtree(self._name)
def __del__(self):
'''It removes the temp dir when the instance is removed and the garbage
collector decides it'''
self.close()
def NamedTemporaryFile(dir=None, delete=False, suffix=''):
'''It creates a temporary file that won't be deleted when closed.
This behaviour can be done with tempfile.NamedTemporaryFile in python > 2.6
'''
#pylint: disable-msg=W0613
#delete is not being used, it's there as a reminder, once we start to use
#python 2.6 this function should be removed
#pylint: disable-msg=C0103
#pylint: disable-msg=W0622
#We want to mimic tempfile.NamedTemporaryFile
fpath = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
return open(fpath, 'w')
-def _calculate_divisions(num_items, splits):
- '''It calculates how many items should be in every split to divide
- the num_items into splits.
- Not all splits will have an equal number of items, it will return a tuple
- with two tuples inside:
- ((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
- splits = num_fragments_1 + num_fragments_2
- num_items_1 = num_items_2 + 1
- num_fragments_1 could be equal to 0.
- This is the best way to create as many splits as possible as similar as
- possible.
- '''
- if splits >= num_items:
- return ((0, 1), (splits, 1))
- num_fragments1 = num_items % splits
- num_fragments2 = splits - num_fragments1
- num_items2 = num_items // splits
- num_items1 = num_items2 + 1
- res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
- return res
-
-def _items_in_file(fhand, expression_kind, expression):
- '''Given an fhand and an expression it yields the items cutting where the
- line matches the expression'''
- sofar = fhand.readline()
- for line in fhand:
- if ((expression_kind == 'str' and expression in line) or
- (expression_kind != 'str' and expression.search(line))):
- yield sofar
- sofar = line
- else:
- sofar += line
- else:
- #the last item
- yield sofar
-
-def _create_file_splitter_with_re(expression):
- '''Given an expression it creates a file splitter.
-
- The expression can be a regex or an str.
- The item in the file will be defined everytime a line matches the
- expression.
- '''
- expression_kind = None
- if isinstance(expression, str):
- expression_kind = 'str'
- else:
- expression_kind = 're'
- def splitter(file_, work_dirs):
- '''It splits the given file into several splits.
-
- Every split will be located in one of the work_dirs, although it is not
- guaranteed to create as many splits as work dirs. If in the file there
- are less items than work_dirs some work_dirs will be left empty.
- It returns a list with the fpaths or fhands for the splitted files.
- file_ can be an fhand or an fname.
- '''
- #the file_ can be an fname or an fhand. which one is it?
- file_is_str = None
- if isinstance(file_, str):
- fname = file_
- file_is_str = True
- else:
- fname = file_.name
- file_is_str = False
-
- #how many splits do we want?
- nsplits = len(work_dirs)
- #how many items are in the file? We assume that all files have the same
- #number of items
- nitems = 0
- for line in open(fname, 'r'):
- if ((expression_kind == 'str' and expression in line) or
- (expression_kind != 'str' and expression.search(line))):
- nitems += 1
-
- #how many splits a we going to create? and how many items will be in
- #every split
- #if there are more items than splits we create as many splits as items
- if nsplits > nitems:
- nsplits = nitems
- (nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
- nsplits)
- #we have to create nsplits1 files with nitems1 in it and nsplits2 files
- #with nitems2 items in it
- new_files = []
- fhand = open(fname, 'r')
- items = _items_in_file(fhand, expression_kind, expression)
- splits_made = 0
- for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
- #we have to create nsplits files with nitems in it
- #we don't need the split_index for anything
- #pylint: disable-msg=W0612
- for split_index in range(nsplits):
- suffix = os.path.splitext(fname)[-1]
- work_dir = work_dirs[splits_made]
- ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
- suffix=suffix)
- for item_index in range(nitems):
- ofh.write(items.next())
- ofh.flush()
- if file_is_str:
- new_files.append(ofh.name)
- ofh.close()
- else:
- new_files.append(ofh)
- splits_made += 1
- return new_files
- return splitter
-
-def _output_splitter(file_, work_dirs):
- '''It creates one output file for every splits.
-
- Every split will be located in one of the work_dirs.
- It returns a list with the fpaths for the new output files.
- '''
- #the file_ can be an fname or an fhand. which one is it?
- file_is_str = None
- if isinstance(file_, str):
- fname = file_
- file_is_str = True
- else:
- fname = file_.name
- file_is_str = False
- #how many splits do we want?
- nsplits = len(work_dirs)
-
- new_fpaths = []
- #we have to create nsplits
- for split_index in range(nsplits):
- suffix = os.path.splitext(fname)[-1]
- work_dir = work_dirs[split_index]
- #we use delete=False because this temp file is in a temp dir that will
- #be completely deleted. If we use delete=True we get an error because
- #the file might be already deleted when its __del__ method is called
- ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
- delete=False)
- #the file will be deleted
- #what do we need the fname or the fhand?
- if file_is_str:
- #it will be deleted because we just need the name in the temporary
- #directory. tempfile.mktemp would be better for this use, but it is
- #deprecated
- new_fpaths.append(ofh.name)
- ofh.close()
- else:
- new_fpaths.append(ofh)
- return new_fpaths
-
-def default_cat_joiner(out_file_, in_files_):
- '''It joins the given in files into the given out file.
-
- It works with fnames or fhands.
- '''
- #are we working with fhands or fnames?
- file_is_str = None
- if isinstance(out_file_, str):
- file_is_str = True
- else:
- file_is_str = False
-
- #the output fhand
- if file_is_str:
- out_fhand = open(out_file_, 'w')
- else:
- out_fhand = open(out_file_.name, 'w')
- for in_file_ in in_files_:
- #the input fhand
- if file_is_str:
- in_fhand = open(in_file_, 'r')
- else:
- in_fhand = open(in_file_.name, 'r')
- for line in in_fhand:
- out_fhand.write(line)
- in_fhand.close()
- out_fhand.close()
-
class Popen(object):
- 'It paralellizes the given processes divinding them into subprocesses.'
+ '''It parallelizes the given processes dividing them into subprocesses.
+
+ The interface is similar to subprocess.Popen to ease the use of this class,
+ although the functionality of this class is much more limited.
+ When an instance of this class is created a series of subjobs is launched.
+ When all subjobs are finished returncode will have an int, if they're still
+ running returncode will be None.
+ We can wait for all subjobs to finish using the wait method or we can
+ kill or terminate them using kill and terminate.
+ '''
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
- '''
- Constructor
+ '''It inits a Popen instance; it creates and runs the subjobs.
+
+ Like the subprocess.Popen it accepts stdin, stdout, stderr, but in this
+ case all of them should be files, PIPE will not work.
+
+ In the cmd_def list we have to tell this Popen how to locate the
+ input and output files in the cmd and how to split and join them. Look
+ for the cmd_format in the streams.py file.
+
+ keyword arguments:
+ cmd -- a list with the cmd to parallelize
+ cmd_def -- the cmd definition list (default [])
+ runner -- which runner to use (default subprocess.Popen)
+ runner_conf -- extra parameters for the runner (default {})
+ stdout -- a fhand to store the stdout (default None)
+ stderr -- a fhand to store the stderr (default None)
+ stdin -- a fhand with the stdin (default None)
+ splits -- number of subjobs to generate
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
if cmd_def is None:
if stdin is not None:
raise ValueError('No cmd_def given but stdin present')
cmd_def = []
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
#launch every subjobs
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
#we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
'''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
'''Given a base cmd and a streams list it creates one modified cmd for
every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
#is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname and not the path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream in the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
work_dirs.append(NamedTemporaryDir(dir=work_dir))
#we have to do first the input files because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def to_be_split_first(index1, index2):
'It sorts the stream indexes, those with a file to be split go first'
#the sorted() call passes stream indexes, not the streams themselves
def has_file_to_split(stream):
'Does this stream have a file that should be split?'
#maybe it shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
return False
#maybe it has no file to split
if (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
return True
return False
return (int(has_file_to_split(streams[index2])) -
int(has_file_to_split(streams[index1])))
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
if 'splitter' not in stream:
                msg = 'A splitter should be provided for every input stream, '
                msg += 'missing for: ' + str(stream)
raise ValueError(msg)
splitter = stream['splitter']
            #the splitter can be a re or an str, in that case we create the function
if '__call__' not in dir(splitter):
splitter = _create_file_splitter_with_re(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
                #the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different than splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
        #we split the output stream files into several splits
for index in output_stream_indexes:
stream = streams[index]
            #for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = _output_splitter(fname, work_dirs)
            split_files[index] = files #a list of files for every out stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
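    #a sketch of the default (assuming a 4-core machine): with the plain
    #subprocess runner default_splits returns 4, so the work is divided
    #into 4 subjobs, one per processor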
def wait(self):
        'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
            if 'joiner' in stream:
                joiner = stream['joiner']
            else:
                joiner = default_cat_joiner
            if 'fname' in stream:
                out_file = stream['fname']
            else:
                out_file = stream['fhand']
            joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
                #if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
popen.kill()
del self._jobs['popens']
def terminate(self):
        'It terminates all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
popen.terminate()
del self._jobs['popens']
+def _calculate_divisions(num_items, splits):
+ '''It calculates how many items should be in every split to divide
+ the num_items into splits.
+ Not all splits will have an equal number of items, it will return a tuple
+ with two tuples inside:
+ ((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
+ splits = num_fragments_1 + num_fragments_2
+ num_items_1 = num_items_2 + 1
+ num_fragments_1 could be equal to 0.
+    This is the best way to create the requested number of splits with sizes
+    as similar as possible.
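+
+    For example: _calculate_divisions(10, 4) returns ((2, 3), (2, 2)), that
+    is, 2 splits with 3 items each and 2 splits with 2 items each
+    (2 * 3 + 2 * 2 == 10).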
+ '''
+ if splits >= num_items:
+ return ((0, 1), (splits, 1))
+ num_fragments1 = num_items % splits
+ num_fragments2 = splits - num_fragments1
+ num_items2 = num_items // splits
+ num_items1 = num_items2 + 1
+ res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
+ return res
+
+def _items_in_file(fhand, expression_kind, expression):
+ '''Given an fhand and an expression it yields the items cutting where the
+ line matches the expression'''
+ sofar = fhand.readline()
+ for line in fhand:
+ if ((expression_kind == 'str' and expression in line) or
+ (expression_kind != 'str' and expression.search(line))):
+ yield sofar
+ sofar = line
+ else:
+ sofar += line
+ else:
+ #the last item
+ yield sofar
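+#a sketch of _items_in_file: for an fhand whose lines are '>a', 'x', '>b' and
+#'y', with the str expression '>', it yields two items, '>a\nx\n' and
+#'>b\ny\n'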
+
+def _create_file_splitter_with_re(expression):
+ '''Given an expression it creates a file splitter.
+
+ The expression can be a regex or an str.
+    The item in the file will be defined every time a line matches the
+ expression.
+ '''
+ expression_kind = None
+ if isinstance(expression, str):
+ expression_kind = 'str'
+ else:
+ expression_kind = 're'
+ def splitter(file_, work_dirs):
+ '''It splits the given file into several splits.
+
+ Every split will be located in one of the work_dirs, although it is not
+ guaranteed to create as many splits as work dirs. If in the file there
+        are fewer items than work_dirs some work_dirs will be left empty.
+        It returns a list with the fpaths or fhands for the split files.
+ file_ can be an fhand or an fname.
+ '''
+ #the file_ can be an fname or an fhand. which one is it?
+ file_is_str = None
+ if isinstance(file_, str):
+ fname = file_
+ file_is_str = True
+ else:
+ fname = file_.name
+ file_is_str = False
+
+ #how many splits do we want?
+ nsplits = len(work_dirs)
+ #how many items are in the file? We assume that all files have the same
+ #number of items
+ nitems = 0
+ for line in open(fname, 'r'):
+ if ((expression_kind == 'str' and expression in line) or
+ (expression_kind != 'str' and expression.search(line))):
+ nitems += 1
+
+        #how many splits are we going to create? and how many items will be in
+ #every split
+ #if there are more items than splits we create as many splits as items
+ if nsplits > nitems:
+ nsplits = nitems
+ (nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
+ nsplits)
+ #we have to create nsplits1 files with nitems1 in it and nsplits2 files
+ #with nitems2 items in it
+ new_files = []
+ fhand = open(fname, 'r')
+ items = _items_in_file(fhand, expression_kind, expression)
+ splits_made = 0
+ for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
+ #we have to create nsplits files with nitems in it
+ #we don't need the split_index for anything
+ #pylint: disable-msg=W0612
+ for split_index in range(nsplits):
+ suffix = os.path.splitext(fname)[-1]
+ work_dir = work_dirs[splits_made]
+ ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
+ suffix=suffix)
+ for item_index in range(nitems):
+ ofh.write(items.next())
+ ofh.flush()
+ if file_is_str:
+ new_files.append(ofh.name)
+ ofh.close()
+ else:
+ new_files.append(ofh)
+ splits_made += 1
+ return new_files
+ return splitter
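+#a usage sketch (the expression is illustrative): for fasta-like input,
+#_create_file_splitter_with_re('>') returns a function that divides a file
+#into at most len(work_dirs) smaller files, one per work dir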
+
+def _output_splitter(file_, work_dirs):
+    '''It creates one output file for every split.
+
+ Every split will be located in one of the work_dirs.
+ It returns a list with the fpaths for the new output files.
+ '''
+ #the file_ can be an fname or an fhand. which one is it?
+ file_is_str = None
+ if isinstance(file_, str):
+ fname = file_
+ file_is_str = True
+ else:
+ fname = file_.name
+ file_is_str = False
+ #how many splits do we want?
+ nsplits = len(work_dirs)
+
+ new_fpaths = []
+ #we have to create nsplits
+ for split_index in range(nsplits):
+ suffix = os.path.splitext(fname)[-1]
+ work_dir = work_dirs[split_index]
+ #we use delete=False because this temp file is in a temp dir that will
+ #be completely deleted. If we use delete=True we get an error because
+ #the file might be already deleted when its __del__ method is called
+ ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
+ delete=False)
+ #the file will be deleted
+ #what do we need the fname or the fhand?
+ if file_is_str:
+ #it will be deleted because we just need the name in the temporary
+ #directory. tempfile.mktemp would be better for this use, but it is
+ #deprecated
+ new_fpaths.append(ofh.name)
+ ofh.close()
+ else:
+ new_fpaths.append(ofh)
+ return new_fpaths
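+#note (a sketch): _output_splitter only reserves one empty output file per
+#work dir; the subjobs are expected to fill them in before the join step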
+
+def default_cat_joiner(out_file_, in_files_):
+ '''It joins the given in files into the given out file.
+
+ It works with fnames or fhands.
+ '''
+ #are we working with fhands or fnames?
+ file_is_str = None
+ if isinstance(out_file_, str):
+ file_is_str = True
+ else:
+ file_is_str = False
+
+ #the output fhand
+ if file_is_str:
+ out_fhand = open(out_file_, 'w')
+ else:
+ out_fhand = open(out_file_.name, 'w')
+ for in_file_ in in_files_:
+ #the input fhand
+ if file_is_str:
+ in_fhand = open(in_file_, 'r')
+ else:
+ in_fhand = open(in_file_.name, 'r')
+ for line in in_fhand:
+ out_fhand.write(line)
+ in_fhand.close()
+ out_fhand.close()
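+
+#a usage sketch (file names are illustrative):
+#default_cat_joiner('out.txt', ['part1.txt', 'part2.txt']) writes the
+#concatenation of both parts into out.txt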
diff --git a/psubprocess/streams.py b/psubprocess/streams.py
index 32b7016..0369836 100644
--- a/psubprocess/streams.py
+++ b/psubprocess/streams.py
@@ -1,123 +1,177 @@
'''
-Created on 13/07/2009
-
-@author: jose
+What's a stream
+
+A command takes some input streams and creates some output streams.
+A stream is a file-like object.
+
+Kinds of streams in a cmd
+cmd arg1 arg2 -i opt1 -j opt3 arg3 < stdin > stdout stderr retcode
+in this general command there are several types of streams:
+ - previous arguments. arguments (without options) located before the first
+ option (like arg1 and arg2)
+ - options with one option, like opt3
+ - arguments (aka post_arguments). arguments located after the last option
+ - stdin, stdout, stderr and retcode. The standard ones.
+
+How to define the streams
+To create the streams list we need the cmd and the cmd_def. The stream list
+will be created using the cmd_def as a starting point and adding some extra
+information based on the given cmd.
+The cmd_def is a list of dicts, one per stream, each with the following keys:
+options, io, splitter, joiner, fhand, fname, special, cmd_location. All of
+them are optional except options.
+
+Options: It defines in which option or argument the stream is found. It
+should be just a value or a tuple.
+Options kinds:
+ - -i the stream will be located after the parameter -i
+ - (-o, --output) the stream will be after -o or --output
+    - int             where in the cmd the stream is located (useful for
+                      pre-args and post-args)
+ - STDIN
+ - STDOUT
+ - STDERR
+
+io: It defines if it's an input or an output stream for the cmd
+
+splitter: It defines how the stream should be split. There are three ways of
+defining it:
+ - an str that will be used to scan through the in streams, every
+                 line with the str in it will be considered a token start
+ e.g '>' for the blast files
+ - a re every line with a match will be considered a token start
+    - a function the function should take the stream and return an
+ iterator with the tokens
+
+joiner: A function that should take the out streams for all jobs and return
+the joined stream. If not given the output stream will be just concatenated.
+
+fhand or fname: the stream file. This information is not part of the cmd_def.
+It will be added to the streams looking at the cmd
+
+special: It defines some special treatments for the streams.
+ - no_split It shouldn't be split
+    - no_transfer It shouldn't be transferred to all nodes
+ - no_support An error should be raised if used.
+
+cmd_location: Where in the cmd the file that corresponds to this stream is
+located. This information shouldn't be in the cmd_def; it will be added to
+the streams using the cmd and the cmd_def.
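+
+A minimal sketch of a cmd_def (file names are illustrative): for the cmd
+['my_cmd', '-i', 'seqs.txt', '-o', 'result.txt'] a possible cmd_def would be
+[{'options': ('-i', '--input'), 'io': 'in', 'splitter': '>'},
+ {'options': ('-o', '--output'), 'io': 'out'}]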
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of project.
# project is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# project is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with project. If not, see <http://www.gnu.org/licenses/>.
STDIN = 'stdin'
STDOUT = 'stdout'
STDERR = 'stderr'
def _find_param_def_in_cmd(cmd, param_def):
'''Given a cmd and a parameter definition it returns the index of the param
in the cmd.
If the param is not found in the cmd it will raise a ValueError.
'''
options = param_def['options']
#options could be a list or an item
if not isinstance(options, list) and not isinstance(options, tuple):
options = (options,)
#the standard options with command line options
for index, item in enumerate(cmd):
if item in options:
return index
raise ValueError('Parameter not found in the given cmd')
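#a sketch: for the cmd ['my_cmd', '-i', 'f.txt'] and a param_def with
#{'options': ('-i', '--input')} the function above returns 1, the index of -i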
def _positive_int(index, sequence):
'''It returns the same int index, but positive.'''
if index is None:
return None
elif index < 0:
return len(sequence) + index
return index
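#e.g. _positive_int(-1, ['a', 'b', 'c']) returns 2, the index of the last item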
def _add_std_cmd_defs(cmd_def, stdout, stdin, stderr):
'''It adds the standard stream to the cmd_def.
If they're already there it just completes them
'''
#which std streams are in the cmd_def?
in_cmd_def = {}
for param_def in cmd_def:
option = param_def['options']
if option in (STDOUT, STDIN, STDERR):
in_cmd_def[option] = True
#we create the missing ones
if stdout is not None and STDOUT not in in_cmd_def:
cmd_def.append({'options':STDOUT, 'io':'out'})
if stderr is not None and STDERR not in in_cmd_def:
cmd_def.append({'options':STDERR, 'io':'out'})
if stdin is not None and STDIN not in in_cmd_def:
cmd_def.append({'options':STDIN, 'io':'in'})
def get_streams_from_cmd(cmd, cmd_def, stdout=None, stdin=None, stderr=None):
'Given a cmd and a cmd definition it returns the streams'
#stdout and stderr might not be in the cmd_def
_add_std_cmd_defs(cmd_def, stdout=stdout, stdin=stdin, stderr=stderr)
streams = []
for param_def in cmd_def:
options = param_def['options']
#where is this stream located in the cmd?
location = None
#we have to look for the stream in the cmd
#where is the parameter in the cmd list?
        #if the param options is not an int, it's a list of strings
if isinstance(options, int):
            #for PRE_ARG (1) and POST_ARG (-1)
            #we subtract 1 because later the location of the value is
            #computed as index + 1, and here the int already points at it
index = _positive_int(options, cmd) - 1
elif options in (STDERR, STDOUT, STDIN):
index = options
else:
#look for param in cmd
try:
index = _find_param_def_in_cmd(cmd, param_def)
except ValueError:
index = None
if index == STDERR:
location = STDERR
fname = stderr
elif index == STDOUT:
location = STDOUT
fname = stdout
elif index == STDIN:
location = STDIN
fname = stdin
elif index is not None:
location = index + 1
fname = cmd[location]
#create the result dict
stream = param_def.copy()
if location is not None:
if location in (STDIN, STDOUT, STDERR):
stream['fhand'] = fname
else:
stream['fname'] = fname
stream['cmd_location'] = location
streams.append(stream)
return streams
\ No newline at end of file
diff --git a/test/cmd_def_test.py b/test/cmd_def_test.py
index 35c3fb6..38777d9 100644
--- a/test/cmd_def_test.py
+++ b/test/cmd_def_test.py
@@ -1,157 +1,103 @@
'''
Created on 13/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from psubprocess.streams import (get_streams_from_cmd,
STDIN, STDOUT, STDERR)
-#What's a stream
-#
-#A command takes some input streams and creates some ouput streams
-#An stream is a file-like object or a directory. In fact an stream can be
-#composed by several files (e.g. a seq and a qual file that should be splitted
-#together)
-#
-#Kinds of streams in a cmd
-#cmd arg1 arg2 -i opt1 opt2 -j opt3 arg3 < stdin > stdout stderr retcode
-#in this general command there are several types of streams:
-# - previous arguments. arguments (without options) located before the first
-# option (like arg1 and arg2)
-# - options with one option, like opt3
-# - options with several arguments, like -i that has opt1 and opt2
-# - arguments (aka post_arguments). arguments located after the last option
-# - stdin, stdout, stderr and retcode. The standard ones.
-#
-#How to define the streams
-#An stream is defined by a dict with the following keys: options, io, splitter,
-#value, special, location. All of them are optional except the options.
-#Options: It defines in which options or arguments is the stream found. It
-#should by just a value or a tuple.
-#Options kinds:
-# - -i the stream will be located after the parameter -i
-# - (-o, --output) the stream will be after -o or --output
-# - PRE_ARG right after the cmd and before the first parameter
-# - POST_ARG after the last option
-# - STDIN
-# - STDOUT
-# - STDERR
-#io: It defines if it's an input or an output stream for the cmd
-#splitter: It defines how the stream should be split. There are three ways of
-#definint it:
-# - an str that will be used to scan through the in streams, every
-# line with the str in in will be considered a token start
-# e.g '>' for the blast files
-# - a re every line with a match will be considered a token start
-# - a function the function should take the stream an return an
-# iterator with the tokens
-#joiner: A function that should take the out streams for all jobs and return
-#the joined stream. If not given the output stream will be just concatenated.
-#value: the value for the stream, this stream will not define the value in the
-#command line, it will be implicit
-#special: It defines some special treaments for the streams.
-# - no_split It shouldn't be split
-# - no_transfer It shouldn't be transfer to all nodes
-# - no_abspath Its path shouldn't be converted to absolute
-# - create It should be created before running the command
-# - no_support An error should be raised if used.
-#cmd_locations: If the location is not given the assumed location will be 0.
-#That means that the stream will be located in the 0 position after the option.
-#It can be either an int or an slice. In the slice case several substreams will
-#be taken together in the stream. Useful for instance for the case of two fasta
-#files with the seq an qual that should be split together and transfered
-#together.
+
def _check_streams(streams, expected_streams):
'It checks that streams meet the requirements set by the expected streams'
for stream, expected_stream in zip(streams, expected_streams):
for key in expected_stream:
assert stream[key] == expected_stream[key]
class StreamsFromCmdTest(unittest.TestCase):
'It tests that we can get the input and output files from the cmd'
@staticmethod
def test_simple_case():
'It tests the most simple cases'
#a simple case
cmd = ['hola', '-i', 'caracola.txt']
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'}]
expected_streams = [{'fname': 'caracola.txt', 'io':'in',
'cmd_location':2}]
streams = get_streams_from_cmd(cmd, cmd_def=cmd_def)
_check_streams(streams, expected_streams)
#a parameter not found in the cmd
cmd = ['hola', '-i', 'caracola.txt']
cmd_def = [{'options': ('-i', '--input'), 'io': 'in'},
{'options': ('-j', '--input2'), 'io': 'in'}]
expected_streams = [{'fname': 'caracola.txt', 'io':'in',
'cmd_location': 2}]
streams = get_streams_from_cmd(cmd, cmd_def=cmd_def)
_check_streams(streams, expected_streams)
@staticmethod
def test_arguments():
        'It tests that it works with cmd arguments, not options'
        #the option we want is in the pre_args, after the binary
cmd = ['hola', 'hola.txt', '-i', 'caracola.txt']
cmd_def = [{'options': 1, 'io': 'in'}]
expected_streams = [{'fname': 'hola.txt', 'io':'in',
'cmd_location':1}]
streams = get_streams_from_cmd(cmd, cmd_def=cmd_def)
_check_streams(streams, expected_streams)
#the option we want is at the end of the cmd
cmd = ['hola', '-i', 'caracola.txt', 'hola.txt']
cmd_def = [{'options': -1, 'io': 'in'}]
expected_streams = [{'fname': 'hola.txt', 'io':'in',
'cmd_location':3}]
streams = get_streams_from_cmd(cmd, cmd_def=cmd_def)
_check_streams(streams, expected_streams)
@staticmethod
def test_stdin():
'We want stdin, stdout and stderr as streams'
#stdin
cmd = ['hola']
cmd_def = [{'options':STDIN, 'io': 'in'},
{'options':STDOUT, 'io': 'out'}]
stdout = 'stdout' #in the real world they will be files
stderr = 'stderr'
stdin = 'stdin'
expected_streams = [{'fhand': stdin, 'io':'in', 'cmd_location':STDIN,
'options': stdin},
{'fhand': stdout, 'io':'out', 'cmd_location':STDOUT,
'options':stdout},
{'fhand': stderr, 'io':'out', 'cmd_location':STDERR,
'options':stderr}]
streams = get_streams_from_cmd(cmd, cmd_def=cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
_check_streams(streams, expected_streams)
assert 'fname' not in streams[0]
assert 'fname' not in streams[1]
assert 'fname' not in streams[2]
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
\ No newline at end of file
|
JoseBlanca/psubprocess
|
061900658c931fa91966dfe7168c6dd0bfc92b14
|
Some bug fixes
|
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index 8d4e6c5..ff9fcb8 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -1,590 +1,590 @@
'''It launches parallel processes with an interface similar to Popen.
It divides jobs into subjobs and launches the subjobs.
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from subprocess import Popen as StdPopen
import os, tempfile, shutil, copy
from psubprocess.streams import get_streams_from_cmd, STDOUT, STDERR, STDIN
from psubprocess import condor_runner
RUNNER_MODULES = {}
RUNNER_MODULES['condor_runner'] = condor_runner
class NamedTemporaryDir(object):
'''This class creates temporary directories '''
#pylint: disable-msg=W0622
    #we redefine the built-in dir because tempfile uses that interface
def __init__(self, dir=None):
'''It initiates the class.'''
self._name = tempfile.mkdtemp(dir=dir)
def get_name(self):
        'Returns the path to the dir'
return self._name
name = property(get_name)
def close(self):
'''It removes the temp dir'''
if os.path.exists(self._name):
shutil.rmtree(self._name)
def __del__(self):
        '''It removes the temp dir when the instance is removed and the
        garbage collector decides it'''
self.close()
def NamedTemporaryFile(dir=None, delete=False, suffix=''):
    '''It creates a temporary file that won't be deleted when closed
This behaviour can be done with tempfile.NamedTemporaryFile in python > 2.6
'''
#pylint: disable-msg=W0613
#delete is not being used, it's there as a reminder, once we start to use
#python 2.6 this function should be removed
#pylint: disable-msg=C0103
#pylint: disable-msg=W0622
    #We want to mimic tempfile.NamedTemporaryFile
fpath = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
return open(fpath, 'w')
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, it will return a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
    This is the best way to create the requested number of splits with sizes
    as similar as possible.
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
def _items_in_file(fhand, expression_kind, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if ((expression_kind == 'str' and expression in line) or
- expression.search(line)):
+ (expression_kind != 'str' and expression.search(line))):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
def _create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or an str.
    The item in the file will be defined every time a line matches the
expression.
'''
expression_kind = None
if isinstance(expression, str):
expression_kind = 'str'
else:
expression_kind = 're'
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
guaranteed to create as many splits as work dirs. If in the file there
        are fewer items than work_dirs some work_dirs will be left empty.
        It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
nitems = 0
for line in open(fname, 'r'):
if ((expression_kind == 'str' and expression in line) or
- expression.search(line)):
+ (expression_kind != 'str' and expression.search(line))):
nitems += 1
        #how many splits are we going to create? and how many items will be in
#every split
#if there are more items than splits we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = _items_in_file(fhand, expression_kind, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
if file_is_str:
new_files.append(ofh.name)
ofh.close()
else:
new_files.append(ofh)
splits_made += 1
return new_files
return splitter
def _output_splitter(file_, work_dirs):
    '''It creates one output file for every split.
Every split will be located in one of the work_dirs.
It returns a list with the fpaths for the new output files.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
new_fpaths = []
#we have to create nsplits
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[split_index]
#we use delete=False because this temp file is in a temp dir that will
#be completely deleted. If we use delete=True we get an error because
#the file might be already deleted when its __del__ method is called
ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
delete=False)
#the file will be deleted
#what do we need the fname or the fhand?
if file_is_str:
#it will be deleted because we just need the name in the temporary
#directory. tempfile.mktemp would be better for this use, but it is
#deprecated
new_fpaths.append(ofh.name)
ofh.close()
else:
new_fpaths.append(ofh)
return new_fpaths
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
class Popen(object):
    'It parallelizes the given processes dividing them into subprocesses.'
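    #a usage sketch (the cmd, cmd_def and file names are illustrative):
    #    popen = Popen(['my_cmd', '-i', 'seqs.txt', '-t', 'result.txt'],
    #                  cmd_def=[{'options': ('-i',), 'io': 'in',
    #                            'splitter': ''},
    #                           {'options': ('-t',), 'io': 'out'}])
    #    popen.wait()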
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
'''
Constructor
'''
#we want the same interface as subprocess.popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
if cmd_def is None:
if stdin is not None:
raise ValueError('No cmd_def given but stdin present')
cmd_def = []
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
        #we launch every subjob
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and it adds its popen instance to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
            #we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
        '''It creates one job for every split.
Every job has a cmd, work_dir and streams, this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
        '''Given a base cmd and a streams list it creates one modified cmd for
        every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
                #is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
                    #we use the fname without the path because the jobs will be
                    #launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
@staticmethod
def _split_streams(streams, splits, work_dir):
        '''Given a list of streams it splits every stream into the given
        number of splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
work_dirs.append(NamedTemporaryDir(dir=work_dir))
        #we have to do the input files first because the number of splits could
#be changed by them
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
        def to_be_split_first(index1, index2):
            'It sorts the stream indexes, the ones to be split go first'
            def should_split(stream):
                'It returns True if the stream has a file to be split'
                #maybe it shouldn't be split
                if 'special' in stream and 'no_split' in stream['special']:
                    return False
                #maybe it has no file to split
                if (('fhand' in stream and stream['fhand'] is None) or
                    ('fname' in stream and stream['fname'] is None) or
                    ('fname' not in stream and 'fhand' not in stream)):
                    return False
                return True
            #streams to be split should compare as smaller, so they go first
            return (int(should_split(streams[index2])) -
                    int(should_split(streams[index1])))
        input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
if 'splitter' not in stream:
                msg = 'A splitter should be provided for every input stream, '
                msg += 'missing for: ' + str(stream)
raise ValueError(msg)
splitter = stream['splitter']
            #the splitter can be a re or an str, in that case we create the function
if '__call__' not in dir(splitter):
splitter = _create_file_splitter_with_re(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
                #the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different than splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
        #we split the output stream files into several splits
for index in output_stream_indexes:
stream = streams[index]
            #for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = _output_splitter(fname, work_dirs)
            split_files[index] = files #a list of files for every out stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
        'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
            if 'joiner' in stream:
                joiner = stream['joiner']
            else:
                joiner = default_cat_joiner
            if 'fname' in stream:
                out_file = stream['fname']
            else:
                out_file = stream['fhand']
            joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
                #if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
returncode = property(_get_returncode)
def kill(self):
'It kills all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
popen.kill()
del self._jobs['popens']
def terminate(self):
        'It terminates all jobs'
if 'popens' not in self._jobs:
return
for popen in self._jobs['popens']:
popen.terminate()
del self._jobs['popens']
diff --git a/scripts/run_in_parallel.py b/scripts/run_in_parallel.py
index 399fe56..04536dd 100644
--- a/scripts/run_in_parallel.py
+++ b/scripts/run_in_parallel.py
@@ -1,102 +1,102 @@
'''
Created on 21/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
from optparse import OptionParser
import os.path, sys, signal
from psubprocess import CondorPopen, Popen
POPEN = None
def parse_options():
'It parses the command line arguments'
parser = OptionParser('usage: %prog -c "command"')
parser.add_option('-n', '--nsplits', dest='splits',
help='number of subjobs to create')
parser.add_option('-r', '--runner', dest='runner', default='subprocess',
help='who should run the subjobs (subprocess or condor)')
parser.add_option('-c', '--command', dest='command',
help='The command to run')
parser.add_option('-o', '--stdout', dest='stdout',
help='A file to store the stdout')
parser.add_option('-e', '--stderr', dest='stderr',
help='A file to store the stderr')
parser.add_option('-i', '--stdin', dest='stdin',
help='A file to store the stdin')
parser.add_option('-d', '--cmd_def', dest='cmd_def',
help='The command line definition')
return parser
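#an example invocation (a sketch, the command and file names are illustrative):
#  run_in_parallel.py -c "my_cmd -i seqs.txt" -o out.txt \
#      -d "[{'options': ('-i',), 'io': 'in', 'splitter': ''}]"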
def get_options():
'It returns a dict with the options'
parser = parse_options()
cmd_options = parser.parse_args()[0]
options = {}
if cmd_options.command is None:
raise parser.error('The command should be set')
else:
- options['cmd'] = cmd_options.command
+ options['cmd'] = cmd_options.command.split()
if cmd_options.stdout is not None:
options['stdout'] = open(cmd_options.stdout, 'w')
if cmd_options.stderr is not None:
options['stderr'] = open(cmd_options.stderr, 'w')
if cmd_options.stdin is not None:
options['stdin'] = open(cmd_options.stdin)
if cmd_options.runner == 'subprocess':
options['runner'] = None
elif cmd_options.runner == 'condor':
options['runner'] = CondorPopen
options['runner_conf'] = {'transfer_executable':False}
else:
parser.error('Allowable runners are: subprocess and condor')
if cmd_options.cmd_def is None:
options['cmd_def'] = []
else:
cmd_def = cmd_options.cmd_def
#it can be a file or an str
if os.path.exists(cmd_def):
cmd_def = open(cmd_def).read()
options['cmd_def'] = eval(cmd_def)
return options
def kill_process(signum=None, frame=None):
    'It kills the ongoing process, it has the signal handler signature'
if POPEN is not None:
POPEN.kill()
sys.exit(-1)
def set_signal_handlers():
    'It sets the handlers for the termination signals'
signal.signal(signal.SIGTERM, kill_process)
- signal.signal(signal.SIGKILL, kill_process)
- signal.signal(signal.SIGINT, kill_process)
+ signal.signal(signal.SIGABRT, kill_process)
+ signal.signal(signal.SIGINT, kill_process)
def main():
'It runs a command in parallel'
set_signal_handlers()
options = get_options()
global POPEN
POPEN = Popen(**options)
sys.exit(POPEN.wait())
if __name__ == '__main__':
main()
\ No newline at end of file
diff --git a/test/prunner_test.py b/test/prunner_test.py
index df3db46..574a584 100644
--- a/test/prunner_test.py
+++ b/test/prunner_test.py
@@ -1,166 +1,191 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
import os
from psubprocess import Popen
from psubprocess.streams import STDIN
from test_utils import create_test_binary
class PRunnerTest(unittest.TestCase):
    'It tests that we can parallelize processes'
@staticmethod
def test_file_in():
'It tests the most basic behaviour'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
in_file.close()
os.remove(bin)
@staticmethod
def test_job_no_in_stream():
        'It tests that a job with no in stream is run splits times'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola' * splits
assert open(stderr.name).read() == 'caracola' * splits
os.remove(bin)
@staticmethod
def test_stdin():
        'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_infile_outfile():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
@staticmethod
def test_retcode():
'It tests that we get the correct returncode'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-r', '20'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
        assert popen.wait() == 20 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
os.remove(bin)
@staticmethod
def test_infile_outfile_condor():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
from psubprocess import CondorPopen
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
runner=CondorPopen,
runner_conf={'transfer_executable':True})
        assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
+ @staticmethod
+ def test_stdin_real_splitter():
+        'It tests that stdin works as input with a real splitter'
+ bin = create_test_binary()
+
+ #with stdin
+ content = '>hola1\nhola2\n>hola3\nhola4\n>hola5\nhola6\n>hola7\nhola8\n'
+ content += '>hola9\nhola10|n'
+ stdin = NamedTemporaryFile()
+ stdin.write(content)
+ stdin.flush()
+
+ cmd = [bin]
+ cmd.extend(['-s'])
+ stdout = NamedTemporaryFile()
+ stderr = NamedTemporaryFile()
+ cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':'>'}]
+ popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
+ cmd_def=cmd_def)
+        assert popen.wait() == 0 #waits till it finishes and checks the retcode
+ assert open(stdout.name).read() == content
+ assert open(stderr.name).read() == ''
+ os.remove(bin)
+
+
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
\ No newline at end of file
|
JoseBlanca/psubprocess
|
c858aa4d61f8601dab84562e9f59daec30ebdc96
|
The package is named psubprocess
|
diff --git a/setup.py b/setup.py
index 522c88c..6370d93 100644
--- a/setup.py
+++ b/setup.py
@@ -1,20 +1,20 @@
'''
Created on 25/03/2009
@author: jose blanca
'''
from setuptools import setup
setup(
# basic package data
- name = "subprocess",
+ name = "psubprocess",
version = "0.0.1",
author='Jose Blanca, Peio Ziarsolo',
author_email='jblanca@btc.upv.es',
description='runs commands in parallel environments',
# package structure
packages=['psubprocess'],
package_dir={'':'.'},
requires=[],
scripts=['scripts/run_in_parallel.py']
)
|
JoseBlanca/psubprocess
|
e90951b425d1b02c8555337a892809eff43975a7
|
Added an script to run the commands in parallel
|
diff --git a/scripts/__init__.py b/scripts/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/scripts/run_in_parallel.py b/scripts/run_in_parallel.py
new file mode 100644
index 0000000..399fe56
--- /dev/null
+++ b/scripts/run_in_parallel.py
@@ -0,0 +1,102 @@
+'''
+Created on 21/07/2009
+
+@author: jose
+'''
+
+# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
+# This file is part of psubprocess.
+# psubprocess is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+
+# psubprocess is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
+
+from optparse import OptionParser
+import os.path, sys, signal
+
+from psubprocess import CondorPopen, Popen
+
+POPEN = None
+
+def parse_options():
+ 'It parses the command line arguments'
+ parser = OptionParser('usage: %prog -c "command"')
+ parser.add_option('-n', '--nsplits', dest='splits',
+ help='number of subjobs to create')
+ parser.add_option('-r', '--runner', dest='runner', default='subprocess',
+ help='who should run the subjobs (subprocess or condor)')
+ parser.add_option('-c', '--command', dest='command',
+ help='The command to run')
+ parser.add_option('-o', '--stdout', dest='stdout',
+ help='A file to store the stdout')
+ parser.add_option('-e', '--stderr', dest='stderr',
+ help='A file to store the stderr')
+ parser.add_option('-i', '--stdin', dest='stdin',
+ help='A file to store the stdin')
+ parser.add_option('-d', '--cmd_def', dest='cmd_def',
+ help='The command line definition')
+
+ return parser
+
+def get_options():
+ 'It returns a dict with the options'
+ parser = parse_options()
+ cmd_options = parser.parse_args()[0]
+ options = {}
+ if cmd_options.command is None:
+ raise parser.error('The command should be set')
+ else:
+ options['cmd'] = cmd_options.command
+ if cmd_options.stdout is not None:
+ options['stdout'] = open(cmd_options.stdout, 'w')
+ if cmd_options.stderr is not None:
+ options['stderr'] = open(cmd_options.stderr, 'w')
+ if cmd_options.stdin is not None:
+ options['stdin'] = open(cmd_options.stdin)
+ if cmd_options.runner == 'subprocess':
+ options['runner'] = None
+ elif cmd_options.runner == 'condor':
+ options['runner'] = CondorPopen
+ options['runner_conf'] = {'transfer_executable':False}
+ else:
+ parser.error('Allowable runners are: subprocess and condor')
+ if cmd_options.cmd_def is None:
+ options['cmd_def'] = []
+ else:
+ cmd_def = cmd_options.cmd_def
+ #it can be a file or an str
+ if os.path.exists(cmd_def):
+ cmd_def = open(cmd_def).read()
+ options['cmd_def'] = eval(cmd_def)
+ return options
+
+def kill_process(signum=None, frame=None):
+    'It kills the ongoing process, it has the signal handler signature'
+ if POPEN is not None:
+ POPEN.kill()
+ sys.exit(-1)
+
+def set_signal_handlers():
+    'It sets the handlers for the termination signals'
+ signal.signal(signal.SIGTERM, kill_process)
+ signal.signal(signal.SIGKILL, kill_process)
+ signal.signal(signal.SIGINT, kill_process)
+
+def main():
+ 'It runs a command in parallel'
+ set_signal_handlers()
+ options = get_options()
+ global POPEN
+ POPEN = Popen(**options)
+ sys.exit(POPEN.wait())
+
+if __name__ == '__main__':
+ main()
\ No newline at end of file
diff --git a/setup.py b/setup.py
new file mode 100644
index 0000000..522c88c
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,20 @@
+'''
+Created on 25/03/2009
+
+@author: jose blanca
+'''
+
+from setuptools import setup
+setup(
+ # basic package data
+ name = "subprocess",
+ version = "0.0.1",
+ author='Jose Blanca, Peio Ziarsolo',
+ author_email='jblanca@btc.upv.es',
+ description='runs commands in parallel environments',
+ # package structure
+ packages=['psubprocess'],
+ package_dir={'':'.'},
+ requires=[],
+ scripts=['scripts/run_in_parallel.py']
+)
|
JoseBlanca/psubprocess
|
8ae3f627a55b38e8486cc763286f3c5d4e0f409b
|
Now the parallel popen can kill all subjbos.
|
diff --git a/psubprocess/prunner.py b/psubprocess/prunner.py
index 4ce511d..8d4e6c5 100644
--- a/psubprocess/prunner.py
+++ b/psubprocess/prunner.py
@@ -61,513 +61,530 @@ def NamedTemporaryFile(dir=None, delete=False, suffix=''):
#python 2.6 this function should be removed
#pylint: disable-msg=C0103
#pylint: disable-msg=W0622
    #We want to mimic tempfile.NamedTemporaryFile
fpath = tempfile.mkstemp(dir=dir, suffix=suffix)[1]
return open(fpath, 'w')
def _calculate_divisions(num_items, splits):
'''It calculates how many items should be in every split to divide
the num_items into splits.
Not all splits will have an equal number of items, so it returns a tuple
with two tuples inside:
((num_fragments_1, num_items_1), (num_fragments_2, num_items_2))
splits = num_fragments_1 + num_fragments_2
num_items_1 = num_items_2 + 1
num_fragments_1 could be equal to 0.
This is the way to create the requested number of splits with sizes as
similar as possible.
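For example, _calculate_divisions(10, 4) returns ((2, 3), (2, 2)): two
splits with 3 items each plus two splits with 2 items each.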
'''
if splits >= num_items:
return ((0, 1), (splits, 1))
num_fragments1 = num_items % splits
num_fragments2 = splits - num_fragments1
num_items2 = num_items // splits
num_items1 = num_items2 + 1
res = ((num_fragments1, num_items1), (num_fragments2, num_items2))
return res
def _items_in_file(fhand, expression_kind, expression):
'''Given an fhand and an expression it yields the items cutting where the
line matches the expression'''
sofar = fhand.readline()
for line in fhand:
if ((expression_kind == 'str' and expression in line) or
(expression_kind == 're' and expression.search(line))):
yield sofar
sofar = line
else:
sofar += line
else:
#the last item
yield sofar
def _create_file_splitter_with_re(expression):
'''Given an expression it creates a file splitter.
The expression can be a regex or an str.
A new item in the file starts every time a line matches the
expression.
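For example, with the str expression '>' a new item starts at every line
that contains a '>'.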
'''
expression_kind = None
if isinstance(expression, str):
expression_kind = 'str'
else:
expression_kind = 're'
def splitter(file_, work_dirs):
'''It splits the given file into several splits.
Every split will be located in one of the work_dirs, although it is not
guaranteed to create as many splits as work dirs. If in the file there
are fewer items than work_dirs, some work_dirs will be left empty.
It returns a list with the fpaths or fhands for the split files.
file_ can be an fhand or an fname.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
#how many items are in the file? We assume that all files have the same
#number of items
nitems = 0
for line in open(fname, 'r'):
if ((expression_kind == 'str' and expression in line) or
(expression_kind == 're' and expression.search(line))):
nitems += 1
#how many splits are we going to create? and how many items will be in
#every split
#if there are more splits than items we create as many splits as items
if nsplits > nitems:
nsplits = nitems
(nsplits1, nitems1), (nsplits2, nitems2) = _calculate_divisions(nitems,
nsplits)
#we have to create nsplits1 files with nitems1 in it and nsplits2 files
#with nitems2 items in it
new_files = []
fhand = open(fname, 'r')
items = _items_in_file(fhand, expression_kind, expression)
splits_made = 0
for nsplits, nitems in ((nsplits1, nitems1), (nsplits2, nitems2)):
#we have to create nsplits files with nitems in it
#we don't need the split_index for anything
#pylint: disable-msg=W0612
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[splits_made]
ofh = NamedTemporaryFile(dir=work_dir.name, delete=False,
suffix=suffix)
for item_index in range(nitems):
ofh.write(items.next())
ofh.flush()
if file_is_str:
new_files.append(ofh.name)
ofh.close()
else:
new_files.append(ofh)
splits_made += 1
return new_files
return splitter
def _output_splitter(file_, work_dirs):
'''It creates one output file for every split.
Every split will be located in one of the work_dirs.
It returns a list with the fpaths for the new output files.
'''
#the file_ can be an fname or an fhand. which one is it?
file_is_str = None
if isinstance(file_, str):
fname = file_
file_is_str = True
else:
fname = file_.name
file_is_str = False
#how many splits do we want?
nsplits = len(work_dirs)
new_fpaths = []
#we have to create nsplits
for split_index in range(nsplits):
suffix = os.path.splitext(fname)[-1]
work_dir = work_dirs[split_index]
#we use delete=False because this temp file is in a temp dir that will
#be completely deleted. If we use delete=True we get an error because
#the file might be already deleted when its __del__ method is called
ofh = NamedTemporaryFile(dir=work_dir.name, suffix=suffix,
delete=False)
#what do we need, the fname or the fhand?
if file_is_str:
#it will be deleted because we just need the name in the temporary
#directory. tempfile.mktemp would be better for this use, but it is
#deprecated
new_fpaths.append(ofh.name)
ofh.close()
else:
new_fpaths.append(ofh)
return new_fpaths
def default_cat_joiner(out_file_, in_files_):
'''It joins the given in files into the given out file.
It works with fnames or fhands.
'''
#are we working with fhands or fnames?
file_is_str = None
if isinstance(out_file_, str):
file_is_str = True
else:
file_is_str = False
#the output fhand
if file_is_str:
out_fhand = open(out_file_, 'w')
else:
out_fhand = open(out_file_.name, 'w')
for in_file_ in in_files_:
#the input fhand
if file_is_str:
in_fhand = open(in_file_, 'r')
else:
in_fhand = open(in_file_.name, 'r')
for line in in_fhand:
out_fhand.write(line)
in_fhand.close()
out_fhand.close()
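#A minimal usage sketch of the Popen class below (hypothetical binary and
#file names; the cmd_def follows the convention used in test/prunner_test.py):
#    cmd = ['my_bin', '-i', 'input.txt']
#    cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter': ''}]
#    popen = Popen(cmd, cmd_def=cmd_def)
#    retcode = popen.wait()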
class Popen(object):
'It parallelizes the given process, dividing it into subprocesses.'
def __init__(self, cmd, cmd_def=None, runner=None, runner_conf=None,
stdout=None, stderr=None, stdin=None, splits=None):
'''
Constructor
'''
#we want the same interface as subprocess.Popen
#pylint: disable-msg=R0913
self._retcode = None
self._outputs_collected = False
#some defaults
#if the runner is not given, we use subprocess.Popen
if runner is None:
runner = StdPopen
if cmd_def is None:
if stdin is not None:
raise ValueError('No cmd_def given but stdin present')
cmd_def = []
#if the number of splits is not given we calculate them
if splits is None:
splits = self.default_splits(runner)
#we need a work dir to create the temporary split files
self._work_dir = NamedTemporaryDir()
#the main job
self._job = {'cmd': cmd, 'work_dir': self._work_dir}
#we create the new subjobs
self._jobs = self._split_jobs(cmd, cmd_def, splits, self._work_dir,
stdout=stdout, stderr=stderr, stdin=stdin)
#launch every subjob
self._launch_jobs(self._jobs, runner=runner, runner_conf=runner_conf)
@staticmethod
def _launch_jobs(jobs, runner, runner_conf):
'It launches all jobs and adds their popen instances to them'
jobs['popens'] = []
cwd = os.getcwd()
for job_index, (cmd, streams, work_dir) in enumerate(zip(jobs['cmds'],
jobs['streams'], jobs['work_dirs'])):
#the std stream can be present or not
stdin, stdout, stderr = None, None, None
if jobs['stdins']:
stdin = jobs['stdins'][job_index]
if jobs['stdouts']:
stdout = jobs['stdouts'][job_index]
if jobs['stderrs']:
stderr = jobs['stderrs'][job_index]
#for every job we go to its dir to launch it
os.chdir(work_dir.name)
#we have to be sure that stdin is open for read
if stdin:
stdin = open(stdin.name)
#we launch the job
if runner == StdPopen:
popen = runner(cmd, stdout=stdout, stderr=stderr, stdin=stdin)
else:
popen = runner(cmd, cmd_def=streams, stdout=stdout,
stderr=stderr, stdin=stdin,
runner_conf=runner_conf)
#we record its popen instance
jobs['popens'].append(popen)
os.chdir(cwd)
def _split_jobs(self, cmd, cmd_def, splits, work_dir, stdout=None,
stderr=None, stdin=None,):
'''It creates one job for every split.
Every job has a cmd, work_dir and streams; this info is in the jobs dict
with the keys: cmds, work_dirs, streams
'''
#too many arguments, but similar interface to our __init__
#pylint: disable-msg=R0913
#pylint: disable-msg=R0914
#the main job streams
main_job_streams = get_streams_from_cmd(cmd, cmd_def, stdout=stdout,
stderr=stderr, stdin=stdin)
self._job['streams'] = main_job_streams
streams, work_dirs = self._split_streams(main_job_streams, splits,
work_dir.name)
#now we have to create a new cmd with the right in and out streams for
#every split
cmds, stdins, stdouts, stderrs = self._create_cmds(cmd, streams)
jobs = {'cmds': cmds, 'work_dirs': work_dirs, 'streams': streams,
'stdins':stdins, 'stdouts':stdouts, 'stderrs':stderrs}
return jobs
@staticmethod
def _create_cmds(cmd, streams):
'''Given a base cmd and a streams list it creates one modified cmd for
every stream'''
#the streams is a list of streams
streamss = streams
cmds = []
stdouts = []
stdins = []
stderrs = []
for streams in streamss:
new_cmd = copy.deepcopy(cmd)
for stream in streams:
#is the stream in the cmd or is it a std one?
if 'cmd_location' in stream:
location = stream['cmd_location']
else:
location = None
if location is None:
continue
elif location == STDIN:
stdins.append(stream['fhand'])
elif location == STDOUT:
stdouts.append(stream['fhand'])
elif location == STDERR:
stderrs.append(stream['fhand'])
else:
#we modify the cmd[location] with the new file
#we use the fname with no path because the jobs will be
#launched from the job working dir
location = stream['cmd_location']
fpath = stream['fname']
fname = os.path.split(fpath)[-1]
new_cmd[location] = fname
cmds.append(new_cmd)
return cmds, stdins, stdouts, stderrs
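#For example (hypothetical values): if cmd is ['my_bin', '-i', 'in.txt'] and
#a stream has cmd_location 2 and fname '/tmp/split_0/in_0.txt', the new cmd
#for that split becomes ['my_bin', '-i', 'in_0.txt']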
@staticmethod
def _split_streams(streams, splits, work_dir):
'''Given a list of streams it splits every stream into the given number of
splits'''
#which are the input and output streams?
input_stream_indexes = []
output_stream_indexes = []
for index, stream in enumerate(streams):
if stream['io'] == 'in':
input_stream_indexes.append(index)
elif stream['io'] == 'out':
output_stream_indexes.append(index)
#we create one work dir for every split
work_dirs = []
for index in range(splits):
work_dirs.append(NamedTemporaryDir(dir=work_dir))
#we have to do the input files first because they could change the
#number of splits
#we split the input stream files into several splits
#we have to sort the input_stream_indexes, first we should take the ones
#that have an input file to be split
def should_be_split(stream):
'It returns True when the stream has a file that should be split'
#maybe it shouldn't be split
if 'special' in stream and 'no_split' in stream['special']:
return False
#maybe it has no file to split
if (('fhand' in stream and stream['fhand'] is not None) or
('fname' in stream and stream['fname'] is not None)):
return True
return False
def to_be_split_first(index1, index2):
'It sorts the stream indexes, the ones to be split go first'
return (int(should_be_split(streams[index2])) -
int(should_be_split(streams[index1])))
input_stream_indexes = sorted(input_stream_indexes, to_be_split_first)
first = True
split_files = {}
for index in input_stream_indexes:
stream = streams[index]
#splitter
if 'splitter' not in stream:
msg = 'A splitter should be provided for every input stream, '
msg += 'missing for: ' + str(stream)
raise ValueError(msg)
splitter = stream['splitter']
#the splitter can be a re, in that case we create the function
if '__call__' not in dir(splitter):
splitter = _create_file_splitter_with_re(splitter)
#we split the input files in the splits, every file will be in one
#of the given work_dirs
#the stream can have fname or fhands
if 'fhand' in stream:
file_ = stream['fhand']
elif 'fname' in stream:
file_ = stream['fname']
else:
file_ = None
if file_ is None:
#the stream might have no file associated
files = [None] * len(work_dirs)
else:
files = splitter(file_, work_dirs)
#the files len can be different from splits, in that case we modify
#the splits or we raise an error
if len(files) != splits:
if first:
splits = len(files)
#we discard the empty temporary dirs
work_dirs = work_dirs[0:splits]
else:
msg = 'Not all input files were divided in the same number'
msg += ' of splits'
raise RuntimeError(msg)
first = False
split_files[index] = files #a list of files for every in stream
#we create the output stream files, one for every split
for index in output_stream_indexes:
stream = streams[index]
#for the output we just create the new names, but we don't split
#any file
if 'fhand' in stream:
fname = stream['fhand']
else:
fname = stream['fname']
files = _output_splitter(fname, work_dirs)
split_files[index] = files #a list of files for every out stream
new_streamss = []
#we need one new stream for every split
for split_index in range(splits):
#the streams for one job
new_streams = []
for stream_index, stream in enumerate(streams):
#we duplicate the original stream
new_stream = stream.copy()
#we set the new files
if 'fhand' in stream:
new_stream['fhand'] = split_files[stream_index][split_index]
else:
new_stream['fname'] = split_files[stream_index][split_index]
new_streams.append(new_stream)
new_streamss.append(new_streams)
return new_streamss, work_dirs
@staticmethod
def default_splits(runner):
'Given a runner it returns the number of splits recommended by default'
if runner is StdPopen:
#the number of processors
return os.sysconf('SC_NPROCESSORS_ONLN')
else:
module = runner.__module__.split('.')[-1]
module = RUNNER_MODULES[module]
return module.get_default_splits()
def wait(self):
'It waits for all the jobs to finish'
#we wait till all jobs finish
for job in self._jobs['popens']:
job.wait()
#now that all jobs have finished we join the results
self._collect_output_streams()
#we join now the retcodes
self._collect_retcodes()
return self._retcode
def _collect_output_streams(self):
'''It joins all the output streams into the output files and it removes
the work dirs'''
if self._outputs_collected:
return
#for each file in the main job cmd
for stream_index, stream in enumerate(self._job['streams']):
if stream['io'] == 'in':
#now we're dealing only with output files
continue
#every subjob has a part to join for this output stream
part_out_fnames = []
for streams in self._jobs['streams']:
this_stream = streams[stream_index]
if 'fname' in this_stream:
part_out_fnames.append(this_stream['fname'])
else:
part_out_fnames.append(this_stream['fhand'])
#we need a function to join this stream
if 'joiner' in stream:
joiner = stream['joiner']
else:
joiner = default_cat_joiner
if 'fname' in stream:
out_file = stream['fname']
else:
out_file = stream['fhand']
joiner(out_file, part_out_fnames)
#now we can delete the tempdirs
for work_dir in self._jobs['work_dirs']:
work_dir.close()
self._outputs_collected = True
def _collect_retcodes(self):
'It gathers the retcodes from all processes'
retcode = None
for popen in self._jobs['popens']:
job_retcode = popen.returncode
if job_retcode is None:
#if some job is yet to be finished the main job is not finished
retcode = None
break
elif job_retcode != 0:
#if one job has finished badly the main job is badly finished
retcode = job_retcode
break
#it should be 0 at this point
retcode = job_retcode
#if the retcode is not None the jobs have finished and we have to
#collect the outputs
if retcode is not None:
self._collect_output_streams()
self._retcode = retcode
return retcode
def _get_returncode(self):
'It returns the return code'
if self._retcode is None:
self._collect_retcodes()
return self._retcode
- returncode = property(_get_returncode)
\ No newline at end of file
+ returncode = property(_get_returncode)
+
+ def kill(self):
+ 'It kills all jobs'
+ if 'popens' not in self._jobs:
+ return
+ for popen in self._jobs['popens']:
+ popen.kill()
+ del self._jobs['popens']
+
+ def terminate(self):
+ 'It terminates all jobs'
+ if 'popens' not in self._jobs:
+ return
+ for popen in self._jobs['popens']:
+ popen.terminate()
+ del self._jobs['popens']
+
|
JoseBlanca/psubprocess
|
707f83bb021652579aa548ecd0fb1d3e57c8a82f
|
Added a test for returncode in prunner
|
diff --git a/test/prunner_test.py b/test/prunner_test.py
index 7330e85..df3db46 100644
--- a/test/prunner_test.py
+++ b/test/prunner_test.py
@@ -1,154 +1,166 @@
'''
Created on 16/07/2009
@author: jose
'''
# Copyright 2009 Jose Blanca, Peio Ziarsolo, COMAV-Univ. Politecnica Valencia
# This file is part of psubprocess.
# psubprocess is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# psubprocess is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with psubprocess. If not, see <http://www.gnu.org/licenses/>.
import unittest
from tempfile import NamedTemporaryFile
import os
from psubprocess import Popen
from psubprocess.streams import STDIN
from test_utils import create_test_binary
class PRunnerTest(unittest.TestCase):
'It tests that we can parallelize processes'
@staticmethod
def test_file_in():
'It tests the most basic behaviour'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
in_file.write('hola')
in_file.flush()
cmd = [bin]
cmd.extend(['-i', in_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola'
in_file.close()
os.remove(bin)
@staticmethod
def test_job_no_in_stream():
'It tests that a job with no in stream is run splits times'
bin = create_test_binary()
cmd = [bin]
cmd.extend(['-o', 'hola', '-e', 'caracola'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''}]
splits = 4
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
splits=splits)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == 'hola' * splits
assert open(stderr.name).read() == 'caracola' * splits
os.remove(bin)
@staticmethod
def test_stdin():
'It tests that stdin works as input'
bin = create_test_binary()
#with stdin
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
stdin = NamedTemporaryFile()
stdin.write(content)
stdin.flush()
cmd = [bin]
cmd.extend(['-s'])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options':STDIN, 'io': 'in', 'splitter':''}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, stdin=stdin,
cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert open(stdout.name).read() == content
assert open(stderr.name).read() == ''
os.remove(bin)
@staticmethod
def test_infile_outfile():
'It tests that we can set an input file and an output file'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def)
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
+ @staticmethod
+ def test_retcode():
+ 'It tests that we get the correct returncode'
+ bin = create_test_binary()
+ cmd = [bin]
+ cmd.extend(['-r', '20'])
+ stdout = NamedTemporaryFile()
+ stderr = NamedTemporaryFile()
+ popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=[])
+ assert popen.wait() == 20 #waits till it finishes and checks the retcode
+ assert not open(stdout.name).read()
+ assert not open(stderr.name).read()
+ os.remove(bin)
+
@staticmethod
def test_infile_outfile_condor():
'It tests input and output files with the condor runner'
bin = create_test_binary()
#with infile
in_file = NamedTemporaryFile()
content = 'hola1\nhola2\nhola3\nhola4\nhola5\nhola6\nhola7\nhola8\n'
content += 'hola9\nhola10|n'
in_file.write(content)
in_file.flush()
out_file = NamedTemporaryFile()
cmd = [bin]
cmd.extend(['-i', in_file.name, '-t', out_file.name])
stdout = NamedTemporaryFile()
stderr = NamedTemporaryFile()
cmd_def = [{'options': ('-i', '--input'), 'io': 'in', 'splitter':''},
{'options': ('-t', '--output'), 'io': 'out'}]
from psubprocess import CondorPopen
popen = Popen(cmd, stdout=stdout, stderr=stderr, cmd_def=cmd_def,
runner=CondorPopen,
runner_conf={'transfer_executable':True})
assert popen.wait() == 0 #waits till it finishes and checks the retcode
assert not open(stdout.name).read()
assert not open(stderr.name).read()
assert open(out_file.name).read() == content
in_file.close()
os.remove(bin)
- #TODO test retcode
-
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
\ No newline at end of file
|