In development: a major revision of NumPy intended to better support a range of data integration and processing use cases.
pandas (Python package)
A Python library for analysis of relational/tabular data, built on NumPy and inspired by R’s data.frame concept. Functionality includes support for missing data, insertion and deletion of columns, group-by/aggregation, merging, joining, reshaping, and pivoting.
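A minimal sketch of the group-by/aggregation, merging and missing-data handling described above (the tables and column names are illustrative, not from any particular dataset):

```python
import pandas as pd
import numpy as np

# Two small tables sharing a 'city' key (illustrative data).
sales = pd.DataFrame({
    "city": ["Oslo", "Oslo", "Bergen", "Bergen"],
    "amount": [100.0, np.nan, 80.0, 120.0],  # NaN marks missing data
})
regions = pd.DataFrame({
    "city": ["Oslo", "Bergen"],
    "region": ["East", "West"],
})

# Group by/aggregation: NaN values are skipped by default.
totals = sales.groupby("city", as_index=False)["amount"].sum()

# Merging/joining on the shared key.
merged = totals.merge(regions, on="city")
```

After the merge, `merged` holds one row per city with its summed amount and its region, the NaN having been ignored by the aggregation.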
tabular (Python package)
A Python package for working with tabular data. The tabarray class supports both row-oriented and column-oriented access to data, including selection and filtering of rows/columns, matrix math (tabular extends NumPy), sort, aggregate, join, transpose, comparisons.
Does require a uniform datatype for each column, and all data is handled in memory.
datarray (Python package)
Datarray provides a subclass of NumPy ndarray that supports:
- individual dimensions (axes) being labeled with meaningful descriptions
- labeled ‘ticks’ along each axis
- indexing and slicing by named axis
- indexing on any axis with the tick labels instead of only integers
- reduction operations (like .sum, .mean, etc.) that accept named axis arguments instead of only integer indices
pydataframe (Python package)
An implementation of an almost R-like DataFrame object.
larry (Python package)
The main class of the la package is a labeled array, larry. A larry consists of data and labels: the data is stored as a NumPy array and the labels as a list of lists (one list per dimension). larry has built-in methods such as ranking, merge, shuffle, move_sum, zscore, demean and lag, as well as typical NumPy methods like sum, max, std, sign and clip. NaNs are treated as missing data.
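The “NaNs as missing data” convention can be illustrated with plain NumPy’s nan-aware reductions (larry’s own API is not shown here; this is just the underlying idea):

```python
import numpy as np

# A 1-D array with NaN standing in for a missing observation.
prices = np.array([10.0, np.nan, 12.0, 11.0])

# Ordinary reductions propagate NaN...
total = np.sum(prices)          # nan

# ...while the nan-aware variants skip missing values.
nan_total = np.nansum(prices)   # 33.0
nan_mean = np.nanmean(prices)   # 11.0
```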
picalo (Python package)
A GUI application and Python library primarily aimed at data analysis for auditors and fraud examiners, but with a number of general-purpose data mining and transformation capabilities such as filter, join, transpose and crosstable/pivot.
Does not rely on streaming/iterative processing of data, but has a persistence capability based on ZODB for handling larger datasets.
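A crosstable of the kind mentioned above can be sketched in stdlib Python: count the rows falling under each (row key, column key) pair. The data and key names are illustrative:

```python
from collections import Counter

# Rows of (department, status) pairs -- illustrative audit-style data.
rows = [
    ("sales", "approved"), ("sales", "flagged"),
    ("it", "approved"), ("sales", "approved"),
]

# Count occurrences of each (row-key, column-key) pair.
counts = Counter(rows)

# Lay the counts out as a nested dict: one row per department,
# one column per status, zero where a pair never occurs.
depts = sorted({d for d, _ in rows})
statuses = sorted({s for _, s in rows})
table = {d: {s: counts[(d, s)] for s in statuses} for d in depts}
```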
csvkit (Python package)
A set of command-line utilities for transforming tabular data from CSV (delimited) files. Includes csvclean, csvcut, csvjoin, csvsort, csvstack, csvstat, csvgrep, csvlook.
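The kind of transformation csvcut and csvsort perform (project columns, then order rows) can be sketched with the stdlib csv module; the data and column names below are illustrative:

```python
import csv
import io

# Illustrative CSV input (in practice this would come from a file).
raw = "name,age,city\nBob,34,Bergen\nAda,36,Oslo\nCarol,29,Oslo\n"

reader = csv.DictReader(io.StringIO(raw))
rows = list(reader)

# csvcut-style step: keep only the 'name' and 'age' columns.
cut = [{"name": r["name"], "age": r["age"]} for r in rows]

# csvsort-style step: order rows by the 'age' column, numerically.
ordered = sorted(cut, key=lambda r: int(r["age"]))
```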
csvutils (Python package)
python-pipeline (Python package)
A web application for exploring, filtering, cleaning and transforming a table of data, with some excellent functionality for finding and fixing problems in data. It can join two tables, but generally works on one table at a time. There are some question marks over its ability to handle larger datasets.
Has an extension capability; two third-party extensions were known at the time of writing, including a stats extension.
A web application for exploring, transforming and cleaning tabular data, in a similar vein to Google Refine but with a strong focus on usability and more capabilities for transforming tables, including folding/unfolding (similar to melt/cast in R’s reshape package) and cross-tabulation.
Currently a client-side-only web application and not available for download. There is also a Python library providing the data transformation functions found in the GUI. The research paper has a good discussion of data transformation and quality issues, especially with respect to tool usability.
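Folding and unfolding of the kind described above correspond to pandas’ melt and pivot; a minimal sketch with illustrative column names:

```python
import pandas as pd

# Wide layout: one column per year.
wide = pd.DataFrame({
    "city": ["Oslo", "Bergen"],
    "2010": [10, 8],
    "2011": [12, 9],
})

# Fold (melt): one row per (city, year) pair.
folded = wide.melt(id_vars="city", var_name="year", value_name="count")

# Unfold (cast/pivot): back to one column per year.
unfolded = folded.pivot(index="city", columns="year", values="count")
```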
Pentaho Data Integration (a.k.a. Kettle)
A data integration platform, where ETL components are web resources with a RESTful interface. Standard components for transforms like filter, join and sort.
Flat File Checker (FlaFi)
North Concepts Data Pipeline
SAS Clinical Data Integration
R Reshape Package
pygrametl (Python package)
etlpy (Python package)
Looks abandoned since 2009, but there is some code.
Google Fusion Tables
pivottable (Python package)
PrettyTable (Python package)
PyTables (Python package)
- http://technet.microsoft.com/en-us/library/ee176874.aspx - Import-Csv
- http://technet.microsoft.com/en-us/library/ee176955.aspx - Select-Object
- http://technet.microsoft.com/en-us/library/ee176968.aspx - Sort-Object
- http://technet.microsoft.com/en-us/library/ee176864.aspx - Group-Object
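The four cmdlets linked above (Import-Csv, Select-Object, Sort-Object, Group-Object) correspond roughly to this stdlib Python sketch; the data and column names are illustrative:

```python
import csv
import io
from itertools import groupby

# Import-Csv equivalent: parse CSV into dictionaries.
raw = "name,team,score\nAda,red,3\nBob,blue,5\nCarol,red,4\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Select-Object equivalent: project a subset of columns.
selected = [{"team": r["team"], "score": int(r["score"])} for r in rows]

# Sort-Object equivalent: order rows by a key.
ordered = sorted(selected, key=lambda r: r["team"])

# Group-Object equivalent: group consecutive rows by that key
# (itertools.groupby requires the input to be sorted on the key first).
groups = {team: [r["score"] for r in grp]
          for team, grp in groupby(ordered, key=lambda r: r["team"])}
```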
Data Science Toolkit
Doesn’t have any ETL functionality, but possibly (enormously) relevant to exploratory development of a transformation pipeline, because it could let you avoid rerunning the whole pipeline every time you add a new step.
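One way to get that effect is to cache each step’s output so that earlier steps are not recomputed when a new step is appended. A minimal in-memory sketch using functools.lru_cache (the step functions are hypothetical):

```python
from functools import lru_cache

# Hypothetical pipeline steps; each returns an immutable tuple of rows
# so the result can be cached safely.
@lru_cache(maxsize=None)
def load():
    # Imagine an expensive extract step here; it runs only once,
    # however many downstream steps call it.
    return (3, 1, 2)

@lru_cache(maxsize=None)
def cleaned():
    return tuple(sorted(load()))

# A newly added step reuses the cached results of the earlier ones
# instead of re-running the whole pipeline.
def doubled():
    return tuple(x * 2 for x in cleaned())
```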
Articles, Blogs, Other
- http://www.hanselman.com/blog/ParsingCSVsAndPoorMansWebLogAnalysisWithPowerShell.aspx - nice example of a data transformation problem, done in PowerShell
- http://wesmckinney.com/blog/?p=8 - on grouping with pandas