Compare commits

...

845 Commits

Author SHA1 Message Date
Ted Nyman
8c9ba2214a Bump to 2.10.5 2013-12-15 12:21:40 -08:00
Ted Nyman
8ba773127c Drop incorrect interpreter
Fixes 1.8.7 issues. cc @arfon
2013-12-15 12:20:10 -08:00
Ted Nyman
c598c9717d Add ace mode 2013-12-15 00:09:54 -08:00
Ted Nyman
ef41a1ac67 Merge pull request #831 from ethanwhite/master
Added RMarkdown to the list of languages
2013-12-15 00:07:35 -08:00
Ted Nyman
f6c4e39dbc Merge pull request #835 from hkdobrev/jshintrc
Add `.jshintrc` to JSON filenames
2013-12-14 18:28:21 -08:00
Ted Nyman
7c636c4f65 Merge pull request #836 from github/debug
Nicer debug factoring
2013-12-14 15:25:33 -08:00
Ted Nyman
6a8de63d2d Nicer debug factoring 2013-12-14 15:24:26 -08:00
Haralan Dobrev
107fee8859 Add .jshintrc to JSON filenames
The JSHint tool for linting JavaScript uses a `.jshintrc` configuration file.
It is in JSON format.

Example: https://github.com/jquery/jquery/blob/master/.jshintrc
2013-12-14 16:23:22 +02:00
Ethan White
1cb1705f8e Added RMarkdown to the list of languages 2013-12-12 14:55:52 -05:00
Ted Nyman
e0c1a84821 2.10.4 2013-12-12 00:56:12 -08:00
Ted Nyman
b7249b671f Update samples 2013-12-12 00:55:49 -08:00
Ted Nyman
f5e86bc691 Add a .pod sample 2013-12-12 00:55:03 -08:00
Ted Nyman
109841ceb1 2.10.3 2013-12-11 23:30:28 -08:00
Ted Nyman
1b96f87888 Update samples 2013-12-11 15:13:27 -08:00
Ted Nyman
4d40cab954 Add .sc sample 2013-12-11 15:06:14 -08:00
Ted Nyman
4533b9baaa Merge pull request #822 from akre54/stylus-support
Add stylus support
2013-12-11 15:03:42 -08:00
Adam Krebs
5b35f92bfe remove accidental sinatra sample. copypasta.... 2013-12-11 18:01:40 -05:00
Ted Nyman
17d54d61b4 Add .sc extension for Scala 2013-12-11 14:57:47 -08:00
Ted Nyman
2a867c9c7f Merge pull request #823 from github/pod
Add separate entry for Pod format
2013-12-10 07:54:25 -08:00
Ted Nyman
5a4bbf42c1 Keep .pod extension for now 2013-12-10 07:53:51 -08:00
Ted Nyman
087ce10f12 Merge pull request #824 from github/wrap-tex
Wrap .tex
2013-12-10 07:47:49 -08:00
Brandon Keepers
27092191a8 Wrap .tex 2013-12-10 10:35:28 -05:00
Brandon Keepers
bc93a99864 update test by type 2013-12-10 10:29:40 -05:00
Brandon Keepers
2e4fbe3430 Create separate entry for .pod 2013-12-10 10:26:55 -05:00
Adam Krebs
64f77293e8 set stylus lexer to be text only for now 2013-12-10 09:40:06 -05:00
Adam Krebs
9a42628577 add more demos 2013-12-09 22:56:04 -05:00
Adam Krebs
fc9bc8b9e1 Add stylus support 2013-12-09 22:42:57 -05:00
Ted Nyman
2180c11dc6 Bump to 2.10.2 2013-12-08 18:40:14 -08:00
Ted Nyman
11207283c8 Merge pull request #819 from github/tell-me-a-story-of-prose
Explicitly mention prose types as `prose`
2013-12-08 18:36:18 -08:00
Garen Torikian
8552ec35b3 Cleanup languages file to fix tests 2013-12-08 16:41:56 -08:00
Garen Torikian
5fdc2e12bf Alphabetize RST correctly 2013-12-08 16:33:59 -08:00
Garen Torikian
37c8a94369 Include RST in the languages list 2013-12-08 16:19:19 -08:00
Garen Torikian
7d17d69c1b Set Textile as a prose language 2013-12-08 16:19:08 -08:00
Garen Torikian
ee519aeb4b Add helper method to retrieve languages by type 2013-12-08 16:12:03 -08:00
Garen Torikian
a825a013d6 Explicitly mention prose types within languages.yml 2013-12-08 16:11:47 -08:00
Ted Nyman
b4906fc3b8 Bit of docs 2013-12-08 15:24:53 -08:00
Ted Nyman
fafeead5dc Merge pull request #818 from Flyingmana/patch-1
recognize composer.lock as generated file
2013-12-08 15:23:19 -08:00
Garen Torikian
5c602d0a4e Revert "Revert "Merge pull request #695 from github/detect-prose"" 2013-12-08 13:51:27 -08:00
Daniel Fahlke
96084fa59a recognize composer.lock as generated file 2013-12-08 21:31:29 +01:00
Daniel Fahlke
c8eeda6c8a add generated test and sample for composer.lock 2013-12-08 21:22:50 +01:00
Ted Nyman
2315cdb993 Merge pull request #816 from sgallagher/master
Add .lmi to Python extensions
2013-12-06 22:28:05 -08:00
Ted Nyman
2245174d28 2.10.1 2013-12-06 22:10:03 -08:00
Ted Nyman
c089b3f28f Update samples 2013-12-06 22:09:29 -08:00
Ted Nyman
7e178cc416 Place guards, checks for multiline shell hacks 2013-12-06 22:04:40 -08:00
Ted Nyman
8603760ebe Merge branch 'master' into more-687 2013-12-06 20:32:22 -08:00
Stephen Gallagher
5bbffb00f5 Add .lmi to Python extensions
The OpenLMI project provides a scripting environment based on
Python (wrapping the Python interpreter with the 'lmishell'
command). Python scripts intended to be executed via lmishell are
conventionally given the suffix .lmi to identify them. Since the
syntax is identical to that of Python, it would be best to
identify it that way.
2013-12-06 09:40:10 -05:00
Ted Nyman
4476a23f5a Merge pull request #813 from computmaxer/brightscript_support
Adding support for the Brightscript language
2013-12-05 12:01:58 -08:00
Max Peterson
834f37810b Merge branch 'master' into brightscript_support
Conflicts:
	lib/linguist/samples.json
2013-12-05 13:02:00 -06:00
Ted Nyman
7e9bc26796 Merge pull request #740 from danluu/vhdl_extensions
Add common VHDL file extensions
2013-12-04 15:49:16 -08:00
Ted Nyman
f83f226edc Merge pull request #812 from Giacom/patch-1
Updated the DM language to use the C++ lexer.
2013-12-04 13:45:40 -08:00
Giacom
a04b9dd7cd Updated the DM language to use the C++ lexer. 2013-12-04 12:07:46 +00:00
Ted Nyman
de636f1c0b Colors for agda and tex 2013-12-04 02:08:58 -08:00
Ted Nyman
283cc3a975 2.10.0 2013-12-03 21:12:12 -08:00
Ted Nyman
23af754194 Merge pull request #810 from github/bump-pygments
Bump pygments
2013-12-03 21:10:51 -08:00
Charlie Somerville
50f4050444 Merge pull request #809 from github/languages.json
Prefer to load from languages.json if it exists
2013-12-03 21:07:44 -08:00
Charlie Somerville
bf11900bc9 prefer to load from languages.json if it exists 2013-12-04 15:58:34 +11:00
Ted Nyman
61b8a8969f 2.9.9 2013-12-03 19:21:32 -08:00
Ted Nyman
0fb7017add Bump pygments.rb to 0.5.4 2013-12-03 19:17:32 -08:00
Charlie Somerville
4a5165ad7f Merge pull request #807 from github/update-escape_utils-dep
Require escape_utils >= 0.3.1
2013-12-02 21:47:17 -08:00
Charlie Somerville
017c6fd3f2 force escape_utils 0.3.2 on ruby < 1.9.3 2013-12-03 16:42:18 +11:00
Charlie Somerville
3887acd915 require escape_utils >= 0.3.1 2013-12-03 16:15:06 +11:00
Charlie Somerville
a80bf9e024 Merge pull request #806 from github/use-json-for-loading-samples
Use JSON instead of YAML for loading samples.json
2013-12-02 21:00:14 -08:00
Charlie Somerville
27c9774d1b prefer JSON, but fall back to YAML if JSON isn't available 2013-12-03 15:55:25 +11:00
Charlie Somerville
10cadb8725 use JSON instead of YAML for loading samples.json 2013-12-03 15:51:24 +11:00
Ted Nyman
9a5d52e460 Update samples 2013-11-24 10:59:52 -08:00
Ted Nyman
a8ae3d3ae5 Just the extensions for now 2013-11-24 10:58:47 -08:00
Ted Nyman
9bf1b5867a Merge pull request #794 from natcl/patch-1
Changed primary extension for Max
2013-11-24 10:56:30 -08:00
Nathanaël Lécaudé
f64a589e98 Moved the Max examples to the Max folder 2013-11-24 13:54:10 -05:00
Nathanaël Lécaudé
72026d3a3d Changed primary extension for Max
Changed primary extension for Max to .maxpat 
.mxt is a legacy file format.
2013-11-24 13:36:26 -05:00
Ted Nyman
3a8651e31f Merge pull request #790 from hkdobrev/composer-lock
Add composer.lock to JSON filenames
2013-11-23 16:39:52 -08:00
Ted Nyman
953768641c Merge pull request #791 from mikepurvis/patch-1
Add five new extensions to XML, YAML in support of ROS usage.
2013-11-23 16:36:57 -08:00
Mike Purvis
b59d80b00c Add five new extensions to XML, YAML in support of ROS usage.
These extensions are in common use in packages that are part of ROS, the Robot Operating System.

[urdf](http://wiki.ros.org/urdf), [srdf](http://wiki.ros.org/srdf): These are Robot Description Files, XML documents which describe the physical realities of a robotics platform, for the purposes of consumption by common libraries.

[xacro](http://wiki.ros.org/xacro) is for input files to the XML macro processor xacro, which is used in ROS to output URDF and SRDF files.

[launch](http://wiki.ros.org/roslaunch/XML): Documents which describe sets of ROS nodes to launch together.

[rviz](https://github.com/ros-visualization/rviz/blob/hydro-devel/default.rviz): YAML configuration files belonging to the [rviz](http://wiki.ros.org/rviz#Overview) utility.

Each one has been in use for several years in various ROS-related applications; lots of examples should be apparent in orgs like ros, ros-drivers, ros-visualization, pr2, turtlebot, husky, etc.

Thanks for your consideration!
2013-11-23 19:31:05 -05:00
Haralan Dobrev
56dec42c70 Add composer.lock to JSON filenames
[Composer](http://getcomposer.org) uses one configuration file (composer.json) and
one lock file (composer.lock). They both use valid JSON.

Example: https://github.com/OpenBuildings/jam/blob/master/composer.lock
2013-11-23 13:03:22 +02:00
Max Peterson
c88585cffb Adding support for the Brightscript language, with samples. 2013-11-21 19:26:44 -06:00
Ted Nyman
46779da3b5 Check line length for minified for now 2013-11-20 09:33:25 -08:00
Ted Nyman
654050a459 Minor gemspec edits 2013-11-19 17:44:47 -08:00
Ted Nyman
fe9f186b13 Merge pull request #782 from liluo/dev
fix typo
2013-11-17 13:41:16 -08:00
liluo
01616ef54e fix typo 2013-11-17 20:08:42 +08:00
Ted Nyman
9fd802a208 Regenerate samples 2013-11-16 12:33:51 -08:00
Ted Nyman
86e0b94700 Merge master 2013-11-16 12:33:18 -08:00
Ted Nyman
6e4e5e78ad Regenerate samples.json 2013-11-16 12:29:25 -08:00
Ted Nyman
183c280263 Better lexer 2013-11-16 12:28:42 -08:00
Ted Nyman
cb0b3a688f Merge pull request #771 from GreatEmerald/master
Add initial UnrealScript support
2013-11-16 12:24:29 -08:00
Ted Nyman
4f656c200b Minor docs/naming 2013-11-15 18:42:53 -08:00
Ted Nyman
791d9eed41 Merge pull request #779 from jvanegmond/patch-1
Added AutoIt
2013-11-15 18:38:18 -08:00
Jos van Egmond
74775b2e0a Added AutoIt
Language website: http://autoitscript.com
Example project on GitHub: https://github.com/jvanegmond/au3-minecraft-monitor
2013-11-14 11:15:38 +01:00
Ted Nyman
04c78c8c33 Bit more README 2013-11-10 17:57:43 -08:00
Ted Nyman
762b389721 Minor README updates 2013-11-10 17:55:17 -08:00
Ted Nyman
32e10d2c37 Merge pull request #775 from wjk/fix-doc-typo
Confusing Typo Fix
2013-11-10 09:25:52 -08:00
William Kent
d7baf4ed7b Fixed typo in a documentation comment. 2013-11-10 10:48:03 -05:00
Ted Nyman
5bef198e6d More lenient regex for LICENSE 2013-11-09 19:13:08 -08:00
Ted Nyman
c03e310422 Merge pull request #770 from hkdobrev/phpunit.xml.dist
Add phpunit.xml.dist to XML filenames
2013-11-09 19:11:18 -08:00
Ted Nyman
43723ba5ef Simpler samples 2013-11-09 16:43:45 -08:00
Ted Nyman
25954c8992 Merge pull request #773 from afischer15/master
Added Initial NetLogo support
2013-11-09 16:42:22 -08:00
Ted Nyman
12f01e9e94 Color for dart 2013-11-09 16:39:55 -08:00
Eric Schulte
41f7589d4e unit test for find_by_shebang 2013-11-09 11:44:17 -07:00
Eric Schulte
d93edf0897 adding interpreter arrays to some languages 2013-11-09 11:44:17 -07:00
Eric Schulte
7a6202a8c3 language interpreters and shebang lines
Add an interpreter array to each language, and match interpreters found
in the shebang lines of scripts to this array to identify the language
of scripts.

With suggestions from tnm. https://github.com/github/linguist/pull/687
2013-11-09 11:44:17 -07:00
Andrew Fischer
240f6a63f4 removed error causing readme... oops. 2013-11-09 12:44:09 -05:00
Andrew Fischer
f0558769f2 added initial NetLogo support 2013-11-09 12:31:29 -05:00
GreatEmerald
12086b69ac Add initial UnrealScript support
The two samples are for two different UnrealScript generations:
MutU2Weapons is UnrealScript 2, US3HelloWorld is UnrealScript 3.

Signed-off-by: GreatEmerald <pastas4@gmail.com>
2013-11-09 15:29:59 +02:00
Haralan Dobrev
e47b312866 Add phpunit.xml.dist to XML filenames
PHPUnit (a popular unit testing tool for PHP) uses `phpunit.xml`
for its configuration.

However, it would use `phpunit.xml.dist` as well if `phpunit.xml`
is not available.

The reason is to track `phpunit.xml.dist` in your repo
and to ignore `phpunit.xml`.
By default everyone (including a CI) would use `phpunit.xml.dist`
unless you add `phpunit.xml` locally.

`phpunit.xml.dist` has the same XML structure as `phpunit.xml`.
So it should be detected as XML by linguist.

Example: https://github.com/erusev/parsedown/blob/master/phpunit.xml.dist

I don't know why linguist is not detecting this file as XML since
it starts with `<?xml`. Perhaps it is another issue.
2013-11-09 12:53:15 +02:00
Ted Nyman
eb5f1468d2 .pluginspec for XML 2013-11-08 14:13:45 -08:00
Ted Nyman
77c7ee6d2e Update samples 2013-11-07 17:26:20 -08:00
Ted Nyman
4f547d79a9 2.0.0 Ruby builds 2013-11-07 17:25:05 -08:00
Ted Nyman
8a5b26536e Merge pull request #755 from frunns/node-modules-generated
Added node_modules/ to generated files.
2013-11-07 17:01:53 -08:00
Ted Nyman
355ac3d81a Merge pull request #500 from Leushenko/master
Added BlitzBasic
2013-11-06 20:45:30 -08:00
Ted Nyman
8b8123a3c1 Add TeX as detectable markup 2013-11-06 20:17:47 -08:00
Ted Nyman
fc44af9343 Merge pull request #764 from bricker/add-appraisals
Add Appraisals to Ruby filenames
2013-11-06 20:15:12 -08:00
Ted Nyman
4654553d07 Merge pull request #765 from GordonSmith/UpperCaseECL
Uppercase ECL
2013-11-06 17:45:23 -08:00
Gordon Smith
940df300e8 Uppercase "ECL" Sample folder
Signed-off-by: Gordon Smith <gordon.smith@lexisnexis.com>
2013-11-06 09:19:29 +00:00
Gordon Smith
72b8e1c76f Uppercase "Ecl" language to "ECL"
Signed-off-by: Gordon Smith <gordon.smith@lexisnexis.com>
2013-11-06 09:17:03 +00:00
Bryan Ricker
92282e3677 Add Appraisals to Ruby filenames 2013-11-05 18:28:01 -08:00
Ted Nyman
9e9cbb144e Merge pull request #763 from lukaselmer/patch-2
Describe how to update samples.json
2013-11-05 17:46:53 -08:00
Lukas Elmer
8c5f1e201e Describe how to update samples.json
Describe how to update samples.json after adding new samples.
2013-11-06 02:44:07 +01:00
Lukas Elmer
ab20c033fe update samples.yml 2013-11-06 02:36:36 +01:00
Lukas Elmer
fd9c657ed4 Merge branch 'master' of https://github.com/github/linguist into patch-1 2013-11-06 02:26:50 +01:00
Ted Nyman
fea07d025d Bump to 2.9.8 2013-11-05 17:14:23 -08:00
Ted Nyman
41a570818d Just .veo for now 2013-11-05 15:24:05 -08:00
Ted Nyman
c1e38425d0 Merge pull request #741 from danluu/verilog_extensions
Add common Verilog extensions
2013-11-05 15:23:38 -08:00
Ted Nyman
6fb4e6836c Just primary extension for now 2013-11-05 15:21:27 -08:00
Ted Nyman
a2f9150f50 Merge pull request #760 from vszakats/patch-4
recognize xBase sources
2013-11-05 15:20:55 -08:00
Ted Nyman
2dcee1e43c Update samples 2013-11-05 15:18:10 -08:00
Ted Nyman
867a4d96fe Unique primary extension 2013-11-05 15:16:36 -08:00
Ted Nyman
b8c7b71ca5 Text only for now 2013-11-05 15:15:40 -08:00
Ted Nyman
70396ab636 Merge pull request #460 from remobjects/master
Oxygene language detection
2013-11-05 15:14:34 -08:00
Ted Nyman
67b5b51c47 Merge pull request #499 from ppannuto/nesc
Add support for nesC
2013-11-05 15:13:32 -08:00
Ted Nyman
7b443fcdde Merge pull request #758 from kashif/nvidia-cuda
added cuda lexer and removed example from c++ samples
2013-11-05 13:57:49 -08:00
Kashif Rasul
d86f8ba12f merged from master and updated samples.json 2013-11-05 22:07:39 +01:00
Kashif Rasul
856ee4724c Merge remote-tracking branch 'upstream/master' into nvidia-cuda
# Please enter a commit message to explain why this merge is necessary,
# especially if it merges an updated upstream into a topic branch.
#
# Lines starting with '#' will be ignored, and an empty message aborts
# the commit.
2013-11-05 22:06:20 +01:00
Kashif Rasul
e635af4ef9 Revert 94b3ea3..b301634
This rolls back to commit 94b3ea3df5.
2013-11-05 22:05:12 +01:00
Kashif Rasul
b30163444f checked in updated samples.json 2013-11-05 22:02:41 +01:00
Ted Nyman
051bedefab Include .emacs filenames 2013-11-05 12:26:17 -08:00
Ted Nyman
1d7f63e38b Remove extra ace modes 2013-11-05 12:15:14 -08:00
Ted Nyman
d96657a48b Alphabetize 2013-11-05 11:57:40 -08:00
Ted Nyman
8bdd6ea510 Merge pull request #578 from timm/patch-1
Update languages.yml
2013-11-05 11:57:11 -08:00
Ted Nyman
9f65e702fc Remove dupe extension 2013-11-05 11:42:09 -08:00
Ted Nyman
a500dee94e Update samples file 2013-11-05 11:36:56 -08:00
Ted Nyman
1a11a6ab48 Merge pull request #555 from timjb/master
Support Agda and Literate Agda
2013-11-05 11:36:20 -08:00
Ted Nyman
c5f1317b47 Update samples file 2013-11-05 11:34:48 -08:00
Tim Baumann
5e03ff961b Merge branch 'master' of https://github.com/github/linguist
Conflicts:
	lib/linguist/languages.yml
2013-11-05 16:55:48 +01:00
Viktor Szakáts
a0c06eb6b9 recognize xBase sources
[xBase: https://en.wikipedia.org/wiki/xBase]

Reopened PR 593 with the two language additions split off.
2013-11-05 11:30:04 +01:00
Kashif Rasul
94b3ea3df5 added cuda lexer and removed example from c++ samples 2013-11-05 10:57:12 +01:00
Ted Nyman
6d7eae5011 Merge pull request #757 from chlorinejs/clojure-samples
add Clojure and its dialects to /samples
2013-11-05 00:39:27 -08:00
Hoàng Minh Thắng
3bbeea3682 add Clojure and its dialects to /samples 2013-11-05 15:33:05 +07:00
Frans Krojegård
562ec13696 Added node_modules/ to generated files. 2013-11-05 09:31:46 +01:00
Ted Nyman
a5c3bd7c13 Remove this until heuristic improves 2013-11-05 00:20:28 -08:00
Ted Nyman
6ae6882e1a Merge pull request #657 from mndrix/prolog
Add misclassified Prolog file
2013-11-04 22:43:19 -08:00
Ted Nyman
c4ad830931 Add .vimrc 2013-11-04 22:29:24 -08:00
Ted Nyman
5d417b4669 Fix syntax 2013-11-04 22:19:38 -08:00
Ted Nyman
02a264fad8 Merge pull request #612 from adityam/master
Add extensions for ConTeXt
2013-11-04 22:19:05 -08:00
Ted Nyman
31c3c43f64 Merge pull request #550 from rschiang/master
Include Qt QML markup detection
2013-11-04 22:11:03 -08:00
Ted Nyman
5197ea2488 Merge pull request #731 from zhuzhuor/master
Add support for RobotFramework .robot files
2013-11-04 22:00:17 -08:00
Ted Nyman
66167de1f9 Merge pull request #754 from myguidingstar/master
add more Clojure extensions and/or dialects'
2013-11-04 21:58:37 -08:00
Hoàng Minh Thắng
9482c2b822 add more Clojure extensions and/or dialects' 2013-11-05 12:52:14 +07:00
Ted Nyman
012a9c0e05 Merge pull request #495 from andygrunwald/decimal-places-in-output
Add decimal places to statistic output
2013-11-04 21:46:32 -08:00
Ted Nyman
881201a2c6 Merge pull request #682 from Jaxan/patch-1
Added Clean language
2013-11-04 21:10:34 -08:00
Ted Nyman
d656988258 Merge pull request #589 from kynetx/adding_krl
added KRL config and sample
2013-11-04 21:09:36 -08:00
Ted Nyman
89f7f8a00b Merge pull request #595 from j-jorge/patch-1
Add common file extensions to the c++ language
2013-11-04 21:08:20 -08:00
Ted Nyman
e4ec48fe8d Merge pull request #751 from CodeBlock/master
Add Idris.
2013-11-04 21:07:25 -08:00
Ted Nyman
1a4f890d04 Merge pull request #620 from Bartvds/master
Added Vagrantfile and Chef /cookbooks to vendors.yml
2013-11-04 21:05:08 -08:00
Ted Nyman
71633871f3 Merge pull request #561 from assassini/master
Added an exclusion pattern for a "dependencies" folder in the root directory
2013-11-04 21:00:50 -08:00
Ted Nyman
f523561e66 Ignore .osx files 2013-11-04 20:57:40 -08:00
Bartvds
f3007215b1 Added Vagrantfile to vendors.yml
Closes #619
2013-11-05 05:53:09 +01:00
Ted Nyman
1847b237c9 Remove alias 2013-11-04 20:46:11 -08:00
Ted Nyman
07169db217 Merge pull request #531 from chriskuehl/master
Add vendor exception for PhoneGap/Cordova device-specific JavaScript libraries.
2013-11-04 20:43:46 -08:00
Ted Nyman
4b4b368356 Add .R extension 2013-11-04 20:41:30 -08:00
Ted Nyman
09ef0cd3e1 Add cproject extension 2013-11-04 20:39:48 -08:00
Ted Nyman
0f6bca7a3d Merge pull request #596 from HQ063/patch-1
Update languages.yml
2013-11-04 20:37:05 -08:00
Chris Kuehl
5e3d811902 Merge github.com:github/linguist
Conflicts:
	lib/linguist/vendor.yml
2013-11-04 20:32:52 -08:00
Ted Nyman
12c655a48a Merge pull request #508 from AdamFerguson/master
Add Jade and Scaml
2013-11-04 20:30:20 -08:00
Ted Nyman
fdddffe041 Just .g4 for now 2013-11-04 20:27:46 -08:00
Ted Nyman
c3aab69b11 Merge pull request #697 from robstoll/master
Added ANTLR to the list, Pygments should have a lexer for ANTLR
2013-11-04 20:27:20 -08:00
Ted Nyman
3f1161d713 Merge pull request #643 from zulus/extjs_exclude
Exclude ExtJS library
2013-11-04 20:10:03 -08:00
Ted Nyman
cf14c5fa4f Merge pull request #450 from Giacom/master
Added DM (Dream Maker) language.
2013-11-04 19:53:25 -08:00
Ted Nyman
8aac009b00 Add more xquery extensions 2013-11-04 19:49:54 -08:00
Ted Nyman
05bb8b10fd Merge pull request #562 from hkdobrev/sublime-text
Added JSON extensions for Sublime Text
2013-11-04 19:40:48 -08:00
Ted Nyman
ecacbc937b Remove ace mode 2013-11-04 19:34:30 -08:00
Ted Nyman
86d0f0a84a Merge pull request #392 from qqshfox/protocol_buffers
Add Protocol Buffers
2013-11-04 19:33:43 -08:00
Ted Nyman
34218d9a5c Merge pull request #475 from pointwise/glyph
Added Glyph scripting language
2013-11-04 19:28:18 -08:00
Ted Nyman
de7ca0d954 Ignore files under thirdparty/ 2013-11-04 19:24:32 -08:00
Ted Nyman
81176f8dfa Add generated JNI detection, update samples 2013-11-04 19:14:34 -08:00
Ted Nyman
6ee999617e Add syscalldefs.h sample 2013-11-04 19:05:56 -08:00
Ted Nyman
88442094f9 Merge pull request #590 from jsocol/master
Add .adp for Tcl files to languages.yml
2013-11-01 11:04:07 -07:00
Ricky Elrod
5037dd5add Add Idris.
This adds Idris into the mix and uses the text-only parser for now, pending
upstream merging this patch in:
https://bitbucket.org/birkenfeld/pygments-main/pull-request/210/idris-lexer-added-lexer-for-idris/diff

Once that gets merged in, the lexer should change to idris.
2013-11-01 04:52:35 -04:00
Ted Nyman
aa41c87158 Merge pull request #739 from danluu/bluespec
Add Bluespec language
2013-10-30 23:50:52 -07:00
Ted Nyman
569eac2222 Whitespace 2013-10-30 23:46:42 -07:00
Ted Nyman
2de23046cc Merge pull request #572 from Aaron1011/add_realbasic
Added REALbasic
2013-10-30 23:46:09 -07:00
Ted Nyman
ca8b27ff15 Merge pull request #614 from robin850/patch-1
Lex .mspec files like Ruby
2013-10-30 23:45:36 -07:00
Ted Nyman
0f17ba0fcf Merge pull request #715 from Jaykul/master
Update PowerShell File Extensions
2013-10-30 23:44:18 -07:00
Ted Nyman
d5002ef06a Start vendor work for bootstrap by ignoring minimized bootstrap js and css 2013-10-30 19:12:52 -07:00
Ted Nyman
ce443e73f1 Merge pull request #707 from larsbrinkhoff/lisp
Common Lisp misidentified as OpenCL
2013-10-30 00:34:03 -07:00
Lars Brinkhoff
89b5e9f5e6 Rebuild samples.json to make Travis happy. 2013-10-30 07:28:09 +01:00
Lars Brinkhoff
c6c5e79ccf Add .cl as a Common Lisp file extension. 2013-10-30 07:28:08 +01:00
Ted Nyman
2bac3af299 Fix typo 2013-10-29 11:59:58 -07:00
Ted Nyman
5411c5457d Need to wait on pygments update 2013-10-29 11:59:27 -07:00
Ted Nyman
92595cffa3 Merge pull request #749 from hoelzro/master
Use the Haxe lexer for Haxe
2013-10-29 11:54:42 -07:00
Ted Nyman
ad77279bbf Text only lexer for Inno Setup 2013-10-29 11:54:25 -07:00
Rob Hoelz
e7d8b99ca2 Use the Haxe lexer for Haxe
The lexer for Haxe is no longer named haXe, so linguist is currently
unable to find it.  This fixes that.
2013-10-29 08:56:37 +01:00
Ted Nyman
971d848eec Merge pull request #737 from jeabakker/patch-1
Update vendor.yml
2013-10-28 12:27:54 -07:00
Ted Nyman
f353fa3890 Merge pull request #691 from le717/master
Detect Inno Setup installer scripts
2013-10-28 11:36:51 -07:00
Ted Nyman
64ce62a804 2.9.7 2013-10-28 11:24:58 -07:00
Ted Nyman
bb9537c5b4 Merge pull request #746 from tnm/no-prose
Revert "Merge pull request #695 from github/detect-prose"
2013-10-28 11:23:12 -07:00
Ted Nyman
9f00b5478d Revert "Merge pull request #695 from github/detect-prose"
This reverts commit 80321272b1, reversing
changes made to 02500d3830.
2013-10-28 11:21:56 -07:00
Lukas Elmer
086b565488 Added another matlab example 2013-10-28 18:06:17 +01:00
Lukas Elmer
27566d93e2 Added another matlab example 2013-10-28 18:02:15 +01:00
Dan Luu
7b1c78b848 Add common Verilog extensions 2013-10-26 14:15:20 -05:00
Dan Luu
4ec9145700 Add common VHDL file extensions 2013-10-26 14:10:46 -05:00
Dan Luu
922fe46f56 Add Bluespec examples 2013-10-26 14:02:04 -05:00
Dan Luu
d4628cf5db Add bluespec to language list 2013-10-25 20:58:17 -05:00
Jerome Bakker
cb9c3732f2 Update vendor.yml
extend the vendor/ exclusion to handle vendors/

Some projects use this folder to store external libraries (e.g. https://github.com/Elgg/Elgg)
2013-10-25 17:02:04 +02:00
Ted Nyman
ce6a8aa671 Merge pull request #733 from Sheeo/patch-1
Ungroup Elm from Haskell
2013-10-23 00:02:16 -07:00
Ted Nyman
bda61ec3d2 Merge pull request #732 from sstephenson/master
Highlight Bats test files as Shell
2013-10-23 00:00:13 -07:00
Ted Nyman
3ea0d479fc Merge pull request #730 from WestleyArgentum/patch-1
Color for Julia!
2013-10-22 20:01:23 -07:00
Michael
9ff1a9a54c Ungroup Elm from Haskell
Elm is its own language and should be counted as such
2013-10-22 14:32:12 +02:00
Sam Stephenson
024005d912 Highlight Bats test files as Shell 2013-10-21 19:44:09 -05:00
Bo Zhu
4d7cd834be add support for RobotFramework .robot files 2013-10-21 13:59:22 -04:00
Hanfei Shen
281e7456d5 Add Protocol Buffers 2013-10-19 13:24:43 +08:00
Westley Argentum Hennigh
f41c79066b Color for Julia!
See https://github.com/JuliaLang/julia/issues/4569
2013-10-18 10:37:31 -07:00
Ted Nyman
063ba50952 Bump to 2.9.6 2013-10-17 13:57:00 -07:00
Ted Nyman
70bc0b3b77 Merge pull request #584 from jayphelps/patch-1
Added common alternative Handlebars extensions
2013-10-16 23:50:48 -07:00
Ted Nyman
057ea80582 Merge pull request #663 from acgetchell/patch-2
Add LaTeX and BibTeX
2013-10-16 23:44:41 -07:00
Ted Nyman
a046e1c380 Merge pull request #721 from olivergondza/patch-1
Highlight .jelly files as XML
2013-10-16 23:30:17 -07:00
Ted Nyman
e3c2a5e510 Merge pull request #701 from thorn0/patch-1
TypeScript: fixed syntax, it wasn't TypeScript at all
2013-10-16 23:27:31 -07:00
Charlie Somerville
a00967ddc4 Merge pull request #728 from github/fix-dup-primary-extension-for-mumps
Ensure no two languages have the same primary extension
2013-10-16 09:58:52 -07:00
Charlie Somerville
599e1e2a51 update samples.json 2013-10-16 12:43:20 -04:00
Charlie Somerville
44638e1c6b add .m as an alternate extension for MUMPS so it's still picked up 2013-10-16 12:43:20 -04:00
Charlie Somerville
cabd4fb4c5 change MUMPS' primary extension from .m to .mumps
.m currently clashes with Objective-C, and MUMPS is chosen before
Objective-C when creating .m files in Gist. Even if .mumps isn't a
legitimate MUMPS extension, I think we should change it anyway since
Objective-C is far more likely to be intended when a user uses .m
2013-10-16 12:43:20 -04:00
Charlie Somerville
086845f189 use @primary_extension_index in find_by_filename 2013-10-16 12:43:16 -04:00
Charlie Somerville
413c881af8 add @primary_extension_index to ensure we don't have duped primary exts 2013-10-16 11:41:19 -04:00
Jaykul
74eb60a354 Saying my ABCs
Fixing https://travis-ci.org/github/linguist/jobs/12531345
I swear: I am not normally dyslexic
2013-10-14 14:22:11 -04:00
Jaykul
85e54e9af2 Fix alphabetical order of ps1xml 2013-10-14 13:52:40 -04:00
Oliver Gondža
245521db22 Highlight .jelly files as XML
[Jelly](http://commons.apache.org/proper/commons-jelly/) is used heavily by stapler/stapler and jenkinsci/jenkins.
2013-10-14 19:38:56 +02:00
Jaykul
5b8ad31d75 Add psc1, fix order in PowerShell
I had omitted .psc1 because I wasn't confident it was xml
And I have now sorted psd1/psm1 correctly
2013-10-14 13:20:16 -04:00
Ted Nyman
80321272b1 Merge pull request #695 from github/detect-prose
Detect prose documents
2013-10-12 18:12:27 -07:00
Garen Torikian
02500d3830 Merge pull request #713 from aclements/master
Recognize jQuery >= 1.10.0
2013-10-12 17:22:09 -07:00
Jaykul
921ceaa221 Update PowerShell File Extensions
The three core PowerShell language extensions are .psd1, .ps1 and .psm1 -- plus two XML file extensions: .ps1xml and .clixml, which are for formatting rules and serialization.
.psm1 module files use exactly the same syntax as scripts, but are imported rather than executed.
.psd1 files are metadata files which use a subset of the same syntax (they can be highlighted using the same highlighter; it's just that some commands, variables, and types aren't allowed in data files)
2013-10-12 16:37:49 -04:00
Austin Clements
5232a45d1f Recognize jQuery >= 1.10.0
jQuery recently passed 1.10, but the vendor regexp assumed each
component of the version number would be only one digit.  Allow
multiple digits for the minor and micro versions.
2013-10-11 21:21:43 -04:00
Andy Grunwald
fdf000ec62 Add decimal places to statistic output
If you analyze a project, the statistic output sometimes shows a
language at 0%. At first it seems that the language is not
part of the project, but it is only missing some decimal
places.
2013-10-02 20:40:12 +02:00
Ted Nyman
c7933537b1 Merge pull request #705 from glts/add-j-language
Added J to languages
2013-10-01 22:02:48 -07:00
glts
e194d7238e Added "Text only" lexer for J
There is no lexer for J so far, use "Text only" to make the build pass.
2013-09-30 23:28:35 +02:00
glts
33d777a6ff Added support for J to languages.yml 2013-09-30 21:20:28 +02:00
thorn0
1b90dfedf9 TypeScript: fixed syntax, it wasn't TypeScript at all 2013-09-30 13:32:41 +03:00
Robert Stoll
b0b7d75bcd ANTLR was in the wrong order. corrected this mistake 2013-09-27 18:43:11 +02:00
Robert Stoll
74ba0f9c39 Added ANTLR to the list, Pygments should have a lexer for ANTLR 2013-09-27 18:21:39 +02:00
Ted Nyman
8465961e72 Merge pull request #574 from alaingilbert/patch-1
There is a lexer for TypeScript in the pygments project
2013-09-26 02:27:19 -07:00
Ted Nyman
f5cb6e035d Merge pull request #696 from aaronpuchert/rprofile
Add .Rprofile files
2013-09-26 02:26:09 -07:00
Aaron Puchert
7946de7116 Add .Rprofile to filenames for R 2013-09-25 12:43:22 +02:00
Garen Torikian
9e65eb35e7 💄 2013-09-24 21:47:30 -07:00
Garen Torikian
76a1369932 Alphabetize detectable markup 2013-09-24 21:28:39 -07:00
Garen Torikian
12e92f127b Add samples of prose content 2013-09-24 21:28:09 -07:00
Garen Torikian
50f3b2e398 Add Mediawiki 2013-09-24 16:34:46 -07:00
Garen Torikian
a716151c83 Add Creole 2013-09-24 16:33:09 -07:00
Garen Torikian
3c70fffb67 Add Org 2013-09-24 16:32:02 -07:00
Garen Torikian
a0a879a3a3 Add RDoc 2013-09-24 16:30:02 -07:00
Garen Torikian
5d35d18634 Add AsciiDoc as an option 2013-09-24 16:28:54 -07:00
Garen Torikian
a3a6c2f8b3 Alphabetize, foo' 2013-09-24 16:23:05 -07:00
Garen Torikian
8eef1c33b8 Start adding prose content to detector 2013-09-24 16:13:04 -07:00
Joshua Moerman
6182b0fbc2 Clean has no lexer
The haskell lexer doesn't work nicely, as the comments are totally different.
2013-09-21 22:55:29 +02:00
Joshua Moerman
4e4f3c6e17 Replaced tabs with spaces
Didn't see the difference in the github editor...
2013-09-21 21:27:17 +02:00
Triangle717
800f445b22 Update languages.yml
Detect Inno Setup installer scripts (http://www.jrsoftware.org/isinfo.php)
2013-09-20 16:24:13 -04:00
Ted Nyman
a9db25cc5b Fix grammar 2013-09-19 17:33:26 -07:00
Tim Baumann
b5e1bda3e4 Merge branch 'master' of https://github.com/github/linguist 2013-09-19 15:47:18 +02:00
Ted Nyman
f06167eaca Bump to 2.9.5 2013-09-18 19:02:12 -07:00
Ted Nyman
687e82307e .cfg is used by too many non INI files 2013-09-18 19:00:15 -07:00
Joshua Moerman
d9358d8af3 Added Clean language
With the blue color from: http://wiki.clean.cs.ru.nl/Clean
2013-09-13 16:41:38 +02:00
Ted Nyman
b3fbd42786 Merge pull request #536 from zzet/patch-1
Added .podsl extension for Common Lisp language
2013-09-09 09:33:20 -07:00
Andrew Kumanyaev
590ed26f7b Alphabetized list for Common Lisp
Via comment https://github.com/github/linguist/pull/536#issuecomment-24046315
2013-09-09 11:58:24 +04:00
Ted Nyman
12093ee1f0 Merge pull request #672 from Bulwersator/master
Add squirrel
2013-09-09 00:54:31 -07:00
Ted Nyman
ee840321d1 Add test_data 2013-09-09 00:39:34 -07:00
Ted Nyman
0cbc44677c Merge pull request #606 from kethomassen/patch-1
Change YAML type to data
2013-09-09 00:38:04 -07:00
kethomassen
a9b944ac36 Make markup tests pass
Yaml ain't Markup Language!
2013-09-09 16:44:00 +10:00
Ted Nyman
759860d866 Merge pull request #549 from oubiwann/add-lfe-support
Added support for LFE (Lisp Flavored Erlang).
2013-09-08 23:27:32 -07:00
Ted Nyman
c4b21f51e4 Merge pull request #677 from github/rename-delphi-to-pascal
Rename Delphi to Pascal
2013-09-08 23:13:43 -07:00
Charlie Somerville
fdccffddfc rename Delphi to Pascal 2013-09-07 03:17:25 +10:00
Charlie Somerville
e610789d38 rake samples 2013-09-07 03:10:56 +10:00
Ted Nyman
3bd8ed45e4 Merge pull request #676 from faithanne/master
Adding .x3d to the list of .xml extensions.
2013-09-05 19:13:27 -07:00
Faith-Anne L. Kocadag
42311e1bf3 Adding .x3d to the list of .xml extensions.
Adding .x3d to the list of .xml extensions: [x3d specifications](http://www.web3d.org/x3d/specifications/x3d_specification.html)
2013-09-05 22:05:21 -04:00
Bulwersator
9e9aae1d83 really fix sorting 2013-09-05 22:10:49 +02:00
Bulwersator
229ab3a268 fix sorting 2013-09-05 22:02:09 +02:00
Bulwersator
85840aadc2 Create Squirrel.nut 2013-09-05 20:54:59 +02:00
Bulwersator
32b7b3e1b1 Update languages.yml 2013-09-05 20:50:08 +02:00
Ted Nyman
5e7eeae98e Merge pull request #669 from r0man/cljx
Add the ".cljx" file extension to the list of Clojure languages.
2013-09-04 10:23:39 -07:00
Roman Scherer
de8c4daa45 Add the ".cljx" file extension to the list of Clojure languages.
Some people start writing portable Clojure/Clojurescript code and use
the ".cljx" file extension for that. This is driven by this project:

https://github.com/lynaghk/cljx
2013-09-03 21:34:06 +02:00
Ted Nyman
64990a00b8 Merge pull request #656 from liluo/dev
Remove unused else
2013-09-03 09:26:47 -07:00
Ted Nyman
33434f08f4 Documentation 2013-09-02 00:02:15 -07:00
Ted Nyman
d5730f6fd1 Merge pull request #664 from Mouq/master
Add .nqp to the list of Perl file extensions
2013-09-02 00:01:05 -07:00
Mouq
2305496f94 Fix order of extensions 2013-09-01 22:06:08 -04:00
Mouq
d031392507 Add .nqp to the list of Perl file extensions 2013-09-01 10:14:14 -04:00
Adam Getchell
7e251d7345 Add definitions to TeX 2013-08-31 18:21:14 -07:00
Adam Getchell
e33e76a1a7 Update languages.yml 2013-08-31 18:12:16 -07:00
Adam Getchell
9064369517 Update languages.yml 2013-08-31 18:08:39 -07:00
Adam Getchell
a9c86d5453 Add LaTeX and BibTeX 2013-08-31 17:11:11 -07:00
Michael Hendricks
4b0c975426 Add misclassified Prolog file
This file was incorrectly identified as Perl.
2013-08-30 08:31:02 -06:00
liluo
6ec22a1674 remove unused else 2013-08-30 11:04:46 +08:00
Aditya Mahajan
71b48eaf55 Also add .mkvi file extension 2013-08-29 19:41:48 -04:00
Ted Nyman
694f51d09e Merge pull request #642 from Dav1dde/volt
Added Volt language
2013-08-27 22:36:15 -07:00
Ted Nyman
79040d00c7 Merge pull request #519 from mmullis/master
Add COBOL language support
2013-08-27 02:37:47 -07:00
Ted Nyman
7dfc1644ce Merge pull request #628 from pmoura/master
Add alternative Logtalk source file extension
2013-08-26 14:14:03 -07:00
zulus
777f1d27d1 Exclude ExtJS library 2013-08-23 02:53:26 +02:00
David
c4a90bbbcd Added Volt language 2013-08-22 13:48:26 +02:00
Ted Nyman
ac0920a11b Merge pull request #599 from larsbrinkhoff/glsl
Add OpenGL Shading Language.
2013-08-19 22:11:41 -07:00
Paulo Moura
a4bdca6d6b Add alternative Logtalk source file extension 2013-08-19 02:31:33 +01:00
Ted Nyman
f9bfcceba9 Merge pull request #512 from mark-otaris/patch-1
Add the .rbxs extension for Lua files
2013-08-18 17:10:09 -07:00
Ted Nyman
e0ceccc0c6 Merge pull request #626 from dzidzitop/patch-1
COPYING is added to excludes as a file that contains copyright informati...
2013-08-18 04:06:39 -07:00
dzidzitop
cd3e88fe8b COPYING is added to excludes as a file that contains copyright information. 2013-08-18 14:03:16 +03:00
Lars Brinkhoff
254b6de1d3 Add OpenGL Shading Language. 2013-08-17 14:31:28 +02:00
Julik Tarkhanov
a93e9493e2 Add OpenGL shading language (GLSL)
This is the language used for writing OpenGL shaders, which are becoming much more mainstream lately.
2013-08-17 14:31:28 +02:00
Ted Nyman
53b356deee Only looking at root dir 2013-08-16 15:02:01 -07:00
Ted Nyman
9dca6fa9cc Vendor READMEs 2013-08-16 15:00:34 -07:00
Ted Nyman
7226aa18de Merge pull request #559 from sethvargo/add_berksfile
Add Berksfile to the list of Ruby types
2013-08-15 12:36:24 -07:00
Seth Vargo
ce97865bd2 Add 'Berksfile' to the list of Ruby files 2013-08-15 10:29:39 -04:00
Ted Nyman
424fa0f56b Merge pull request #621 from abahgat/xmi-as-xml
Recognize Umbrello .xmi files as XML
2013-08-14 14:47:37 -07:00
Alessandro Bahgat
007fc9ebd0 Added .xmi to the extensions for XML 2013-08-14 17:33:23 -04:00
Ted Nyman
e0104c8d12 2.9.4 2013-08-14 08:57:07 -07:00
Ted Nyman
1a98a1f938 Add an F# alias 2013-08-14 08:55:58 -07:00
Ted Nyman
e005893d4c Syntax 2013-08-14 08:37:03 -07:00
Ted Nyman
6c74d854ec 2.9.3, update samples, update test 2013-08-14 08:35:53 -07:00
Ted Nyman
06b185f725 Fix F# search term 2013-08-14 08:32:50 -07:00
Robin Dupret
64ec42cf4a Lex .mspec files like Ruby 2013-08-09 21:47:47 +02:00
Aditya Mahajan
c4b24d9ae1 Add extensions for ConTeXt
[ConTeXt] is a macro language built on TeX (just as LaTeX is built on
TeX). It tends to use the `.mkii` and `.mkiv` extensions for files used in
the Mark II (MkII) and Mark IV (MkIV) versions of ConTeXt.

[ConTeXt]: http://wiki.contextgarden.net/
2013-08-07 20:37:40 -04:00
Ted Nyman
e6f38cbf45 2.9.2 2013-08-05 23:17:33 -07:00
Ted Nyman
1764674a13 Remove old test 2013-08-05 23:02:19 -07:00
Ted Nyman
4741a47d21 Bring in @upsuper's css generated check 2013-08-05 23:00:04 -07:00
kethomassen
432bffe3ec Change YAML type to data 2013-08-05 16:13:18 +10:00
Gonzalo HQ063
090216df2a Update languages.yml
Alphabetize the recently added .frm extension
2013-08-04 12:40:45 -03:00
Ted Nyman
7b2fec88d1 Update samples 2013-08-03 13:25:47 -07:00
Ted Nyman
76128ccb37 Add make samples and shebang for make 2013-08-03 13:21:58 -07:00
Ted Nyman
f5ede0d0f9 Update samples 2013-08-03 13:13:49 -07:00
Siraaj Khandkar
70d1649b45 Added an R script sample. 2013-08-02 17:17:00 -04:00
Ted Nyman
de706a2eb9 Merge pull request #588 from techy1157/patch-1
Vendored .DS_Store (for mac users)
2013-08-01 21:48:20 -07:00
Ted Nyman
33ebee0f6a Merge pull request #598 from github/let-css-soar
css detection
2013-07-30 16:23:06 -07:00
Ted Nyman
51a989d5f1 Update CSS color 2013-07-30 15:30:47 -07:00
Ted Nyman
3fc208b4ce Bump to 2.9.0 2013-07-30 14:43:30 -07:00
Ted Nyman
0fa54a85d8 Include .scss file samples 2013-07-30 14:21:04 -07:00
Ted Nyman
96e8a5d2cc Start detecting CSS 2013-07-30 13:30:39 -07:00
Ted Nyman
838fbc5626 Drop less from vendor.yml 2013-07-30 03:14:51 -07:00
Gonzalo HQ063
486af800b5 Update languages.yml
- Add .frm extension as VB file
2013-07-30 00:11:40 -03:00
j-jorge
f0b9b3a35a Add common file extensions to the c++ language
The .hpp and .tpp extensions are commonly used for header files and for the separate implementation of template classes/methods, respectively.
2013-07-29 17:27:25 +02:00
Ted Nyman
80780ab042 2.8.12 2013-07-26 16:11:56 -07:00
David Calavera
bd19f6ed17 Merge pull request #594 from github/docker_syntax
Docker syntax
2013-07-26 16:10:30 -07:00
Ted Nyman
2ae76842a0 0.8.11 2013-07-26 16:08:24 -07:00
Ted Nyman
750804876e Docs for the linguist script 2013-07-26 16:08:24 -07:00
Ted Nyman
1d8da964e2 More color tweaks 2013-07-26 16:08:24 -07:00
David Calavera
d21d0f281a Put the Dockerfile sample into the right directory. 2013-07-26 15:47:45 -07:00
David Calavera
a8e337e0eb Add Dockerfile sample. 2013-07-26 15:35:00 -07:00
David Calavera
37429d91a0 Add Docker files to the list of shell formatted files. 2013-07-26 15:33:23 -07:00
Ted Nyman
4b9c6fdf62 Merge pull request #592 from vszakats/patch-1
ignore more Git local config files to avoid them being misidentified as ...
2013-07-25 19:46:26 -07:00
Viktor Szakáts
4130825a43 ignore more Git local config files to avoid them being misidentified as Racket, Ruby or else 2013-07-24 11:46:01 +02:00
James Socol
8c42e61271 Update languages.yml
Add .adp for AOL Server Tcl files.
2013-07-22 14:54:37 -04:00
Ted Nyman
6d95590861 2.8.10 2013-07-20 23:15:13 -07:00
Ted Nyman
dbc36a5e63 More color tweaking 2013-07-20 23:14:53 -07:00
Ted Nyman
32a106cedd 2.8.9 2013-07-20 23:02:54 -07:00
Ted Nyman
78ed103f90 Fix casing 2013-07-20 23:02:12 -07:00
Ted Nyman
3e26b2a0a7 Update samples file 2013-07-20 15:10:46 -07:00
Ted Nyman
4fb910533f Add another php script sample 2013-07-20 15:05:49 -07:00
Ted Nyman
b71cab6add Give go language a color closer to its website 2013-07-19 22:51:59 -07:00
Phil Windley
394fb528cc getting pedantic about ASCII sorting order 2013-07-19 16:32:54 -06:00
Phil Windley
d2f4eec397 added lexer for KRL 2013-07-19 16:29:51 -06:00
Phil Windley
5f29bf3bb4 added KRL config and sample 2013-07-19 13:46:42 -06:00
Lazersmoke
5a9d35917f Vendored .DS_Store
In case a Mac user uploads lots of folders with .DS_Store files and changes their project language as a result.
2013-07-19 00:02:39 -05:00
Jay Phelps
0029183078 Needed handlebars extensions in alpha-numeric order. 2013-07-18 02:00:32 -07:00
Jay Phelps
0978258f57 Added common alternative Handlebars extensions 2013-07-18 01:47:14 -07:00
Ted Nyman
772a9a582c 2.8.8 2013-07-17 03:01:53 -07:00
Ted Nyman
e633d565a9 Update samples 2013-07-17 02:59:28 -07:00
Ted Nyman
7d6ee108c4 2.8.7 2013-07-17 02:51:07 -07:00
Ted Nyman
dd64c3b545 Merge pull request #583 from github/add-slash
Add Slash
2013-07-17 02:49:50 -07:00
Charlie Somerville
b462e29e1d update pygments 2013-07-17 02:35:38 -07:00
Charlie Somerville
43f4f5bd32 add Slash to linguist 2013-07-17 01:51:34 -07:00
Ted Nyman
904e86d901 2.8.6 2013-07-15 12:07:33 -07:00
Ted Nyman
374164a299 Merge pull request #582 from github/gitignore
Don't bother with .gitignore files
2013-07-15 12:07:00 -07:00
Ted Nyman
f73f309595 Don't bother with .gitignore files 2013-07-15 12:06:07 -07:00
Ted Nyman
f23110a98d Bump to 2.8.5 2013-07-13 23:23:41 -07:00
Ted Nyman
3193fc90f9 Merge pull request #579 from github/fix-escript
Add another escript script
2013-07-13 23:21:49 -07:00
Ted Nyman
cc04519520 Add another escript script 2013-07-13 23:16:36 -07:00
Ted Nyman
f51c5e3159 Update documentation related to Pygments 2013-07-12 14:10:55 -07:00
Tim Menzies
a75d918b93 Update languages.yml 2013-07-12 14:33:23 -04:00
Ted Nyman
c439ca5f97 Merge pull request #575 from github/suppress-generated-cs
Hide .designer.cs files
2013-07-11 18:53:39 -07:00
Paul Betts
4d45f13783 Hide .designer.cs files
VS creates a bunch of files that, while important to version, are often also
huge and boring. We should suppress them.
2013-07-11 15:54:54 -07:00
Alain Gilbert
2a56719378 There is a lexer for TypeScript in the pygments project 2013-07-11 11:59:44 -04:00
Aaron Hill
23289d8901 Added REALbasic 2013-07-10 18:56:41 -04:00
Ted Nyman
7d594b55e4 2.8.4 2013-07-07 23:38:11 -07:00
Ted Nyman
5268a93fa4 Merge pull request #443 from stuartpb/patch-1
Add Erlang rebar escript bundles to vendor.yml
2013-07-07 23:32:58 -07:00
Ted Nyman
ae44530a66 Regenerate samples 2013-07-07 21:26:33 -07:00
Ted Nyman
285216a258 Merge pull request #568 from cono/master
add .fcgi file with proper shebang as Perl sample
2013-07-07 21:26:07 -07:00
cono
4d3720745e Add Perl's index.fcgi to samples 2013-07-08 04:01:50 +03:00
Ted Nyman
a681a252d4 Merge pull request #548 from talentdeficit/master
change erlang css color to something less horrendous
2013-07-07 15:42:53 -07:00
Ted Nyman
22de40f5f6 2.8.3 2013-07-07 15:27:59 -07:00
Ted Nyman
7fbfe0a4b4 Silence this warning 2013-07-07 15:26:29 -07:00
Ted Nyman
29f5ea591f 2.8.2 2013-07-07 15:18:09 -07:00
Ted Nyman
438c0a4ec1 Remove unnecessary xc extension 2013-07-07 15:17:32 -07:00
Ted Nyman
887933c86a Bump to 2.8.1 2013-07-07 14:54:24 -07:00
Ted Nyman
53340ddd4c Even more Python script support 2013-07-07 14:53:15 -07:00
Ted Nyman
72b70a11bc Bump to 2.8.0 2013-07-07 14:37:01 -07:00
Ted Nyman
d853864edb Merge pull request #567 from github/workspace
Better Python script support; update samples.json
2013-07-07 14:35:57 -07:00
Ted Nyman
62ad763933 Better Python script support; update samples.json 2013-07-07 14:33:18 -07:00
Ted Nyman
6a15ae47ee Some space here 2013-07-07 14:07:03 -07:00
Ted Nyman
1bebb50482 Exclude LICENSE files 2013-07-07 13:54:22 -07:00
Haralan Dobrev
d351d6091d added JSON extensions for Sublime Text
Signed-off-by: Haralan Dobrev <hkdobrev@gmail.com>
2013-07-02 00:10:42 +03:00
assassini
fae8f83f64 Added a test case for the "dependencies" folder exclusion pattern 2013-07-01 21:46:54 +03:00
assassini
d3d62726ae Added an exclusion pattern for a "dependencies" folder in the root directory 2013-07-01 21:38:22 +03:00
Tim Baumann
cf15832504 add agda and literate agda support 2013-06-29 12:28:43 +02:00
Ted Nyman
fdc81d8818 Update LICENSE 2013-06-24 14:32:07 -06:00
Poren Chiang
764df07450 Include Qt/QML language 2013-06-23 03:20:07 +08:00
Duncan McGreggor
68dfff60b5 Fixed typo (removed capitalization). 2013-06-21 14:39:04 -07:00
Duncan McGreggor
479871f019 Added support for LFE (Lisp Flavored Erlang). 2013-06-21 14:23:31 -07:00
alisdair sullivan
10ec56e667 change css color representing erlang to slightly less horrendous
color
2013-06-20 14:48:51 -07:00
Ted Nyman
1e958a18f8 Merge pull request #541 from hkdobrev/patch-1
Added .tmTheme as XML extension
2013-06-19 13:31:26 -07:00
Ted Nyman
9fa0f6cd6f Merge pull request #543 from jasonbot/master
Treat `.pyt` files as Python source
2013-06-19 13:31:08 -07:00
Ted Nyman
4e339db911 Merge pull request #377 from rlsosborne/detect-xc-language
Add detection for the XC programming language.
2013-06-19 13:30:45 -07:00
Haralan Dobrev
c1469b25a1 Added .tmTheme as XML extension
Files with the `.tmTheme` extension, similar to `.tmCommand`, `.tmLanguage`, `.tmPreferences` and `.tmSnippet`, are configuration XML files for TextMate or Sublime Text.

The `.tmTheme` extension was missing from this list.
2013-06-18 17:02:34 +03:00
Jason Scheirer
ef6abed81a Languages.yml entries must be in alphabetical order 2013-06-17 22:11:16 -06:00
"Jason Scheirer"
96473849e0 Add .pyt as an extension for Python 2013-06-17 17:30:10 -07:00
Paul Betts
b14e09af6b Merge pull request #215 from oxan/master
Add jQuery UI and more ASP.NET MVC files to vendor.yml
2013-06-17 11:01:04 -07:00
Andrew Kumanyaev
42050c4d12 Update languages.yml
Added .podsl extension for Common Lisp language
2013-06-17 21:48:39 +04:00
Joshua Peek
84dc918729 Linguist 2.7.0 2013-06-10 11:08:49 -05:00
Joshua Peek
032125b114 Axe indexable? 2013-06-10 11:06:18 -05:00
Joshua Peek
b1a137135e Axe colorize_without_wrapper 2013-06-10 10:58:33 -05:00
Joshua Peek
1a53d1973a ws 2013-06-10 10:39:59 -05:00
Joshua Peek
490afdddd1 some air 2013-06-10 10:37:55 -05:00
Joshua Peek
9822b153eb ws 2013-06-10 10:36:56 -05:00
Chris Kuehl
1af71c8945 Add tests for PhoneGap/Cordova vendor exceptions. 2013-06-10 01:09:23 -04:00
Chris Kuehl
acc1a56da4 Add Cordova's/PhoneGap's JS device library as vendor exclusion. 2013-06-10 01:05:20 -04:00
Ted Nyman
bf4596c26d Merge pull request #530 from github/not-really-mac
Less clever newline detection
2013-06-09 21:48:11 -07:00
Joshua Peek
3e3fb0cdfe Say why 2013-06-09 21:02:55 -05:00
Joshua Peek
d907ab9940 Kill mac_format check, buggy 2013-06-09 21:02:11 -05:00
Joshua Peek
9c1d6e154c Always split lines on \n or \r 2013-06-09 21:01:03 -05:00
Joshua Peek
b5681ca559 Correct count 2013-06-09 21:00:20 -05:00
Joshua Peek
4b8f362eb7 Merge test cases 2013-06-09 20:53:48 -05:00
Joshua Peek
2e39d1d582 Rebuild samples 2013-06-09 20:53:33 -05:00
Joshua Peek
fa797df0c7 Note that BlobHelper is a turd 2013-06-09 20:51:26 -05:00
Joshua Peek
c7100be139 Make mac_format? private 2013-06-09 20:48:45 -05:00
Joshua Peek
91284e5530 Add failing test bad mac format 2013-06-09 20:45:59 -05:00
Patrick Reynolds
e5cf7ac764 bump version to include the new sample files 2013-06-06 22:46:11 -05:00
Patrick Reynolds
3ae785605e Merge pull request #529 from github/more-samples
More samples
2013-06-06 20:40:12 -07:00
Patrick Reynolds
e7ac4e0a29 helpful comments 2013-06-06 17:04:28 -05:00
Patrick Reynolds
b275e53b08 use LINGUIST_DEBUG to debug the Bayesian filter 2013-06-06 16:54:18 -05:00
Patrick Reynolds
f363b198e1 more and better samples for Nu, Racket, Scala
- 99 bottles of beer is more substantial than hello world
 - also fixed chmod 755 on several .script! files
2013-06-06 16:53:16 -05:00
Ted Nyman
37c5570cec Merge pull request #528 from github/erlang-samples
Erlang samples
2013-06-06 13:46:53 -07:00
Patrick Reynolds
2db2f5a46d add erlang, more-complex shell examples
- some Erlang and escript files
 - .escript extension
 - .erlang extension
 - shell script with %, ##, name tokens
2013-06-06 15:41:44 -05:00
Patrick Reynolds
e33f4ca96e remove redundant OCaml extensions entry 2013-06-06 15:21:49 -05:00
Ted Nyman
246580fb43 Update README.md 2013-05-31 15:27:00 -06:00
Michael Mullis
f2b80a239f COBOL: move up in the sort order 2013-05-30 01:33:04 +00:00
Michael Mullis
c6d38ab647 COBOL comes before Clojure and extensions must be sorted 2013-05-30 01:29:36 +00:00
Michael Mullis
420594874a add COBOL language support 2013-05-30 01:17:07 +00:00
Ted Nyman
912f635d2a Merge pull request #515 from Turbo87/jinja
Added .jinja extension to HTML+Django language
2013-05-27 13:38:13 -07:00
Tobias Bieniek
4ae5dd360f Added .jinja extension to HTML+Django language 2013-05-27 22:05:17 +02:00
Mark Otaris
407c40f7d3 Add '.rbxs' extension for Lua files 2013-05-23 19:34:04 -03:00
Ted Nyman
329f9a0fc8 Merge pull request #503 from Drup/patch-1
Add .eliom to ocaml extensions
2013-05-21 23:24:30 -07:00
Ted Nyman
d62257b149 Merge pull request #504 from wjlroe/riemann-configs-are-clojure
Recognise riemann.config files as Clojure files
2013-05-21 23:22:56 -07:00
Ted Nyman
19539404a4 Merge pull request #510 from Gozala/wisp
Add wisp language support.
2013-05-21 23:20:37 -07:00
Irakli Gozalishvili
9ee0523cad Add wisp language support. 2013-05-21 14:03:16 -07:00
Adam Ferguson
89bc82d9df Add samples for Jade and Scaml 2013-05-16 13:21:58 -04:00
Adam Ferguson
30aa3fd5d6 Add Jade and Scaml 2013-05-16 10:26:01 -04:00
William Roe
846e84fc8c Recognise riemann.config files as Clojure files 2013-05-13 18:25:27 +01:00
Drup
cd006487b3 Add .eliom to ocaml extensions 2013-05-13 17:10:07 +02:00
Ted Nyman
597ce9adc3 Add Clojure and just use the existing Bash record 2013-05-11 00:26:13 -06:00
Ted Nyman
61040402df Actually remove the languages 2013-05-11 00:23:10 -06:00
Ted Nyman
8013cd081a Based on current stats, add Shell, Coffeescript to popular; drop TeX, XML 2013-05-11 00:22:27 -06:00
Leushenko
22cdb9ee90 Added BlitzBasic 2013-05-09 14:58:41 +01:00
Pat Pannuto
df448c0761 Add support for nesC
nesC is an embedded systems language. It is a stable product (~10
years old) primarily used for TinyOS, an embedded operating system.
Development has recently moved to github (https://github.com/tinyos/nesc).

Pygments has now pulled the nesC lexer as of 2013/5/6:
  https://bitbucket.org/birkenfeld/pygments-main/pull-request/166/

Please let me know if I need to do anything else / add more information.
2013-05-06 18:06:43 -04:00
Ted Nyman
99c296264a Merge pull request #483 from KevinT/master
Added scriptcs language detection
2013-05-06 13:16:14 -07:00
Ted Nyman
ba51461604 Merge pull request #493 from josegonzalez/patch-1
Consider .reek files as yaml
2013-05-03 01:16:01 -07:00
Ted Nyman
6610d0dd46 Merge pull request #494 from josegonzalez/patch-2
Consider .factor-rc and .factor-boot-rc factor files. Closes #492
2013-05-03 01:15:12 -07:00
Jose Diaz-Gonzalez
3adc0e1b16 Reorder extensions in order to pass tests 2013-04-30 15:54:41 -03:00
Jose Diaz-Gonzalez
0a47b4865a Consider .factor-rc and .factor-boot-rc factor files. Closes #492 2013-04-30 15:50:51 -03:00
Jose Diaz-Gonzalez
13f1a1fc74 Consider .reek files as yaml 2013-04-30 15:49:00 -03:00
Ted Nyman
3ad129e6e6 Update samples.json 2013-04-28 22:38:07 -07:00
Kevin Trethewey
475e865809 Added scriptcs file extension to C# section 2013-04-28 07:40:12 +03:00
marc hoffman
1e93e98d30 Merge branch 'master' of git://github.com/github/linguist 2013-04-27 22:57:33 +02:00
marc hoffman
d0034b4fb9 Oxygene language detection — drop lexer setting, as we now have a proper Oxygene lexer in pygments.rb 2013-04-27 22:55:57 +02:00
Ted Nyman
0c3dcb0a9b Update color for UPC 2013-04-27 21:01:33 +08:00
Ted Nyman
3138fa79a0 Merge pull request #484 from waltherg/patch-1
Added support for Unified Parallel C
2013-04-27 06:00:17 -07:00
waltherg
c88170b6f6 Added support for Unified Parallel C
http://upc.gwu.edu/
2013-04-27 13:12:03 +02:00
Pointwise, Inc.
3b79cf3cf2 Add lexer 2013-04-24 11:30:00 -05:00
Ted Nyman
f3ee7072a6 Merge pull request #479 from CodeBlock/gemfile-https
Make Gemfile use https://rubygems.org
2013-04-24 09:09:25 -07:00
Ted Nyman
5b5d9da33c Merge pull request #477 from liluo/patch-1
added multi line comment flag for python
2013-04-24 09:07:28 -07:00
Ricky Elrod
dc1d17a051 Make Gemfile use https://rubygems.org 2013-04-21 00:35:13 -04:00
liluo
0bc28d9424 added multi line comment flag for python 2013-04-19 15:33:02 +08:00
Pointwise, Inc.
5b06a46451 Added Glyph scripting language 2013-04-18 16:11:50 -05:00
Ted Nyman
8b5b8a9760 Merge pull request #471 from mihaip/master
Detect source files generated by the Protocol Buffer compiler
2013-04-16 23:20:24 -07:00
Mihai Parparita
6c98bbf02c Detect source files generated by the Protocol Buffer compiler 2013-04-16 22:14:50 -07:00
Ted Nyman
9f0964cd7d Merge pull request #461 from github/detect-csv
Add `csv?` BlobHelper
2013-04-04 14:36:08 -07:00
Yaroslav Shirokov
b68732f0c7 Add detection for CSV 2013-04-04 14:01:09 -07:00
marc hoffman
15a746650c Merge branch 'master' of https://github.com/github/linguist 2013-04-03 13:20:50 +02:00
Ted Nyman
b99abba27f Merge pull request #455 from github/axml
Add axml extension to xml
2013-04-01 19:57:47 -07:00
Ted Nyman
9c12823d38 Add axml extension to xml 2013-04-01 19:56:38 -07:00
Ted Nyman
28bee50e6a Merge pull request #451 from github/pdfs
Add PDF detection
2013-03-25 21:14:16 -07:00
Garen Torikian
4148ff1c29 Add PDF detection 2013-03-25 15:45:58 -07:00
Giacomand
e408b5fbaa * Trying this. 2013-03-25 16:26:14 +00:00
Giacomand
e26bf5a0d2 - Moving diff to after DM. 2013-03-25 16:14:06 +00:00
Giacomand
465d60ba86 * Missed setting the lexer to Text Only. 2013-03-25 16:07:19 +00:00
Giacomand
d5c3978a6e * Fixed a mis-formatting. 2013-03-25 10:13:38 +00:00
Giacomand
d4312c05bf - Updated sample file. 2013-03-25 09:54:23 +00:00
Giacomand
7efad57176 Added:
* DM (Dream Maker) language.
 * Sample DM file.

The DM language is used in an engine known as BYOND, which allows users to easily create their own games in a language that is designed to be accessible for newcomers. I do not know how much a language has to be used on the site to be considered, but searching for "BYOND" does show a lot of people using the language. I am also still learning git, so if I have missed something then please let me know.
2013-03-25 09:49:00 +00:00
Ted Nyman
009bff6cc2 Merge pull request #448 from github/update-db
Update samples
2013-03-22 21:37:22 -07:00
Ted Nyman
c918c5b742 Update samples 2013-03-22 21:35:02 -07:00
Ted Nyman
4a33b7ae8e Merge pull request #150 from lparenteau/master
Add detection for the M programming language (aka MUMPS).
2013-03-22 21:32:50 -07:00
Ted Nyman
777952adcb Merge pull request #446 from github/ceylon-as-ceylon-not-textonly
Render Ceylon as Ceylon since it is now in Pygments
2013-03-18 17:40:17 -07:00
Matthew McCullough
ef4c47347d Render Ceylon as Ceylon since it is now in Pygments 2013-03-18 15:34:37 -07:00
Ted Nyman
5e34315bb3 Merge pull request #349 from tucnak/master
Support of Qt Designer .ui files
2013-03-18 12:50:19 -07:00
Illya Kovalevskyy
4f5624cd5f Order is fixed 2013-03-18 01:40:40 +02:00
Illya Kovalevskyy
f76d64f9aa Merge branch 'master' of github.com:github/linguist
Conflicts:
	lib/linguist/languages.yml
2013-03-18 01:36:19 +02:00
Ted Nyman
4444b6daa1 Merge pull request #441 from rdeltour/xml-group
Remove XProc and XSLT from the group XML
2013-03-17 16:14:59 -07:00
Stuart P. Bentley
ec786b73bc Add Erlang rebar escript bundles to vendor.yml
Fixes #236
2013-03-16 13:51:08 -07:00
Romain Deltour
7ca58f8dd9 Remove XProc and XSLT from the group XML 2013-03-15 12:40:59 +01:00
Laurent Parenteau
58420f62d9 Merged with upstream. Updated M (aka MUMPS) detection to use the new bayesian / samples method. 2013-03-14 11:33:09 -04:00
Ted Nyman
a20631af04 Merge pull request #373 from vincentwoo/patch-1
Add extension support for Iced Coffeescript
2013-03-13 23:10:33 -07:00
Ted Nyman
44995d6f62 Merge pull request #438 from richo/bugs/sample_db
Bugs/sample db
2013-03-12 23:32:31 -07:00
richo
2d7dea2d97 Don't emit the diff if samples db is out of date
There's a warning message emitted with instructions; a 2000-line diff
does nothing to help the user track down the issue.
2013-03-13 17:29:05 +11:00
richo
2cdbe64b66 Update samples db 2013-03-13 15:09:51 +11:00
Ted Nyman
030ad89a14 Bump to 2.6.8 2013-03-12 01:09:28 -07:00
Ted Nyman
a34ee513c0 Merge pull request #436 from github/ignore-test-fixtures
Vendor test/fixtures
2013-03-12 01:07:50 -07:00
Ted Nyman
96d29b7662 Vendor test/fixtures 2013-03-12 01:06:26 -07:00
Ted Nyman
3f077ea71e Merge pull request #383 from REAS/master
Update to include Processing as a new language
2013-03-11 18:39:03 -07:00
Ted Nyman
de94b85c0d Merge pull request #295 from yandy/patch-1
downcase extname when we determine whether it's an image
2013-03-10 15:39:55 -07:00
Ted Nyman
1c771cc27d Remove sample for now until test structure changes 2013-03-10 15:36:49 -07:00
Ted Nyman
a41ec3a801 Merge pull request #321 from mndrix/patch-1
Add a misclassified Prolog file
2013-03-10 15:34:34 -07:00
Ted Nyman
d9d9e01242 Update samples database 2013-03-10 15:26:46 -07:00
Ted Nyman
04abb5310a Add .pluginspec sample 2013-03-10 15:25:02 -07:00
Ted Nyman
c7ed9bd7b3 Better regex 2013-03-10 15:23:14 -07:00
Ted Nyman
8aadb5eeaa Merge pull request #312 from HerbertKoelman/master
Added to vendor.yml dependencies related to automake and autoconf
2013-03-10 15:22:17 -07:00
Casey Reas
e4b5593728 Add Processing to languages.yml, includes lexer: Java 2013-03-08 16:10:34 -08:00
marc hoffman
14d363b942 Oxygene language detection — trying whether making .pas not the primary extension (which Delphi also uses) fixes the build failure 2013-03-08 12:48:39 +01:00
marc hoffman
f8c6277946 Oxygene language detection — using the "Text only" lexer for now (why do we need this? other languages don't specify one) 2013-03-08 12:21:51 +01:00
marc hoffman
8254bcc3ac Oxygene language detection 2013-03-08 12:13:56 +01:00
Ted Nyman
f8389f0d93 Bump to 2.6.7 2013-03-07 20:18:44 -08:00
Ted Nyman
af12db9276 Update samples database 2013-03-07 20:18:07 -08:00
Ted Nyman
688a6bb581 Don't include .inc.
The format is used by too many other non-Lasso repos
2013-03-07 20:15:17 -08:00
Ted Nyman
5d5935965a Merge pull request #423 from gentoo90/nsis-lexer
Add NSIS installer scripting language
2013-03-07 17:26:14 -08:00
Ted Nyman
f795b20582 Merge pull request #391 from bfontaine/forth-samples
More Forth samples
2013-03-07 17:07:07 -08:00
Ted Nyman
c2023d33b9 Merge pull request #363 from dveeden/master
Add DOT language
2013-03-07 14:12:07 -08:00
gentoo90
d9c375b74a Add .nsh extension 2013-03-07 22:39:16 +02:00
gentoo90
7179ec56ef Add NSIS installer scripting language 2013-03-07 21:39:37 +02:00
Ted Nyman
26c850c37f Update samples.json to latest data 2013-03-06 19:59:33 -08:00
Ted Nyman
2023f35af7 Merge pull request #396 from elehcim/master
Added Matlab code samples
2013-03-06 19:58:42 -08:00
Ted Nyman
c0a57dbd1b Merge pull request #386 from rdeltour/xproc
New language: XProc - an XML Pipeline language (W3C)
2013-03-06 19:57:45 -08:00
Ted Nyman
78f072b46a 2.6.6 2013-03-06 15:29:25 -08:00
Ted Nyman
da51510597 Nix this generated check for now 2013-03-06 15:28:55 -08:00
Ted Nyman
47389cc827 Update samples and bump to 2.6.5 2013-03-06 14:50:50 -08:00
Ted Nyman
f035203e1c Bump to 2.6.4 2013-03-06 14:49:30 -08:00
Ted Nyman
083f6fc3b4 Merge pull request #421 from rvanmil/master
Add ABAP
2013-03-06 14:47:44 -08:00
Ted Nyman
d5bfe40f37 Fix deprecation warning 2013-03-06 14:47:01 -08:00
Ted Nyman
0b350defb5 Merge pull request #422 from brson/rust
Turn on Rust lexing. Add a bigger sample
2013-03-06 14:44:02 -08:00
Ted Nyman
88d0408875 Merge pull request #294 from DHowett/master
Add support for the Logos language.
2013-03-06 14:42:46 -08:00
Brian Anderson
c7a155efef Turn on Rust lexing. Add a bigger sample 2013-03-06 12:40:31 -08:00
Dustin L. Howett
9187fffc48 Update samples.json to include Logos. 2013-03-06 12:34:42 -08:00
Dustin L. Howett
7d2603ceb7 Add support for the Logos language. 2013-03-06 12:30:06 -08:00
René
c5bb287c74 Add ABAP 2013-03-06 09:24:42 +01:00
Ted Nyman
6b6f5eaaff Remove out of date notes 2013-03-04 13:31:05 -08:00
Ted Nyman
f3fa2317a6 Update samples.json, bump to 2.6.3 2013-03-04 13:19:40 -08:00
Ted Nyman
d096187196 Remove extra Forth extension 2013-03-04 12:40:10 -08:00
Ted Nyman
c5a3b34546 Merge pull request #419 from pborreli/typos
Fixed typos
2013-03-04 12:17:42 -08:00
Pascal Borreli
70eafb2ffc Fixed typos 2013-03-03 21:26:31 +00:00
Ted Nyman
983a3e6073 Minor README fixes 2013-03-02 23:19:10 -08:00
Xidorn Quan
fc8d2f641c Add samples and tests for minified CSS detection. 2013-03-02 14:27:51 +08:00
Xidorn Quan
9a5f9a5e9b Use space rate to distinguish minified files.
Minified JS files usually contain less than 2% spaces, while minified
CSS files may contain about 4% spaces. However, an unminified CSS file
may also have as little as 6% spaces, especially when it includes some
resources inline. Consequently, the division might not be appropriate
for CSS files. Even so, it will only mis-recognize a normal file
as minified in a few special cases.
2013-03-02 13:15:42 +08:00
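A rough illustration of the space-rate heuristic described in the commit above, as a minimal Ruby sketch. This is not Linguist's actual code; the method name and the 5% cutoff are invented for the example (the `minified_files?` check in the generated.rb diff further down this page keys off average line length instead).

```ruby
# Minimal sketch of the space-rate idea: count the share of space
# characters and treat a very low share as a sign of minification.
# The 5% cutoff and the method name are illustrative only.
def mostly_unspaced?(data)
  return false if data.empty?
  data.count(" ").to_f / data.length < 0.05
end

minified = "a{color:#fff;margin:0;padding:0}b{display:none}"
regular  = "body {\n  color: #fff;\n  margin: 0;\n}\n"

mostly_unspaced?(minified) # => true
mostly_unspaced?(regular)  # => false
```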
Xidorn Quan
806369ce7f Merge minified files detecting methods. 2013-03-01 20:50:49 +08:00
Xidorn Quan
4398cda9a5 Detect minified CSS files 2013-03-01 16:15:56 +08:00
Ted Nyman
cf6eeec22a Merge pull request #408 from soimort/master
Add support for Literate CoffeeScript
2013-02-26 22:29:46 -08:00
Mort Yao
583e6fe2e8 Add sample file for Literate CoffeeScript 2013-02-27 05:32:51 +01:00
Brian Lopez
500f8cd869 bump version to 2.6.2 2013-02-26 17:43:24 -08:00
Brian Lopez
2e5866e6d8 Merge pull request #413 from github/bump-escape-utils
Bump escape_utils
2013-02-26 17:42:33 -08:00
Brian Lopez
600648c8af bump escape_utils 2013-02-26 17:41:04 -08:00
Ted Nyman
1ac51d2261 Merge pull request #410 from skalnik/remove-obj
Remove OBJ from supported solids
2013-02-26 14:20:45 -08:00
Mike Skalnik
1766123448 Fix typo in comment 2013-02-26 14:00:42 -08:00
Mike Skalnik
5ea039a74e Remove OBJ files as supported solids 2013-02-26 14:00:29 -08:00
Michele Mastropietro
0af1a49cbd Added one more file 2013-02-26 09:23:19 +01:00
Mort Yao
151b7d53b0 Add support for Literate CoffeeScript 2013-02-26 02:51:42 +01:00
Ted Nyman
6e82d2a689 Merge pull request #354 from mrorii/master
Detect Cython-generated C/C++ files
2013-02-25 17:11:17 -08:00
Ted Nyman
b02c6c1e54 Bump to 2.6.1 2013-02-25 15:47:48 -08:00
Ted Nyman
cd406cc6b9 Remove extra extensions.
These are covered by samples, so we do not
need to mention them here
2013-02-25 15:46:18 -08:00
Ted Nyman
52d46ddc8c Merge pull request #385 from rdeltour/xslt
XSLT as a programming language
2013-02-25 15:03:27 -08:00
Ted Nyman
188fad1814 Update samples database 2013-02-25 15:01:13 -08:00
Ted Nyman
a86ff11084 Merge pull request #405 from github/new-pygments
New pygments
2013-02-25 00:54:59 -08:00
Ted Nyman
6630f3bc4a Just name 2013-02-25 00:53:56 -08:00
Ted Nyman
2164f285f5 Bump version, add toml 2013-02-25 00:52:58 -08:00
Ted Nyman
086855fcce Merge pull request #404 from github/new-pygments
Bump to latest pygments.rb
2013-02-25 00:20:51 -08:00
Ted Nyman
33b421ff0b Bump pygments 2013-02-25 00:19:02 -08:00
Ted Nyman
36e8fe1b25 Begin 2.6.0 series 2013-02-25 00:13:57 -08:00
Ted Nyman
9696ee589e Bump to pygments.rb 0.4.0 2013-02-25 00:13:21 -08:00
Romain Deltour
f66da93e64 Remove extension from the XML (it is declared in XSLT) 2013-02-25 09:12:31 +01:00
Daniël van Eeden
d766c14305 Update lib/linguist/languages.yml
Set lexer to Text only for DOT. This hopefully fixed the failure on Travis.
2013-02-25 08:15:37 +01:00
Daniël van Eeden
5b749060a4 Update lib/linguist/languages.yml
Change sort order
2013-02-25 08:08:06 +01:00
Ted Nyman
9c76078b4f Remove extra extension list 2013-02-24 22:53:49 -08:00
Ted Nyman
c54ffa78f4 Alphabetize Pike 2013-02-24 22:53:06 -08:00
Ted Nyman
dde1addced Merge pull request #170 from johan/detect-pike-language
Added detection for the Pike language.
2013-02-24 22:50:43 -08:00
Ted Nyman
6108d53eb2 Merge pull request #400 from kevinjalbert/add-txl
Add TXL language
2013-02-24 22:49:39 -08:00
Casey Reas
7ae475a811 Put Processing language into alphabetical order, re: #383 2013-02-23 19:27:05 -08:00
Ted Nyman
c3c2c9c7fe Merge pull request #402 from PulsarBlow/language-typescript
TypeScript language support
2013-02-23 15:22:29 -08:00
Ted Nyman
f8955e919b Merge pull request #401 from jdutil/patch-2
Add deface extension support.
2013-02-23 15:21:44 -08:00
PulsarBlow
dc9ad22ec4 TypeScript language support
Signed-off-by: PulsarBlow <pulsarblow@gmail.com>
2013-02-23 23:40:40 +01:00
Jeff Dutil
e33cf5f933 Add deface extension support. 2013-02-23 16:03:51 -05:00
Kevin Jalbert
4c7b432090 Rename sample file's extension to match languages.yml 2013-02-23 13:32:36 -05:00
Ted Nyman
8afd6a1bd8 Merge pull request #342 from svenefftinge/master
languages.yml: add Xtend
2013-02-23 10:21:01 -08:00
Kevin Jalbert
7725bbb36b Add TXL language
Add:
 * TXL language
 * Sample TXL file
2013-02-23 13:19:10 -05:00
Ted Nyman
333d9cfffb Merge pull request #399 from BPScott/add-editorconfig
Add .editorconfig as an INI file
2013-02-23 10:18:40 -08:00
Ben Scott
495b50cbda Add .editorconfig as an INI file
See http://editorconfig.org
2013-02-23 16:27:24 +00:00
Sven Efftinge
fe8dbd662b Update lib/linguist/languages.yml
added primary_extension: .xtend
2013-02-23 13:50:04 +01:00
Illya
cdde73f5ee The extension list is alphabetized 2013-02-23 12:51:59 +02:00
Ted Nyman
05c49245b0 Fix whitespace 2013-02-23 02:39:44 -08:00
Ted Nyman
0955dd2ef0 Merge pull request #278 from DrItanium/master
Add support for the CLIPS programming language
2013-02-23 02:38:50 -08:00
Ted Nyman
6c5a9e97fe Merge pull request #376 from evanmiller/detect-opencl
Treat .opencl files as OpenCL
2013-02-23 02:37:44 -08:00
Ted Nyman
e5d2795ec0 Alphabetize 2013-02-23 02:29:17 -08:00
Ted Nyman
61aa378c45 Remove extra lexer 2013-02-23 02:26:10 -08:00
Ted Nyman
db296bee80 Merge pull request #318 from stuarthalloway/master
Datomic DTM files
2013-02-23 02:25:27 -08:00
Ted Nyman
3e091eacc2 Merge pull request #397 from unnali/rouge
Rouge
2013-02-22 19:47:40 -08:00
Arlen Christian Mart Cuss
b2303eac1e Add Rouge. 2013-02-23 14:13:12 +11:00
Arlen Christian Mart Cuss
c01e347bc0 Correct documentation, README grammar. 2013-02-23 14:13:12 +11:00
Ted Nyman
6d8583a0b4 Merge pull request #395 from featurist/master
add PogoScript language (no samples.json!)
2013-02-22 11:30:36 -08:00
Michele Mastropietro
c85255c5af Added matlab code samples.
All of these code samples currently are mis-identified in my repositories. I'm
donating them to the cause.
2013-02-22 10:57:51 +01:00
Tim Macfarlane
5fac67cea5 add PogoScript detection 2013-02-22 09:31:06 +00:00
Johan Sundström
7b9e0afef9 Reverted pike tests until such time as we have a pike lexer here. 2013-02-21 23:23:00 -08:00
Ted Nyman
b45c4f5379 Merge pull request #335 from rofl0r/dpryml
languages.yml: add .dpr and .dfm extension to Delphi
2013-02-21 22:58:44 -08:00
Ted Nyman
1fa4ed6bc2 Merge pull request #255 from seanupton/master
Syntax highlighting (XML) for Zope .zcml and .pt files
2013-02-21 22:49:18 -08:00
Ted Nyman
2d16f863f7 Revert "Merge pull request #171 from ianmjones/patch-1"
This reverts commit f5ebbd42d3, reversing
changes made to b998a5c282.
2013-02-21 22:09:59 -08:00
Ted Nyman
f5ebbd42d3 Merge pull request #171 from ianmjones/patch-1
Added REALbasic language.
2013-02-21 22:04:53 -08:00
Ted Nyman
b998a5c282 Merge pull request #239 from db0company/master
Add .eliom extension for Ocsigen (OCaml web framework)
2013-02-21 22:01:03 -08:00
Ted Nyman
58a9b56f4d Merge pull request #253 from Tass/master
Binary mime type override if languages.yml says so
2013-02-21 21:49:09 -08:00
Ted Nyman
3ceae6b5c1 Merge pull request #164 from michaelmior/master
Add Awk lexer
2013-02-21 21:41:56 -08:00
Ted Nyman
2612ea35bc Merge pull request #259 from afronski/master
Adding vendor files for django (admin_media) and the SyntaxHighlighter JavaScript library
2013-02-21 21:28:37 -08:00
Ted Nyman
5bf2299461 Alphabetize python extensions 2013-02-20 16:38:55 -08:00
Kevin Sawicki
b26e4a7556 Add .gyp to Python extensions 2013-02-20 16:36:10 -08:00
Ted Nyman
c9bd6096b9 Merge pull request #364 from zacstewart/ragel-ruby
Add Ragel Ruby to languages
2013-02-20 16:27:32 -08:00
Ted Nyman
7d50697701 Merge pull request #390 from boredomist/patch-1
Add ASDF files to Common Lisp
2013-02-17 21:02:31 -08:00
Erik Price
e2314b57fe Alphabetize Common Lisp extensions. 2013-02-17 22:59:44 -06:00
Baptiste Fontaine
055743f886 More Forth samples. 2013-02-18 00:21:46 +01:00
Erik Price
152151bd44 Add ASDF files to Common Lisp 2013-02-17 13:50:48 -06:00
Ted Nyman
2431f2120c Merge pull request #388 from tinnet/master
Added Monkey Language
2013-02-16 18:08:29 -08:00
Tinnet Coronam
6a8e14dcf3 added monkey language (new in pygments 1.6) 2013-02-16 18:01:47 +01:00
Ted Nyman
a07d6f82ee Bump to 2.5.1 2013-02-15 18:48:32 -08:00
Ted Nyman
116d158336 Update samples.json 2013-02-15 18:48:05 -08:00
Ted Nyman
4863d16657 Bump to 2.5.0 2013-02-15 17:35:03 -08:00
Romain Deltour
da97f1af28 added XML lexer 2013-02-15 11:27:59 +01:00
Romain Deltour
6a03ea048b New language: XProc - an XML Pipeline language (W3C) 2013-02-15 11:22:35 +01:00
Romain Deltour
7924d0d8f8 XSLT as a programming language 2013-02-15 11:05:45 +01:00
Ted Nyman
781cd4069c Merge pull request #384 from ruv/more-forth-extenstions
Add .4th as alternate Forth file extension
2013-02-14 15:54:50 -08:00
ruv
505a361d98 '.4th' is also often used for the Forth language 2013-02-15 01:51:37 +04:00
Kevin Sawicki
c493c436da Register TextMate extensions as XML 2013-02-13 10:32:25 -08:00
Casey Reas
fb7c97c83f Samples for Processing language, changes to languages.yml 2013-02-13 09:12:30 -08:00
Sven Efftinge
b13001c5cc Added samples for Xtend 2013-02-13 08:33:22 +01:00
Ted Nyman
4e916ce94b Merge pull request #380 from github/tml
Add Tapestry (.tml) to XML
2013-02-11 16:11:56 -08:00
Ted Nyman
1fad3be12a Add Tapestry (.tml) to XML 2013-02-11 16:10:31 -08:00
Ted Nyman
6b688ba696 Merge pull request #251 from ptrv/add-scd-supercollider-extension
Add .scd extension to SuperCollider.
2013-02-11 16:05:53 -08:00
Ted Nyman
48d8919043 Merge pull request #359 from ntkme/master
Add fish support (.fish)
2013-02-11 16:03:46 -08:00
Richard Osborne
0479f72a93 Add detection for the XC programming language. 2013-02-09 13:13:21 +00:00
Michael Mior
1877c8c383 Add Awk lexer and sample 2013-02-08 14:19:26 -05:00
Evan Miller
5f6d74d849 Treat .opencl files as OpenCL 2013-02-07 18:24:36 -06:00
なつき
72ae6cd8ca Add fish support 2013-02-04 02:08:50 +08:00
Vincent Woo
8457f6397d Add extension support for Iced Coffeescript 2013-02-03 04:23:37 -08:00
Ted Nyman
24820ed935 Merge pull request #372 from github/more-shell-extensions
Add .bash and .tmux as alternate shell extensions
2013-02-01 15:00:41 -08:00
Ted Nyman
ad6947eeb4 Add .bash and .tmux as alternate shell extensions 2013-02-01 22:58:58 +00:00
Ted Nyman
9c27ec0313 Alphabetize verilog extension list 2013-02-01 22:30:01 +00:00
Ted Nyman
7a21d66877 Merge pull request #360 from skalnik/add-solid-support
Add Blob#solid? helper
2013-02-01 14:24:36 -08:00
Ted Nyman
7c1265cd2d Merge pull request #368 from cjdrake/master
Add Verilog (.vh) and SystemVerilog (.sv, .svh) filename extensions
2013-02-01 14:23:17 -08:00
Ted Nyman
6d73ae58b6 Regenerate samples.json 2013-02-01 22:17:44 +00:00
Ted Nyman
2d9d6f5669 Merge pull request #367 from moorepants/matlab-samples
Added matlab code samples.
2013-02-01 14:13:01 -08:00
Chris Drake
0a49062a02 Add Verilog/SystemVerilog filename extensions
Most Verilog files use the *.vh extension for header files.

Since the IEEE 1800-2009 SystemVerilog standard, it is common for
hardware and verification files written using the newer language
constructs to use the *.sv extension for design elements, and *.svh for
headers.
2013-01-30 22:02:31 -08:00
Jason Moore
04bab94c89 Removed copyrighted file. 2013-01-30 13:36:33 -08:00
Jason Moore
9bb230d7c8 Added matlab code samples.
All of these code samples currently are mis-identified in my repositories. I'm
donating them to the cause.
2013-01-30 13:12:45 -08:00
Ted Nyman
121f096173 Merge pull request #357 from uo-hrsys/patch-2
Add dita file extension to the XML type
2013-01-27 22:39:17 -08:00
Ted Nyman
c06f3fbc57 Merge pull request #358 from nicolasdanet/maxmsp
Added Max/MSP extensions in languages.yml
2013-01-27 22:31:21 -08:00
Ted Nyman
831f8a1f1f Merge pull request #361 from mattdbridges/patch-1
Adding homepage to gemspec
2013-01-27 22:19:34 -08:00
Zac Stewart
5e4623a44a Rename ragel ruby samples to match language name 2013-01-22 17:43:08 -05:00
Zac Stewart
1a60a00d3e Add Ragel Ruby to languages 2013-01-21 21:38:40 -05:00
Daniël van Eeden
08eef5f110 Update lib/linguist/languages.yml
Add .gv (GraphViz) file extension to DOT language.
2013-01-21 12:18:16 +01:00
Daniël van Eeden
0e2d3a2ac1 Update lib/linguist/languages.yml
Add DOT language: http://www.graphviz.org/content/dot-language
2013-01-20 14:06:36 +01:00
Matt Bridges
f852df397b Adding homepage to gemspec 2013-01-18 12:58:44 -06:00
Mike Skalnik
041ab041ae Add binary & ascii STLs and OBJs 2013-01-17 14:15:01 -08:00
nicolasdanet
ad9a57f8f9 Added Max/MSP extensions in languages.yml 2013-01-17 08:08:10 +01:00
Human Resources
b2bf4b0bd9 Add dita file extension to the XML type
2nd try. Add dita file extension to the XML markup.
DITA is the OASIS Darwin Information Typing Architecture used for technical documentation.
@see https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=dita
2013-01-16 10:21:09 -05:00
Kevin Sawicki
c625642845 Add .cson to CoffeeScript extensions 2013-01-15 09:44:04 -08:00
Naoki Orii
35e077ce86 Detect cython-generated files 2013-01-12 23:48:04 -05:00
herbertkoelman
7839459607 Merge branch 'master' of https://github.com/github/linguist 2013-01-11 00:43:24 +01:00
Illya
212be40710 .ui file extension added for XML language
Qt uses .ui files to store Qt Designer UI definitions in XML
2013-01-10 02:06:40 +02:00
Stuart Halloway
dc8685f918 remove redundant specification 2013-01-09 08:18:10 -05:00
Ted Nyman
75072ae5cc README code fencing 2013-01-08 17:11:16 -08:00
Ted Nyman
3edd765076 Merge pull request #232 from strangewarp/patch-1
Add .pd_lua extension for Lua
2013-01-08 16:25:55 -08:00
Ted Nyman
1d66e593e2 Merge pull request #346 from github/remove-extra-extensions
Remove extra extensions
2013-01-08 16:14:49 -08:00
C.D. Madsen
b8bafd246e Add examples of .pd_lua files
Added examples of .pd_lua files, which create Lua objects that are
interpreted by PureData.
2013-01-08 15:50:31 -07:00
Ted Nyman
95c822457a Merge pull request #231 from bfontaine/master
Detection added for Forth & Omgrofl
2013-01-08 04:37:18 -08:00
Ted Nyman
26df1034ec Merge pull request #221 from fkg/master
Add new extensions to lib/linguist/languages.yml
2013-01-08 04:31:44 -08:00
Ted Nyman
c495d19540 Merge pull request #222 from tiwe-de/master
ignore Debian packaging
2013-01-08 04:23:17 -08:00
Ted Nyman
b405847573 Merge pull request #261 from justinclift/typofixes
Trivial typo fixes.
2013-01-08 04:21:28 -08:00
Ted Nyman
1abcb2edb7 Merge pull request #246 from leafo/master
Add MoonScript
2013-01-08 04:15:37 -08:00
Ted Nyman
e3669d2bb6 Keep bash alias 2013-01-07 19:08:29 -08:00
Ted Nyman
1b9a49e226 Add field for Ada 2013-01-07 19:04:56 -08:00
Ted Nyman
0ee716b1e9 Fix up batchfile extension 2013-01-07 19:03:40 -08:00
Ted Nyman
9469f481f3 Keep cmake extensions field 2013-01-07 19:00:44 -08:00
Ted Nyman
acc190bb04 Remove extensions if we already have the primary_extension 2013-01-07 18:59:18 -08:00
leaf corcoran
5953e22efb drop extra extension information for MoonScript 2013-01-07 18:57:36 -08:00
Ted Nyman
2c26486588 Merge pull request #324 from paulmillr/topics/livescript
Add LiveScript support.
2013-01-07 18:50:01 -08:00
leaf corcoran
e9d2c0cf28 add MoonScript sample 2013-01-07 18:49:02 -08:00
Paul Miller
a35c3ca739 Change LiveScript colour. 2013-01-08 04:43:54 +02:00
Ted Nyman
0ee2f17a61 Merge pull request #344 from BPScott/add-less
Add LESS support (.less)
2013-01-07 18:39:16 -08:00
Ben Scott
83ce189a82 Add LESS support (.less)
Cheating slightly as it uses the CSS lexer, as pygments currently does
not have a dedicated less lexer. But I figure language recognition and
90% correct syntax highlighting is better than neither.
2013-01-07 15:52:09 +00:00
Sven Efftinge
c97e112c72 Added Xtend (xtend-lang.org) to languages.yml 2013-01-06 19:39:56 +01:00
Paul Miller
eee124f6c6 Add LiveScript support. 2013-01-03 22:45:08 +02:00
Ted Nyman
adc9246f66 Merge branch 'lasso' 2013-01-02 14:12:58 -08:00
Steve Piercy
560555bcd8 sorted extensions for Lasso in lib/linguist/languages.yml 2013-01-02 14:09:09 -08:00
Steve Piercy
900a6bc2b8 add extensions for Lasso in lib/linguist/languages.yml 2013-01-02 14:09:09 -08:00
Steve Piercy
3613d09c38 add Ecl to lib/linguist/languages.yml 2013-01-02 14:09:08 -08:00
Ted Nyman
02749dd5cf Merge pull request #331 from github/latest-pygments
Pessimistic versioning for pygments.rb, and bump to latest
2013-01-02 13:56:31 -08:00
Ted Nyman
abda879d5a Merge pull request #325 from greghendershott/racket-lexer
Use new Racket lexer from pygments.rb 0.3.3
2013-01-02 13:48:49 -08:00
rofl0r
d2e909677b languages.yml: rearrange .dpr and .dfm 2013-01-02 15:27:28 +01:00
rofl0r
baa42daae8 languages.yml: add .dpr and .dfm extension to Delphi
.dfm files are Delphi forms.
.dpr is the main source file, before any .pas.

If your Delphi app does not use any forms or units
(e.g. a console app), there is basically only one .dpr file.
2013-01-02 15:03:24 +01:00
Greg Hendershott
0b2465482a Update test: Racket language uses Racket lexer.
This is https://github.com/greghendershott/linguist/pull/1 from @tnm.
That pull request is onto my master branch, not my `racket-lexer`
topic branch. If there is a way to accept the pull request onto my
topic branch, I don't have time to figure it out right now. As a
result I'm making my own commit.
2013-01-02 07:46:58 -05:00
Ted Nyman
453a097c22 Pessimistic versioning for pygments, and bump to latest 2013-01-02 02:42:12 -08:00
Steve Piercy
4b26a56e64 Merge remote branch 'upstream/master' into lasso 2013-01-02 02:33:35 -08:00
Steve Piercy
c1d54db2cc One more try to pass Travis build. Crossing fingers... 2013-01-02 02:20:09 -08:00
Ted Nyman
bcaeb5d464 Fix readme link 2013-01-02 02:12:17 -08:00
Ted Nyman
d65bbfbe8d Update README.md 2013-01-02 01:28:37 -08:00
Steve Piercy
4c9b16aa08 Forcing another Travis build, now that GitHub's pygments.rb is at v 0.3.5. See https://github.com/github/linguist/pull/325#issuecomment-11802593 2013-01-02 01:23:31 -08:00
Greg Hendershott
8355f5031a Use new Racket lexer from pygments.rb 0.3.3
Racket files had been using the Scheme lexer.
2012-12-28 22:28:22 -05:00
Michael Hendricks
c794c6e24b Add a misclassified Prolog file
This Prolog file was misclassified as Perl.  I assume linguist
was confused because the file has many comments.  Nevertheless,
there are plenty of Prolog-distinguishing tokens such as `:-`,
`module`, `%%`, capitalized variable names, `foo/2`, etc.
2012-12-22 13:08:06 -08:00
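Ambiguous cases like this Prolog-versus-Perl file are resolved by the token classifier whose diff appears further down this page. Here is a hedged sketch of how its `train!` and `classify` calls fit together; the two tiny training strings are invented for illustration and bear no resemblance to the real sample corpus.

```ruby
require 'linguist/classifier'

db = {}
# Train on two tiny, hand-written snippets (illustrative only).
Linguist::Classifier.train!(db, 'Prolog', "parent(X, Y) :- father(X, Y).\n%% helper")
Linguist::Classifier.train!(db, 'Perl',   "use strict;\nmy $x = shift; # helper")

# classify returns [language, score] pairs sorted by log-probability.
Linguist::Classifier.classify(db, ":- module(foo, [bar/2]).", ['Prolog', 'Perl']).first
# => ["Prolog", <negative log score>] when the Prolog-leaning tokens win
```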
Herbert Koelman
611b790a2c Merge remote-tracking branch 'upstream/master' 2012-12-20 22:21:20 +01:00
Stuart Halloway
78708df79d better: edn is generic 2012-12-18 09:16:59 -05:00
Stuart Halloway
54a4af75b5 (BFDD) build-system failure driven development 2012-12-18 08:59:39 -05:00
Stuart Halloway
72d698ebaa Datomic dtm files 2012-12-18 08:11:44 -05:00
Steve Piercy
209f9f0072 Force Travis run 2012-12-18 00:50:48 -08:00
Steve Piercy
93457746ac Merge remote branch 'upstream/master' into lasso 2012-12-18 00:49:00 -08:00
Joshua Peek
2696a9c5e7 Linguist 2.4.0 2012-12-10 09:47:42 -06:00
Joshua Peek
7c170972a0 Add shell samples 2012-12-10 09:45:54 -06:00
Joshua Peek
d00dfd82c1 Add samples for apache and nginx confs 2012-12-10 09:37:42 -06:00
Joshua Peek
9003139119 Can't have 2 identical primary extensions 2012-12-10 09:30:55 -06:00
Joshua Peek
36e867ec76 Require newer pygments 2012-12-10 09:18:35 -06:00
Joshua Peek
cf4813979c Remove already defined extensions 2012-12-10 09:14:19 -06:00
Joshua Peek
7e12c3eff1 Update samples 2012-12-10 09:13:14 -06:00
Joshua Peek
281cc985bf Merge pull request #288 from wagenet/handlebars
Add Handlebars
2012-12-10 07:06:59 -08:00
Joshua Peek
dcc2be0781 Merge branch 'master' into dont-explode-on-invalid-shebang
Conflicts:
	lib/linguist/samples.json
	test/test_tokenizer.rb
2012-12-10 09:02:24 -06:00
Joshua Peek
161d076bfd Remove duplicate extension 2012-12-10 09:00:17 -06:00
Joshua Peek
09fbcc9a72 Merge pull request #298 from johanatan/master
Adds Elm.
2012-12-10 06:58:32 -08:00
Joshua Peek
ee2b92cf82 Merge pull request #307 from mislav/aliases
A couple of useful language aliases
2012-12-10 06:55:09 -08:00
Herbert Koelman
3511380c72 Added to vendor.yml the following dependencies related to automake and autoconf:
- (^|/)configure
- (^|/)configure.ac
- (^|/)config.guess
- (^|/)config.sub

Before changing:
[herbert@vps11071 linguist]$ bundle exec linguist ../atmi++/
75%  Shell
15%  C++
10%  C
0%   Perl

After changing:
54%  C++
37%  C
9%   Shell
0%   Perl
2012-12-10 00:07:20 +01:00
Steve Piercy
38736a2db9 force travis update 2012-12-07 02:33:23 -08:00
Mislav Marohnić
720914b290 add filename tests for shell config files 2012-12-06 23:54:22 +01:00
Daniel Micay
16f8e54ed7 detect common shell config files 2012-12-06 23:53:55 +01:00
Andy Li
50ecb63058 haXe is now "Haxe"
According to https://groups.google.com/forum/#!topic/haxelang/O7PB-ZrX4i4/discussion

The lexer in Pygments is not renamed yet, so it stays as-is for the moment.
2012-12-06 23:42:04 +01:00
Tobin Fricke
586650f01c add .C and .H as file extensions for C++
"C" and "H" are two file extensions recognized by gcc as indicating C++
source code. The full list may be found here:
http://gcc.gnu.org/onlinedocs/gcc-4.4.1/gcc/Overall-Options.html#index-file-name-suffix-71
2012-12-06 23:28:32 +01:00
Mislav Marohnić
ae753e6e88 add Nginx language 2012-12-06 23:25:54 +01:00
Mislav Marohnić
04a2845e91 add ApacheConf language
Recognizes httpd/apache2.conf and .htaccess files
2012-12-06 23:25:29 +01:00
Mislav Marohnić
acb20d95ca "coffee-script" ☞ CoffeeScript 2012-12-06 23:04:53 +01:00
Steve Piercy
5a9ef5eac2 Merge remote branch 'upstream/master' into lasso
Conflicts:
	lib/linguist/languages.yml
2012-12-05 12:55:30 -08:00
Steve Piercy
287e1b855d Forcing travis check 2012-12-05 12:30:06 -08:00
Mislav Marohnić
d3ebe1844d add HTTP language
Useful for `curl -i` dumps. Had to add primary_extension although this
data is usually not saved in files, but shown as code blocks.
2012-12-04 16:26:11 +01:00
Mislav Marohnić
fc8492e8f7 "yml" ☞ YAML 2012-12-04 16:11:52 +01:00
Mislav Marohnić
ff5ffd0482 "rss/xsd/xsl/wsdl" ☞ XML 2012-12-04 16:11:52 +01:00
Mislav Marohnić
50db6d0150 "latex" ☞ TeX 2012-12-04 16:11:52 +01:00
Mislav Marohnić
2e0b854428 "obj-j" ☞ Objective-J 2012-12-04 16:11:52 +01:00
Mislav Marohnić
1dfb44cff7 "obj-c/objc" ☞ Objective-C 2012-12-04 16:11:51 +01:00
Mislav Marohnić
0a8fad2040 "make" ☞ Makefile 2012-12-04 16:11:51 +01:00
Mislav Marohnić
9b97d3ac8a "erb" ☞ RHTML 2012-12-04 16:11:51 +01:00
Mislav Marohnić
26e78c0c1b "xhtml" ☞ HTML 2012-12-04 16:11:51 +01:00
Joshua Peek
b036e8d3c2 Merge pull request #305 from DominikTo/php-cli
Fixed detection of PHP CLI scripts (added samples)
2012-12-02 07:54:14 -08:00
Dominik Tobschall
f84a904ad8 fixed typo 2012-12-02 14:11:04 +01:00
Dominik Tobschall
b1684037d6 added php cli samples 2012-12-02 14:05:52 +01:00
Jonathan Leonard
1c85d0b38a Added Elm. 2012-11-25 20:39:58 -08:00
Michael Ding
97c998946b determine image with downcase extname 2012-11-22 20:30:59 +08:00
Michael Ding
8529c90a4d use downcase string for extname 2012-11-22 17:14:45 +08:00
Ben Lavender
ec3434cf1d Don't explode on invalid shebang 2012-11-18 20:56:06 -06:00
Peter Wagenet
0e20f6d454 Added Handlebars language 2012-11-12 17:16:18 -08:00
Joshua Scoggins
696573b14c Fixed an issue where the lexer was not explicitly stated for CLIPS 2012-10-22 00:00:08 -07:00
Joshua Scoggins
fbb31f018c Added support for the CLIPS programming language
CLIPS, or C Language Integrated Production System, is a tool for writing expert
systems.
2012-10-21 23:46:09 -07:00
Joshua Peek
d92d208a45 Fix tests for pygments.rb 0.3.x 2012-10-07 15:39:02 -05:00
Joshua Peek
b798e28bfb No warnings 2012-10-07 15:37:09 -05:00
Joshua Peek
ebd6077cd7 Add wrap flag to text languages 2012-10-07 15:34:13 -05:00
Joshua Peek
9e9500dfa9 Linguist 2.3.4 2012-09-24 10:54:17 -05:00
Joshua Peek
04cc100fba Rebuild samples db 2012-09-24 10:52:05 -05:00
Joshua Peek
31e33f99f2 Ensure lang is skipped on any binary file 2012-09-24 10:51:39 -05:00
Joshua Peek
7c51b90586 Skip empty sample 2012-09-24 10:50:49 -05:00
Joshua Peek
2b36f73da6 Some comments are triggering charlock binary 2012-09-24 10:48:22 -05:00
Joshua Peek
d96dd473b8 Rebuild samples db 2012-09-24 10:12:18 -05:00
Joshua Peek
f9066ffb7b Sort exts and filenames 2012-09-24 10:12:05 -05:00
Joshua Peek
945941d529 Update samples db 2012-09-24 10:07:58 -05:00
Joshua Peek
10e875e899 Print out samples db diffs 2012-09-24 10:07:08 -05:00
Justin Clift
7f87d22d78 Trivial typo fixes. 2012-09-22 20:32:56 +10:00
Wojciech Gawroński
d890b73c2f Adding vendor files for SyntaxHighlighter and django (admin_media directory). 2012-09-21 13:57:08 +02:00
Justin Palmer
d24e5c938e sample directory needs uppercase E 2012-09-20 15:23:58 -07:00
Justin Palmer
aa069a336f add color to ecl language 2012-09-20 15:16:06 -07:00
Justin Palmer
662fc2ee9d Merge remote-tracking branch 'rengolin/ecl' 2012-09-20 15:07:41 -07:00
Sean Upton
eca1f61dab Merge branch 'master' of github.com:seanupton/linguist 2012-09-18 14:28:01 -06:00
Sean Upton
4126d0e445 Added extensions to languages.yml for XML highlighting of Zope Page Templates (.pt) and Zope Configuration Markup Language (.zcml). 2012-09-18 14:27:36 -06:00
Sean Upton
1d3cffc6dd Added extensions to languages.xml for XML highlighting of Zope Page Templates (.pt) and Zope Configuration Markup Language (.zcml). 2012-09-18 14:24:35 -06:00
Simon Hafner
675d0865da fixed typo 2012-09-13 14:56:44 -05:00
Simon Hafner
b954d22eba Override for binary mime type based on languages.yml
If the extension already exists in languages.yml, it's probably not a
binary, but code.
2012-09-13 14:55:31 -05:00
Ryan Tomayko
567cd6ef68 Merge pull request #250 from github/mac-format
Handle Mac Format when splitting lines
2012-09-11 14:17:21 -07:00
ptrv
01981c310d Add .scd extension to SuperCollider. 2012-09-11 00:26:54 +02:00
Ryan Tomayko
887a050db9 Only search the first 4K chars for \r 2012-09-10 01:56:08 -07:00
Ryan Tomayko
bda895eaae Test Mac Format detection and line splitting 2012-09-10 01:52:30 -07:00
Ryan Tomayko
2e49c06f47 Handle Mac Format when splitting lines 2012-09-10 01:05:48 -07:00
Joshua Peek
ae137847b4 Linguist 2.3.3 2012-09-04 09:32:21 -05:00
Scott J. Goldman
5443dc50a3 Merge pull request #247 from github/check-size-first
When testing if a blob is indexable or safe to colorize, check size first
2012-09-02 00:09:51 -07:00
Scott J. Goldman
fc435a2541 Linguist 2.3.2 2012-09-02 00:08:37 -07:00
Scott J. Goldman
04394750e7 When testing if a blob is safe to colorize, check size first
Similar to e415a13
2012-09-02 00:08:37 -07:00
Scott J. Goldman
e415a1351b When testing if a blob is indexable, check size first
Otherwise, charlock_holmes will allocate another large binary
buffer for testing the encoding, which is a problem if the binary
blob is many hundreds of MB in size. It'll just fail and crash Ruby.
2012-08-31 22:47:19 -07:00
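A minimal sketch of the ordering this commit describes: check the cheap size attribute before handing bytes to charlock_holmes, so an enormous blob is rejected without ever being buffered for encoding detection. The method name and the 1 MB cutoff are illustrative, not the exact blob_helper code (which appears in the diff further down this page).

```ruby
require 'charlock_holmes'

MEGABYTE = 1024 * 1024

# Illustrative only: reject on size first, then fall back to the same
# detector call blob_helper uses for its encoding check.
def safe_to_inspect?(data)
  return false if data.bytesize > MEGABYTE
  detection = CharlockHolmes::EncodingDetector.new.detect(data)
  !detection.nil? && detection[:type] == :text
end
```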
leaf corcoran
0ff50a6b02 add MoonScript (again) 2012-08-29 21:18:50 -07:00
Joshua Peek
6ec907a915 Merge pull request #245 from jcazevedo/master
Add Shell sample
2012-08-28 10:55:11 -07:00
Joao Azevedo
1f55f01fa9 Add Shell sample 2012-08-28 18:01:46 +01:00
Joshua Peek
5d79b88875 Linguist 2.3.1 2012-08-27 11:34:55 -05:00
Joshua Peek
458890b4b9 Add C++ sample 2012-08-27 11:33:28 -05:00
Joshua Peek
89267f792d Rebuild samples db 2012-08-27 11:30:44 -05:00
Joshua Peek
b183fcca05 Only read up to 100KB 2012-08-27 11:30:38 -05:00
Joshua Peek
684a57dbc0 Add another C sample 2012-08-27 11:21:57 -05:00
db0
e857b23429 .eliom extension in OCaml extensions properly sorted 2012-08-27 12:16:47 +02:00
db0
09c76246f6 Add .eliom extension for Ocsigen (OCaml web framework) 2012-08-27 11:41:43 +02:00
Joshua Peek
400086a5c8 Add more C samples
Closes #237
2012-08-23 13:38:16 -05:00
Joshua Peek
38b966a554 Linguist 2.3.0 2012-08-20 11:50:35 -05:00
Joshua Peek
31b0df67b7 Require newer mime-type gem 2012-08-20 11:42:04 -05:00
Joshua Peek
cfe496e9fc Drop mime type module
Closes #206
2012-08-20 11:40:32 -05:00
Joshua Peek
b85aeaad3e Inline mime type lookup into blob helper 2012-08-20 11:33:16 -05:00
Joshua Peek
64f3509222 Remove other mime type hacks 2012-08-20 11:29:22 -05:00
Joshua Peek
f8df871d85 Only double check binary mime type when lazy loading blob 2012-08-20 11:20:37 -05:00
Joshua Peek
620150d188 Only double check with binary mime type when lazy loading blob 2012-08-20 11:14:45 -05:00
Joshua Peek
630dca515a Trim down mime type overrides that are old or now pushed upstream
Related #206
2012-08-20 11:11:42 -05:00
Joshua Peek
d2de997fcc Add more Prolog samples
Closes #233
2012-08-20 10:48:36 -05:00
Joshua Peek
b8711f8ccf Merge pull request #228 from github/cpp-samples
Add more C++ samples
2012-08-20 08:36:10 -07:00
Joshua Peek
34aaab19b2 Rebuild samples db 2012-08-20 10:34:37 -05:00
Joshua Peek
220108857c Skip emitting comment tokens 2012-08-20 10:34:07 -05:00
Steve Piercy
31d6b110d2 Add more samples with listed extensions. Remove extension specification. Clarify comments at top of languages.yml. 2012-08-19 16:49:20 -07:00
Steve Piercy
29a0db402c Lasso lexer name added 2012-08-19 06:47:02 -07:00
Steve Piercy
21a7fe9f12 Lasso extensions sorted 2012-08-19 06:40:15 -07:00
Steve Piercy
3b558db518 adding Lasso language and sample files 2012-08-19 06:29:16 -07:00
C.D. Madsen
44066fbb0b Add .pd_lua extension for Lua
.pd_lua is the required extension for any Lua files written to directly communicate with Puredata, via the pdlua library.
2012-08-18 06:14:41 -06:00
Baptiste Fontaine
0c2794e9de Forth extensions sorted 2012-08-17 18:09:06 +02:00
Baptiste Fontaine
69a9ac9366 Forth & Omgrofl lexers set to Text Only 2012-08-17 18:03:09 +02:00
Baptiste Fontaine
59e199d0c3 Detection added for Forth & Omgrofl 2012-08-17 16:52:00 +02:00
Joshua Peek
657adaabec Add more C++ samples
Closes #225
2012-08-15 11:57:55 -07:00
Joshua Peek
a41f40a30e Remove extname from bin out 2012-08-15 09:31:01 -07:00
Timo Weingärtner
a572b467b4 testcase for 90f1ba9 2012-08-15 02:11:15 +03:00
Timo Weingärtner
90f1ba95a4 lib/linguist/vendor.yml: ignore Debian packaging
This should prevent files like debian/$package.cron.d from being recognized as D source.
2012-08-15 02:07:53 +03:00
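Vendor entries are regular expressions matched against repository paths; the automake/autoconf commit earlier in this log shows the `(^|/)...` form they take. A hedged Ruby check with a hypothetical pattern (the real vendor.yml entry may be written differently) shows why such a rule catches Debian packaging without touching ordinary D sources.

```ruby
# Hypothetical vendor.yml-style pattern; the actual entry may differ.
debian_packaging = Regexp.new('(^|/)debian/')

debian_packaging =~ "debian/mypackage.cron.d" # => 0   (vendored, excluded from stats)
debian_packaging =~ "src/debian_names.d"      # => nil (ordinary D source, still counted)
```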
fkg
286c8a1b4a Added .ccxml, .grxml, .scxml, .vxml to the XML syntax group 2012-08-14 12:00:07 -07:00
Joshua Peek
080cd097ba Merge branch 'brcooley-master' 2012-08-13 18:18:04 -07:00
Joshua Peek
866e446dbe Rebuild samples db 2012-08-13 18:17:47 -07:00
Joshua Peek
897f39083d Rename to magic .script! ext 2012-08-13 18:17:44 -07:00
brc
f8a7d11808 Adding extensionless script to Shell samples 2012-08-13 18:07:28 -07:00
Oxan van Leeuwen
0f006af583 Improve detection for ASP.NET validation jQuery plugins 2012-08-10 01:09:54 +02:00
Oxan van Leeuwen
2bbf92d5f8 Update vendor.yml to include jQuery UI 2012-08-10 01:04:29 +02:00
Renato Golin
da6cf8dbb4 Add ECL programming language and test 2012-07-12 09:09:32 +01:00
Ian M. Jones
a41631d9fa Added REALbasic language. 2012-06-06 23:37:47 +02:00
Johan Sundström
645f4d6194 Added detection for the Pike language:
http://pike.ida.liu.se/
2012-06-06 00:02:47 -07:00
Laurent Parenteau
46cde87c09 Fixed M lexer name. Merged with upstream's latest changes. 2012-05-22 13:43:47 -04:00
Laurent Parenteau
91364a9769 Improved comment. 2012-05-14 09:56:00 -04:00
Laurent Parenteau
23b6b4c499 Use Common Lisp lexer for M syntax highlighting, which gives pretty good results. 2012-04-27 10:09:37 -04:00
Laurent Parenteau
1e34faa920 Improved M detection to be more specific. 2012-03-28 20:30:24 -04:00
Laurent Parenteau
e0190a5a6e Added detection for the new M (aka MUMPS) language. 2012-03-27 11:47:52 -04:00
290 changed files with 108128 additions and 8525 deletions

View File

@@ -3,6 +3,7 @@ rvm:
- 1.8.7
- 1.9.2
- 1.9.3
- 2.0.0
- ree
notifications:
disabled: true

View File

@@ -1,2 +1,7 @@
source :rubygems
source 'https://rubygems.org'
gemspec
if RUBY_VERSION < "1.9.3"
# escape_utils 1.0.0 requires 1.9.3 and above
gem "escape_utils", "0.3.2"
end

View File

@@ -1,4 +1,4 @@
Copyright (c) 2011 GitHub, Inc.
Copyright (c) 2011-2013 GitHub, Inc.
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation

View File

@@ -10,13 +10,16 @@ Linguist defines the list of all languages known to GitHub in a [yaml file](http
Most languages are detected by their file extension. This is the fastest and most common situation.
For disambiguating between files with common extensions, we use a [bayesian classifier](https://github.com/github/linguist/blob/master/lib/linguist/classifier.rb). For an example, this helps us tell the difference between `.h` files which could be either C, C++, or Obj-C.
For disambiguating between files with common extensions, we use a [Bayesian classifier](https://github.com/github/linguist/blob/master/lib/linguist/classifier.rb). For an example, this helps us tell the difference between `.h` files which could be either C, C++, or Obj-C.
In the actual GitHub app we deal with `Grit::Blob` objects. For testing, there is a simple `FileBlob` API.
Linguist::FileBlob.new("lib/linguist.rb").language.name #=> "Ruby"
```ruby
Linguist::FileBlob.new("bin/linguist").language.name #=> "Ruby"
Linguist::FileBlob.new("lib/linguist.rb").language.name #=> "Ruby"
Linguist::FileBlob.new("bin/linguist").language.name #=> "Ruby"
```
See [lib/linguist/language.rb](https://github.com/github/linguist/blob/master/lib/linguist/language.rb) and [lib/linguist/languages.yml](https://github.com/github/linguist/blob/master/lib/linguist/languages.yml).
@@ -24,20 +27,22 @@ See [lib/linguist/language.rb](https://github.com/github/linguist/blob/master/li
The actual syntax highlighting is handled by our Pygments wrapper, [pygments.rb](https://github.com/tmm1/pygments.rb). It also provides a [Lexer abstraction](https://github.com/tmm1/pygments.rb/blob/master/lib/pygments/lexer.rb) that determines which highlighter should be used on a file.
We typically run on a prerelease version of Pygments, [pygments.rb](https://github.com/tmm1/pygments.rb), to get early access to new lexers. The [lexers.yml](https://github.com/github/linguist/blob/master/lib/linguist/lexers.yml) file is a dump of the lexers we have available on our server.
We typically run on a pre-release version of Pygments, [pygments.rb](https://github.com/tmm1/pygments.rb), to get early access to new lexers. The [languages.yml](https://github.com/github/linguist/blob/master/lib/linguist/languages.yml) file is a dump of the lexers we have available on our server.
### Stats
The Language Graph you see on every repository is built by aggregating the languages of all repo's blobs. The top language in the graph determines the project's primary language. Collectively, these stats make up the [Top Languages](https://github.com/languages) page.
The Language Graph you see on every repository is built by aggregating the languages of each file in that repository.
The top language in the graph determines the project's primary language. Collectively, these stats make up the [Top Languages](https://github.com/languages) page.
The repository stats API can be used on a directory:
The repository stats API, accessed through `#languages`, can be used on a directory:
project = Linguist::Repository.from_directory(".")
project.language.name #=> "Ruby"
project.languages #=> { "Ruby" => 0.98,
"Shell" => 0.02 }
```ruby
project = Linguist::Repository.from_directory(".")
project.language.name #=> "Ruby"
project.languages #=> { "Ruby" => 0.98, "Shell" => 0.02 }
```
These stats are also printed out by the binary. Try running `linguist` on itself:
These stats are also printed out by the `linguist` binary. Try running `linguist` on itself:
$ bundle exec linguist lib/
100% Ruby
@@ -46,17 +51,21 @@ These stats are also printed out by the binary. Try running `linguist` on itself
Checking other code into your git repo is a common practice. But this often inflates your project's language stats and may even cause your project to be labeled as another language. We are able to identify some of these files and directories and exclude them.
Linguist::FileBlob.new("vendor/plugins/foo.rb").vendored? # => true
```ruby
Linguist::FileBlob.new("vendor/plugins/foo.rb").vendored? # => true
```
See [Linguist::BlobHelper#vendored?](https://github.com/github/linguist/blob/master/lib/linguist/blob_helper.rb) and [lib/linguist/vendor.yml](https://github.com/github/linguist/blob/master/lib/linguist/vendor.yml).
#### Generated file detection
Not all plain text files are true source files. Generated files like minified js and compiled CoffeeScript can be detected and excluded from language stats. As an extra bonus, these files are suppressed in Diffs.
Not all plain text files are true source files. Generated files like minified js and compiled CoffeeScript can be detected and excluded from language stats. As an extra bonus, these files are suppressed in diffs.
Linguist::FileBlob.new("underscore.min.js").generated? # => true
```ruby
Linguist::FileBlob.new("underscore.min.js").generated? # => true
```
See [Linguist::BlobHelper#generated?](https://github.com/github/linguist/blob/master/lib/linguist/blob_helper.rb).
See [Linguist::Generated#generated?](https://github.com/github/linguist/blob/master/lib/linguist/generated.rb).
## Installation
@@ -74,12 +83,18 @@ To run the tests:
## Contributing
The majority of patches won't need to touch any Ruby code at all. The [master language list](https://github.com/github/linguist/blob/master/lib/linguist/languages.yml) is just a configuration file.
The majority of contributions won't need to touch any Ruby code at all. The [master language list](https://github.com/github/linguist/blob/master/lib/linguist/languages.yml) is just a YAML configuration file.
We try to only add languages once they have some usage on GitHub, so please note in-the-wild usage examples in your pull request.
Almost all bug fixes or new language additions should come with some additional code samples. Just drop them under [`samples/`](https://github.com/github/linguist/tree/master/samples) in the correct subdirectory and our test suite will automatically test them. In most cases you shouldn't need to add any new assertions.
To update the `samples.json` after adding new files to [`samples/`](https://github.com/github/linguist/tree/master/samples):
bundle exec rake samples
### Testing
Sometimes getting the tests running can be to much work especially if you don't have much Ruby experience. Its okay, be lazy and let our build bot [Travis](http://travis-ci.org/#!/github/linguist) run the tests for you. Just open a pull request and the bot will start cranking away.
Sometimes getting the tests running can be too much work, especially if you don't have much Ruby experience. It's okay: be lazy and let our build bot [Travis](http://travis-ci.org/#!/github/linguist) run the tests for you. Just open a pull request and the bot will start cranking away.
Heres our current build status, which is hopefully green: [![Build Status](https://secure.travis-ci.org/github/linguist.png?branch=master)](http://travis-ci.org/github/linguist)
Here's our current build status, which is hopefully green: [![Build Status](https://secure.travis-ci.org/github/linguist.png?branch=master)](http://travis-ci.org/github/linguist)

View File

@@ -1,11 +1,11 @@
require 'rake/clean'
require 'rake/testtask'
require 'yaml'
require 'json'
task :default => :test
Rake::TestTask.new do |t|
t.warning = true
end
Rake::TestTask.new
task :samples do
require 'linguist/samples'
@@ -15,6 +15,13 @@ task :samples do
File.open('lib/linguist/samples.json', 'w') { |io| io.write json }
end
task :build_gem do
languages = YAML.load_file("lib/linguist/languages.yml")
File.write("lib/linguist/languages.json", JSON.dump(languages))
`gem build github-linguist.gemspec`
File.delete("lib/linguist/languages.json")
end
namespace :classifier do
LIMIT = 1_000

View File

@@ -1,5 +1,9 @@
#!/usr/bin/env ruby
# linguist — detect language type for a file, or, given a directory, determine language breakdown
#
# usage: linguist <path>
require 'linguist/file_blob'
require 'linguist/repository'
@@ -8,8 +12,9 @@ path = ARGV[0] || Dir.pwd
if File.directory?(path)
repo = Linguist::Repository.from_directory(path)
repo.languages.sort_by { |_, size| size }.reverse.each do |language, size|
percentage = ((size / repo.size.to_f) * 100).round
puts "%-4s %s" % ["#{percentage}%", language]
percentage = ((size / repo.size.to_f) * 100)
percentage = sprintf '%.2f' % percentage
puts "%-7s %s" % ["#{percentage}%", language]
end
elsif File.file?(path)
blob = Linguist::FileBlob.new(path, Dir.pwd)
@@ -23,7 +28,6 @@ elsif File.file?(path)
puts "#{blob.name}: #{blob.loc} lines (#{blob.sloc} sloc)"
puts " type: #{type}"
puts " extension: #{blob.pathname.extname}"
puts " mime type: #{blob.mime_type}"
puts " language: #{blob.language}"

View File

@@ -1,18 +1,21 @@
Gem::Specification.new do |s|
s.name = 'github-linguist'
s.version = '2.2.1'
s.version = '2.10.5'
s.summary = "GitHub Language detection"
s.authors = "GitHub"
s.authors = "GitHub"
s.homepage = "https://github.com/github/linguist"
s.files = Dir['lib/**/*']
s.executables << 'linguist'
s.add_dependency 'charlock_holmes', '~> 0.6.6'
s.add_dependency 'escape_utils', '~> 0.2.3'
s.add_dependency 'mime-types', '~> 1.18'
s.add_dependency 'pygments.rb', '>= 0.2.13'
s.add_dependency 'escape_utils', '>= 0.3.1'
s.add_dependency 'mime-types', '~> 1.19'
s.add_dependency 'pygments.rb', '~> 0.5.4'
s.add_development_dependency 'json'
s.add_development_dependency 'mocha'
s.add_development_dependency 'rake'
s.add_development_dependency 'yajl-ruby'
end

View File

@@ -1,6 +1,5 @@
require 'linguist/blob_helper'
require 'linguist/generated'
require 'linguist/language'
require 'linguist/mime'
require 'linguist/repository'
require 'linguist/samples'

View File

@@ -1,13 +1,19 @@
require 'linguist/generated'
require 'linguist/language'
require 'linguist/mime'
require 'charlock_holmes'
require 'escape_utils'
require 'mime/types'
require 'pygments'
require 'yaml'
module Linguist
# DEPRECATED Avoid mixing into Blob classes. Prefer functional interfaces
# like `Language.detect` over `Blob#language`. Functions are much easier to
# cache and compose.
#
# Avoid adding additional bloat to this module.
#
# BlobHelper is a mixin for Blobish classes that respond to "name",
# "data" and "size" such as Grit::Blob.
module BlobHelper
@@ -23,6 +29,22 @@ module Linguist
File.extname(name.to_s)
end
# Internal: Lookup mime type for extension.
#
# Returns a MIME::Type
def _mime_type
if defined? @_mime_type
@_mime_type
else
guesses = ::MIME::Types.type_for(extname.to_s)
# Prefer text mime types over binary
@_mime_type = guesses.detect { |type| type.ascii? } ||
# Otherwise use the first guess
guesses.first
end
end
# Public: Get the actual blob mime type
#
# Examples
@@ -32,7 +54,23 @@ module Linguist
#
# Returns a mime type String.
def mime_type
@mime_type ||= Mime.mime_for(extname.to_s)
_mime_type ? _mime_type.to_s : 'text/plain'
end
# Internal: Is the blob binary according to its mime type
#
# Return true or false
def binary_mime_type?
_mime_type ? _mime_type.binary? : false
end
# Internal: Is the blob binary according to its mime type,
# overriding it if we have better data from the languages.yml
# database.
#
# Return true or false
def likely_binary?
binary_mime_type? && !Language.find_by_filename(name)
end
# Public: Get the Content-Type header value
@@ -83,15 +121,6 @@ module Linguist
@detect_encoding ||= CharlockHolmes::EncodingDetector.new.detect(data) if data
end
# Public: Is the blob binary according to its mime type
#
# Return true or false
def binary_mime_type?
if mime_type = Mime.lookup_mime_type_for(extname)
mime_type.binary?
end
end
# Public: Is the blob binary?
#
# Return true or false
@@ -125,7 +154,28 @@ module Linguist
#
# Return true or false
def image?
['.png', '.jpg', '.jpeg', '.gif'].include?(extname)
['.png', '.jpg', '.jpeg', '.gif'].include?(extname.downcase)
end
# Public: Is the blob a supported 3D model format?
#
# Return true or false
def solid?
extname.downcase == '.stl'
end
# Public: Is this blob a CSV file?
#
# Return true or false
def csv?
text? && extname.downcase == '.csv'
end
# Public: Is the blob a PDF?
#
# Return true or false
def pdf?
extname.downcase == '.pdf'
end
MEGABYTE = 1024 * 1024
@@ -139,14 +189,13 @@ module Linguist
# Public: Is the blob safe to colorize?
#
# We use Pygments.rb for syntax highlighting blobs, which
# has some quirks and also is essentially 'un-killable' via
# normal timeout. To workaround this we try to
# carefully handling Pygments.rb anything it can't handle.
# We use Pygments for syntax highlighting blobs. Pygments
# can be too slow for very large blobs or for certain
# corner-case blobs.
#
# Return true or false
def safe_to_colorize?
text? && !large? && !high_ratio_of_long_lines?
!large? && text? && !high_ratio_of_long_lines?
end
# Internal: Does the blob have a ratio of long lines?
@@ -190,7 +239,12 @@ module Linguist
#
# Returns an Array of lines
def lines
@lines ||= (viewable? && data) ? data.split("\n", -1) : []
@lines ||=
if viewable? && data
data.split(/\r\n|\r|\n/, -1)
else
[]
end
end
# Public: Get number of lines of code
@@ -213,7 +267,7 @@ module Linguist
# Public: Is the blob a generated file?
#
# Generated source code is supressed in diffs and is ignored by
# Generated source code is suppressed in diffs and is ignored by
# language statistics.
#
# May load Blob#data
@@ -223,47 +277,21 @@ module Linguist
@_generated ||= Generated.generated?(name, lambda { data })
end
# Public: Should the blob be indexed for searching?
#
# Excluded:
# - Files over 0.1MB
# - Non-text files
# - Langauges marked as not searchable
# - Generated source files
#
# Please add additional test coverage to
# `test/test_blob.rb#test_indexable` if you make any changes.
#
# Return true or false
def indexable?
if binary?
false
elsif extname == '.txt'
true
elsif language.nil?
false
elsif !language.searchable?
false
elsif generated?
false
elsif size > 100 * 1024
false
else
true
end
end
# Public: Detects the Language of the blob.
#
# May load Blob#data
#
# Returns a Language or nil if none is detected
def language
if defined? @language
@language
elsif !binary_mime_type?
@language = Language.detect(name.to_s, lambda { data }, mode)
return @language if defined? @language
if defined?(@data) && @data.is_a?(String)
data = @data
else
data = lambda { (binary_mime_type? || binary?) ? "" : self.data }
end
@language = Language.detect(name.to_s, data, mode)
end
# Internal: Get the lexer of the blob.
@@ -284,19 +312,5 @@ module Linguist
options[:options][:encoding] ||= encoding
lexer.highlight(data, options)
end
# Public: Highlight syntax of blob without the outer highlight div
# wrapper.
#
# options - A Hash of options (defaults to {})
#
# Returns html String
def colorize_without_wrapper(options = {})
if text = colorize(options)
text[%r{<div class="highlight"><pre>(.*?)</pre>\s*</div>}m, 1]
else
''
end
end
end
end

View File

@@ -14,6 +14,9 @@ module Linguist
# Classifier.train(db, 'Ruby', "def hello; end")
#
# Returns nothing.
#
# Set LINGUIST_DEBUG=1 or =2 to see probabilities per-token or
# per-language. See also #dump_all_tokens, below.
def self.train!(db, language, data)
tokens = Tokenizer.tokenize(data)
@@ -40,7 +43,7 @@ module Linguist
# Public: Guess language of data.
#
# db - Hash of classifer tokens database.
# db - Hash of classifier tokens database.
# data - Array of tokens or String data to analyze.
# languages - Array of language name Strings to restrict to.
#
@@ -75,17 +78,19 @@ module Linguist
def classify(tokens, languages)
return [] if tokens.nil?
tokens = Tokenizer.tokenize(tokens) if tokens.is_a?(String)
scores = {}
debug_dump_all_tokens(tokens, languages) if verbosity >= 2
languages.each do |language|
scores[language] = tokens_probability(tokens, language) +
language_probability(language)
debug_dump_probabilities(tokens, language) if verbosity >= 1
scores[language] = tokens_probability(tokens, language) + language_probability(language)
end
scores.sort { |a, b| b[1] <=> a[1] }.map { |score| [score[0], score[1]] }
end
# Internal: Probably of set of tokens in a language occuring - P(D | C)
# Internal: Probably of set of tokens in a language occurring - P(D | C)
#
# tokens - Array of String tokens.
# language - Language to check.
@@ -97,7 +102,7 @@ module Linguist
end
end
# Internal: Probably of token in language occuring - P(F | C)
# Internal: Probably of token in language occurring - P(F | C)
#
# token - String token.
# language - Language to check.
@@ -111,7 +116,7 @@ module Linguist
end
end
# Internal: Probably of a language occuring - P(C)
# Internal: Probably of a language occurring - P(C)
#
# language - Language to check.
#
@@ -119,5 +124,48 @@ module Linguist
def language_probability(language)
Math.log(@languages[language].to_f / @languages_total.to_f)
end
private
def verbosity
@verbosity ||= (ENV['LINGUIST_DEBUG'] || 0).to_i
end
def debug_dump_probabilities
printf("%10s = %10.3f + %7.3f = %10.3f\n",
language, tokens_probability(tokens, language), language_probability(language), scores[language])
end
# Internal: show a table of probabilities for each <token,language> pair.
#
# The number in each table entry is the number of "points" that each
# token contributes toward the belief that the file under test is a
# particular language. Points are additive.
#
# Points are the number of times a token appears in the file, times
# how much more likely (log of probability ratio) that token is to
# appear in one language vs. the least-likely language. Dashes
# indicate the least-likely language (and zero points) for each token.
def debug_dump_all_tokens(tokens, languages)
maxlen = tokens.map { |tok| tok.size }.max
printf "%#{maxlen}s", ""
puts " #" + languages.map { |lang| sprintf("%10s", lang) }.join
token_map = Hash.new(0)
tokens.each { |tok| token_map[tok] += 1 }
token_map.sort.each { |tok, count|
arr = languages.map { |lang| [lang, token_probability(tok, lang)] }
min = arr.map { |a,b| b }.min
minlog = Math.log(min)
if !arr.inject(true) { |result, n| result && n[1] == arr[0][1] }
printf "%#{maxlen}s%5d", tok, count
puts arr.map { |ent|
ent[1] == min ? " -" : sprintf("%10.3f", count * (Math.log(ent[1]) - minlog))
}.join
end
}
end
end
end

View File

@@ -43,7 +43,7 @@ module Linguist
# Internal: Is the blob a generated file?
#
# Generated source code is supressed in diffs and is ignored by
# Generated source code is suppressed in diffs and is ignored by
# language statistics.
#
# Please add additional test coverage to
@@ -52,11 +52,16 @@ module Linguist
# Return true or false
def generated?
name == 'Gemfile.lock' ||
minified_javascript? ||
minified_files? ||
compiled_coffeescript? ||
xcode_project_file? ||
generated_parser? ||
generated_net_docfile? ||
generated_parser?
generated_net_designer_file? ||
generated_protocol_buffer? ||
generated_jni_header? ||
composer_lock? ||
node_modules?
end
# Internal: Is the blob an XCode project file?
@@ -69,16 +74,18 @@ module Linguist
['.xib', '.nib', '.storyboard', '.pbxproj', '.xcworkspacedata', '.xcuserstate'].include?(extname)
end
# Internal: Is the blob minified JS?
# Internal: Is the blob minified files?
#
# Consider JS minified if the average line length is
# greater then 100c.
# Consider a file minified if the average line length is
# greater then 110c.
#
# Currently, only JS and CSS files are detected by this method.
#
# Returns true or false.
def minified_javascript?
return unless extname == '.js'
def minified_files?
return unless ['.js', '.css'].include? extname
if lines.any?
(lines.inject(0) { |n, l| n += l.length } / lines.length) > 100
(lines.inject(0) { |n, l| n += l.length } / lines.length) > 110
else
false
end
@@ -86,7 +93,7 @@ module Linguist
# Internal: Is the blob of JS generated by CoffeeScript?
#
# CoffeScript is meant to output JS that would be difficult to
# CoffeeScript is meant to output JS that would be difficult to
# tell if it was generated or not. Look for a number of patterns
# output by the CS compiler.
#
@@ -142,6 +149,16 @@ module Linguist
lines[-2].include?("</doc>")
end
# Internal: Is this a codegen file for a .NET project?
#
# Visual Studio often uses code generation to generate partial classes, and
# these files can be quite unwieldy. Let's hide them.
#
# Returns true or false
def generated_net_designer_file?
name.downcase =~ /\.designer\.cs$/
end
# Internal: Is the blob of JS a parser generated by PEG.js?
#
# PEG.js-generated parsers are not meant to be consumed by humans.
@@ -158,5 +175,43 @@ module Linguist
false
end
# Internal: Is the blob a C++, Java or Python source file generated by the
# Protocol Buffer compiler?
#
# Returns true or false.
def generated_protocol_buffer?
return false unless ['.py', '.java', '.h', '.cc', '.cpp'].include?(extname)
return false unless lines.count > 1
return lines[0].include?("Generated by the protocol buffer compiler. DO NOT EDIT!")
end
# Internal: Is the blob a C/C++ header generated by the Java JNI tool javah?
#
# Returns true or false.
def generated_jni_header?
return false unless extname == '.h'
return false unless lines.count > 2
return lines[0].include?("/* DO NOT EDIT THIS FILE - it is machine generated */") &&
lines[1].include?("#include <jni.h>")
end
# node_modules/ can contain large amounts of files, in general not meant
# for humans in pull requests.
#
# Returns true or false.
def node_modules?
!!name.match(/node_modules\//)
end
# The PHP Composer tool generates a lock file to represent a specific dependency state.
# In general not meant for humans in pull requests.
#
# Returns true or false.
def composer_lock?
!!name.match(/composer.lock/)
end
end
end
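The new predicates above are all cheap name- or line-based checks. As a rough illustration of the minified-file rule (average line length above 110 characters for .js and .css), here is a standalone sketch, not the library API, that applies the same threshold to a string:

# Sketch (not the library API): the same average-line-length test as minified_files?.
def looks_minified?(path, data, threshold = 110)
  return false unless ['.js', '.css'].include?(File.extname(path))
  lines = data.lines.to_a
  return false if lines.empty?
  (lines.map(&:length).inject(:+) / lines.length) > threshold
end

looks_minified?("app.min.js", "x" * 500 + "\n")   # => true
looks_minified?("app.js", "var x = 1;\n" * 10)    # => false
looks_minified?("style.scss", "x" * 500 + "\n")   # => false (.scss is not one of the checked extensions)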

View File

@@ -1,6 +1,10 @@
require 'escape_utils'
require 'pygments'
require 'yaml'
begin
require 'json'
rescue LoadError
end
require 'linguist/classifier'
require 'linguist/samples'
@@ -15,11 +19,30 @@ module Linguist
@index = {}
@name_index = {}
@alias_index = {}
@extension_index = Hash.new { |h,k| h[k] = [] }
@filename_index = Hash.new { |h,k| h[k] = [] }
@extension_index = Hash.new { |h,k| h[k] = [] }
@interpreter_index = Hash.new { |h,k| h[k] = [] }
@filename_index = Hash.new { |h,k| h[k] = [] }
@primary_extension_index = {}
# Valid Languages types
TYPES = [:data, :markup, :programming]
TYPES = [:data, :markup, :programming, :prose]
# Names of non-programming languages that we will still detect
#
# Returns an array
def self.detectable_markup
["CSS", "Less", "Sass", "Stylus", "TeX"]
end
# Detect languages by a specific type
#
# type - A symbol that exists within TYPES
#
# Returns an array
def self.by_type(type)
all.select { |h| h.type == type }
end
# Internal: Create a new Language object
#
@@ -56,6 +79,16 @@ module Linguist
@extension_index[extension] << language
end
if @primary_extension_index.key?(language.primary_extension)
raise ArgumentError, "Duplicate primary extension: #{language.primary_extension}"
end
@primary_extension_index[language.primary_extension] = language
language.interpreters.each do |interpreter|
@interpreter_index[interpreter] << language
end
language.filenames.each do |filename|
@filename_index[filename] << language
end
@@ -73,7 +106,7 @@ module Linguist
#
# Returns Language or nil.
def self.detect(name, data, mode = nil)
# A bit of an elegant hack. If the file is exectable but extensionless,
# A bit of an elegant hack. If the file is executable but extensionless,
# append a "magic" extension so it can be classified with other
# languages that have shebang scripts.
if File.extname(name).empty? && mode && (mode.to_i(8) & 05) == 05
@@ -84,8 +117,13 @@ module Linguist
if possible_languages.length > 1
data = data.call() if data.respond_to?(:call)
if result = Classifier.classify(Samples::DATA, data, possible_languages.map(&:name)).first
Language[result[0]]
if data.nil? || data == ""
nil
elsif (result = find_by_shebang(data)) && !result.empty?
result.first
elsif classified = Classifier.classify(Samples::DATA, data, possible_languages.map(&:name)).first
Language[classified[0]]
end
else
possible_languages.first
@@ -139,7 +177,24 @@ module Linguist
# Returns all matching Languages or [] if none were found.
def self.find_by_filename(filename)
basename, extname = File.basename(filename), File.extname(filename)
@filename_index[basename] + @extension_index[extname]
langs = [@primary_extension_index[extname]] +
@filename_index[basename] +
@extension_index[extname]
langs.compact.uniq
end
# Public: Look up Languages by shebang line.
#
# data - Array of tokens or String data to analyze.
#
# Examples
#
# Language.find_by_shebang("#!/bin/bash\ndate;")
# # => [#<Language name="Bash">]
#
# Returns the matching Language
def self.find_by_shebang(data)
@interpreter_index[Linguist.interpreter_from_shebang(data)]
end
# Public: Look up Language by its name or lexer.
@@ -220,12 +275,14 @@ module Linguist
raise(ArgumentError, "#{@name} is missing lexer")
@ace_mode = attributes[:ace_mode]
@wrap = attributes[:wrap] || false
# Set legacy search term
@search_term = attributes[:search_term] || default_alias_name
# Set extensions or default to [].
@extensions = attributes[:extensions] || []
@interpreters = attributes[:interpreters] || []
@filenames = attributes[:filenames] || []
unless @primary_extension = attributes[:primary_extension]
@@ -310,6 +367,11 @@ module Linguist
# Returns a String name or nil
attr_reader :ace_mode
# Public: Should language lines be wrapped
#
# Returns true or false
attr_reader :wrap
# Public: Get extensions
#
# Examples
@@ -321,7 +383,7 @@ module Linguist
# Deprecated: Get primary extension
#
# Defaults to the first extension but can be overriden
# Defaults to the first extension but can be overridden
# in the languages.yml.
#
# The primary extension can not be nil. Tests should verify this.
@@ -333,6 +395,15 @@ module Linguist
# Returns the extension String.
attr_reader :primary_extension
# Public: Get interpreters
#
# Examples
#
# # => ['awk', 'gawk', 'mawk' ...]
#
# Returns the interpreters Array
attr_reader :interpreters
# Public: Get filenames
#
# Examples
@@ -426,19 +497,40 @@ module Linguist
end
extensions = Samples::DATA['extnames']
interpreters = Samples::DATA['interpreters']
filenames = Samples::DATA['filenames']
popular = YAML.load_file(File.expand_path("../popular.yml", __FILE__))
YAML.load_file(File.expand_path("../languages.yml", __FILE__)).each do |name, options|
languages_yml = File.expand_path("../languages.yml", __FILE__)
languages_json = File.expand_path("../languages.json", __FILE__)
if File.exist?(languages_json) && defined?(JSON)
languages = JSON.load(File.read(languages_json))
else
languages = YAML.load_file(languages_yml)
end
languages.each do |name, options|
options['extensions'] ||= []
options['interpreters'] ||= []
options['filenames'] ||= []
if extnames = extensions[name]
extnames.each do |extname|
if !options['extensions'].include?(extname)
options['extensions'] << extname
else
warn "#{name} #{extname.inspect} is already defined in samples/. Remove from languages.yml."
end
end
end
if interpreters == nil
interpreters = {}
end
if interpreter_names = interpreters[name]
interpreter_names.each do |interpreter|
if !options['interpreters'].include?(interpreter)
options['interpreters'] << interpreter
end
end
end
@@ -447,8 +539,6 @@ module Linguist
fns.each do |filename|
if !options['filenames'].include?(filename)
options['filenames'] << filename
else
warn "#{name} #{filename.inspect} is already defined in samples/. Remove from languages.yml."
end
end
end
@@ -460,10 +550,12 @@ module Linguist
:aliases => options['aliases'],
:lexer => options['lexer'],
:ace_mode => options['ace_mode'],
:wrap => options['wrap'],
:group_name => options['group'],
:searchable => options.key?('searchable') ? options['searchable'] : true,
:search_term => options['search_term'],
:extensions => options['extensions'].sort,
:interpreters => options['interpreters'].sort,
:primary_extension => options['primary_extension'],
:filenames => options['filenames'],
:popular => popular.include?(name)
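With the interpreter index in place, detection now prefers an exact shebang match before falling back to the Bayesian classifier. A hedged usage sketch — the file names, contents, and expected results below are invented for illustration; Language.detect and Language.find_by_shebang are the methods shown in the diff above:

require 'linguist'

# Shebang wins for extensionless executables:
Linguist::Language.find_by_shebang("#!/usr/bin/env python\nprint 'hi'\n")
# => [#<Language name="Python">]   (assuming a Python interpreter entry exists in the index)

# Ambiguous extensions fall through to the classifier:
Linguist::Language.detect("parser.pl", "use strict;\nmy $x = 1;\n")
# => #<Language name="Perl">       (illustrative result, not guaranteed)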

File diff suppressed because it is too large Load Diff

View File

@@ -4,7 +4,7 @@ module Linguist
module MD5
# Public: Create deep nested digest of value object.
#
# Useful for object comparsion.
# Useful for object comparison.
#
# obj - Object to digest.
#

View File

@@ -1,91 +0,0 @@
require 'mime/types'
require 'yaml'
class MIME::Type
attr_accessor :override
end
# Register additional mime type extensions
#
# Follows same format as mime-types data file
# https://github.com/halostatue/mime-types/blob/master/lib/mime/types.rb.data
File.read(File.expand_path("../mimes.yml", __FILE__)).lines.each do |line|
# Regexp was cargo culted from mime-types lib
next unless line =~ %r{^
#{MIME::Type::MEDIA_TYPE_RE}
(?:\s@([^\s]+))?
(?:\s:(#{MIME::Type::ENCODING_RE}))?
}x
mediatype = $1
subtype = $2
extensions = $3
encoding = $4
# Lookup existing mime type
mime_type = MIME::Types["#{mediatype}/#{subtype}"].first ||
# Or create a new instance
MIME::Type.new("#{mediatype}/#{subtype}")
if extensions
extensions.split(/,/).each do |extension|
mime_type.extensions << extension
end
end
if encoding
mime_type.encoding = encoding
end
mime_type.override = true
# Kind of hacky, but we need to reindex the mime type after making changes
MIME::Types.add_type_variant(mime_type)
MIME::Types.index_extensions(mime_type)
end
module Linguist
module Mime
# Internal: Look up mime type for extension.
#
# ext - The extension String. May include leading "."
#
# Examples
#
# Mime.mime_for('.html')
# # => 'text/html'
#
# Mime.mime_for('txt')
# # => 'text/plain'
#
# Return mime type String otherwise falls back to 'text/plain'.
def self.mime_for(ext)
mime_type = lookup_mime_type_for(ext)
mime_type ? mime_type.to_s : 'text/plain'
end
# Internal: Lookup mime type for extension or mime type
#
# ext_or_mime_type - A file extension ".txt" or mime type "text/plain".
#
# Returns a MIME::Type
def self.lookup_mime_type_for(ext_or_mime_type)
ext_or_mime_type ||= ''
if ext_or_mime_type =~ /\w+\/\w+/
guesses = ::MIME::Types[ext_or_mime_type]
else
guesses = ::MIME::Types.type_for(ext_or_mime_type)
end
# Use custom override first
guesses.detect { |type| type.override } ||
# Prefer text mime types over binary
guesses.detect { |type| type.ascii? } ||
# Otherwise use the first guess
guesses.first
end
end
end

View File

@@ -1,62 +0,0 @@
# Additional types to add to MIME::Types
#
# MIME types are used to set the Content-Type of raw binary blobs. All text
# blobs are served as text/plain regardless of their type to ensure they
# open in the browser rather than downloading.
#
# The encoding helps determine whether a file should be treated as plain
# text or binary. By default, a mime type's encoding is base64 (binary).
# These types will show a "View Raw" link. To force a type to render as
# plain text, set it to 8bit for UTF-8. text/* types will be treated as
# text by default.
#
# <type> @<extensions> :<encoding>
#
# type - mediatype/subtype
# extensions - comma separated extension list
# encoding - base64 (binary), 7bit (ASCII), 8bit (UTF-8), or
# quoted-printable (Printable ASCII).
#
# Follows same format as mime-types data file
# https://github.com/halostatue/mime-types/blob/master/lib/mime/types.rb.data
#
# Any additions or modifications (even trivial) should have corresponding
# test change in `test/test_mime.rb`.
# TODO: Lookup actual types
application/octet-stream @a,blend,gem,graffle,ipa,lib,mcz,nib,o,ogv,otf,pfx,pigx,plgx,psd,sib,spl,sqlite3,swc,ucode,xpi
# Please keep this list alphabetized
application/java-archive @ear,war
application/netcdf :8bit
application/ogg @ogg
application/postscript :base64
application/vnd.adobe.air-application-installer-package+zip @air
application/vnd.mozilla.xul+xml :8bit
application/vnd.oasis.opendocument.presentation @odp
application/vnd.oasis.opendocument.spreadsheet @ods
application/vnd.oasis.opendocument.text @odt
application/vnd.openofficeorg.extension @oxt
application/vnd.openxmlformats-officedocument.presentationml.presentation @pptx
application/x-chrome-extension @crx
application/x-iwork-keynote-sffkey @key
application/x-iwork-numbers-sffnumbers @numbers
application/x-iwork-pages-sffpages @pages
application/x-ms-xbap @xbap :8bit
application/x-parrot-bytecode @pbc
application/x-shockwave-flash @swf
application/x-silverlight-app @xap
application/x-supercollider @sc :8bit
application/x-troff-ms :8bit
application/x-wais-source :8bit
application/xaml+xml @xaml :8bit
application/xslt+xml @xslt :8bit
image/x-icns @icns
text/cache-manifest @manifest
text/plain @cu,cxx
text/x-logtalk @lgt
text/x-nemerle @n
text/x-nimrod @nim
text/x-ocaml @ml,mli,mll,mly,sig,sml
text/x-rust @rs,rc
text/x-scheme @rkt,scm,sls,sps,ss

View File

@@ -8,6 +8,8 @@
- C#
- C++
- CSS
- Clojure
- CoffeeScript
- Common Lisp
- Diff
- Emacs Lisp
@@ -25,5 +27,3 @@
- SQL
- Scala
- Scheme
- TeX
- XML

View File

@@ -67,14 +67,14 @@ module Linguist
return if @computed_stats
@enum.each do |blob|
# Skip binary file extensions
next if blob.binary_mime_type?
# Skip files that are likely binary
next if blob.likely_binary?
# Skip vendored or generated blobs
next if blob.vendored? || blob.generated? || blob.language.nil?
# Only include programming languages
if blob.language.type == :programming
# Only include programming languages and acceptable markup languages
if blob.language.type == :programming || Language.detectable_markup.include?(blob.language.name)
@sizes[blob.language.group] += blob.size
end
end
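The stats loop now counts the acceptable markup languages (CSS, Less, Sass, Stylus, TeX) alongside programming languages. A small sketch of the inclusion predicate alone, applied to a stand-in blob-like object rather than a real Blob; the language names looked up here are assumptions about entries in languages.yml:

require 'linguist'

# Sketch: the same inclusion test as above, using a minimal stand-in for a blob.
Candidate = Struct.new(:language)

def counted_in_stats?(blob)
  lang = blob.language
  return false if lang.nil?
  lang.type == :programming || Linguist::Language.detectable_markup.include?(lang.name)
end

counted_in_stats?(Candidate.new(Linguist::Language["CSS"]))   # => true  (listed in detectable_markup)
counted_in_stats?(Candidate.new(Linguist::Language["JSON"]))  # => false (a :data language, assuming its languages.yml entry)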

File diff suppressed because it is too large Load Diff

View File

@@ -1,4 +1,8 @@
require 'yaml'
begin
require 'json'
rescue LoadError
require 'yaml'
end
require 'linguist/md5'
require 'linguist/classifier'
@@ -14,7 +18,8 @@ module Linguist
# Hash of serialized samples object
if File.exist?(PATH)
DATA = YAML.load_file(PATH)
serializer = defined?(JSON) ? JSON : YAML
DATA = serializer.load(File.read(PATH))
end
# Public: Iterate over each sample.
@@ -52,6 +57,7 @@ module Linguist
yield({
:path => File.join(dirname, filename),
:language => category,
:interpreter => File.exist?(filename) ? Linguist.interpreter_from_shebang(File.read(filename)) : nil,
:extname => File.extname(filename)
})
end
@@ -67,6 +73,7 @@ module Linguist
def self.data
db = {}
db['extnames'] = {}
db['interpreters'] = {}
db['filenames'] = {}
each do |sample|
@@ -76,12 +83,22 @@ module Linguist
db['extnames'][language_name] ||= []
if !db['extnames'][language_name].include?(sample[:extname])
db['extnames'][language_name] << sample[:extname]
db['extnames'][language_name].sort!
end
end
if sample[:interpreter]
db['interpreters'][language_name] ||= []
if !db['interpreters'][language_name].include?(sample[:interpreter])
db['interpreters'][language_name] << sample[:interpreter]
db['interpreters'][language_name].sort!
end
end
if sample[:filename]
db['filenames'][language_name] ||= []
db['filenames'][language_name] << sample[:filename]
db['filenames'][language_name].sort!
end
data = File.read(sample[:path])
@@ -93,4 +110,40 @@ module Linguist
db
end
end
# Used to retrieve the interpreter from the shebang line of a file's
# data.
def self.interpreter_from_shebang(data)
lines = data.lines.to_a
if lines.any? && (match = lines[0].match(/(.+)\n?/)) && (bang = match[0]) =~ /^#!/
bang.sub!(/^#! /, '#!')
tokens = bang.split(' ')
pieces = tokens.first.split('/')
if pieces.size > 1
script = pieces.last
else
script = pieces.first.sub('#!', '')
end
script = script == 'env' ? tokens[1] : script
# "python2.6" -> "python"
if script =~ /((?:\d+\.?)+)/
script.sub! $1, ''
end
# Check for multiline shebang hacks that call `exec`
if script == 'sh' &&
lines[0...5].any? { |l| l.match(/exec (\w+).+\$0.+\$@/) }
script = $1
end
script
else
nil
end
end
end
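interpreter_from_shebang normalizes env-style and versioned shebangs before the interpreter index is consulted. A few illustrative calls; the expected return values are what the code above should produce for these made-up inputs:

require 'linguist'

Linguist.interpreter_from_shebang("#!/usr/bin/env python2.6\nprint 'hi'\n")  # => "python"
Linguist.interpreter_from_shebang("#!/bin/bash\ndate;\n")                    # => "bash"
Linguist.interpreter_from_shebang("no shebang here\n")                       # => nil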

View File

@@ -16,21 +16,28 @@ module Linguist
new.extract_tokens(data)
end
# Read up to 100KB
BYTE_LIMIT = 100_000
# Start state on token, ignore anything till the next newline
SINGLE_LINE_COMMENTS = [
'//', # C
'#', # Ruby
'%', # Tex
]
# Start state on opening token, ignore anything until the closing
# token is reached.
MULTI_LINE_COMMENTS = [
['/*', '*/'], # C
['<!--', '-->'], # XML
['{-', '-}'], # Haskell
['(*', '*)'] # Coq
['(*', '*)'], # Coq
['"""', '"""'] # Python
]
START_SINGLE_LINE_COMMENT = Regexp.compile(SINGLE_LINE_COMMENTS.map { |c|
"^\s*#{Regexp.escape(c)} "
"\s*#{Regexp.escape(c)} "
}.join("|"))
START_MULTI_LINE_COMMENT = Regexp.compile(MULTI_LINE_COMMENTS.map { |c|
@@ -52,22 +59,24 @@ module Linguist
tokens = []
until s.eos?
break if s.pos >= BYTE_LIMIT
if token = s.scan(/^#!.+$/)
if name = extract_shebang(token)
tokens << "SHEBANG#!#{name}"
end
# Single line comment
elsif token = s.scan(START_SINGLE_LINE_COMMENT)
tokens << token.strip
elsif s.beginning_of_line? && token = s.scan(START_SINGLE_LINE_COMMENT)
# tokens << token.strip
s.skip_until(/\n|\Z/)
# Multiline comments
elsif token = s.scan(START_MULTI_LINE_COMMENT)
tokens << token
# tokens << token
close_token = MULTI_LINE_COMMENTS.assoc(token)[1]
s.skip_until(Regexp.compile(Regexp.escape(close_token)))
tokens << close_token
# tokens << close_token
# Skip single or double quoted strings
elsif s.scan(/"/)
@@ -130,7 +139,7 @@ module Linguist
s.scan(/\s+/)
script = s.scan(/\S+/)
end
script = script[/[^\d]+/, 0]
script = script[/[^\d]+/, 0] if script
return script
end
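With comments now skipped rather than emitted, the token stream is limited to code-significant tokens plus a SHEBANG marker. A short, hedged example of what Tokenizer.tokenize might return under these rules — the exact output depends on the full extract_tokens body, which this diff only partially shows:

require 'linguist/tokenizer'

src = "#!/usr/bin/env ruby\n# this comment is now skipped\nputs 'hi'\n"
Linguist::Tokenizer.tokenize(src)
# => ["SHEBANG#!ruby", "puts"]   (illustrative; quoted string contents are skipped as well)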

View File

@@ -12,23 +12,43 @@
# Caches
- cache/
# Dependencies
- ^[Dd]ependencies/
# C deps
# https://github.com/joyent/node
- ^deps/
- ^tools/
- (^|/)configure$
- (^|/)configure.ac$
- (^|/)config.guess$
- (^|/)config.sub$
# Node depedencies
# Node dependencies
- node_modules/
# Vendored depedencies
- vendor/
# Erlang bundles
- ^rebar$
# Bootstrap minified css and js
- (^|/)bootstrap([^.]*)(\.min)\.(js|css)$
# Vendored dependencies
- thirdparty/
- vendors?/
# Debian packaging
- ^debian/
## Commonly Bundled JavaScript frameworks ##
# jQuery
- (^|/)jquery([^.]*)(\.min)?\.js$
- (^|/)jquery\-\d\.\d(\.\d)?(\.min)?\.js$
- (^|/)jquery\-\d\.\d+(\.\d+)?(\.min)?\.js$
# jQuery UI
- (^|/)jquery\-ui(\-\d\.\d+(\.\d+)?)?(\.\w+)?(\.min)?\.(js|css)$
- (^|/)jquery\.(ui|effects)\.([^.]*)(\.min)?\.(js|css)$
# Prototype
- (^|/)prototype(.*)\.js$
@@ -49,10 +69,6 @@
- (^|/)yahoo-([^.]*)\.js$
- (^|/)yui([^.]*)\.js$
# LESS css
- (^|/)less([^.]*)(\.min)?\.js$
- (^|/)less\-\d+\.\d+\.\d+(\.min)?\.js$
# WYS editors
- (^|/)ckeditor\.js$
- (^|/)tiny_mce([^.]*)\.js$
@@ -61,14 +77,24 @@
# MathJax
- (^|/)MathJax/
# SyntaxHighlighter - http://alexgorbatchev.com/
- (^|/)shBrush([^.]*)\.js$
- (^|/)shCore\.js$
- (^|/)shLegacy\.js$
## Python ##
# django
- (^|/)admin_media/
# Fabric
- ^fabfile\.py$
# WAF
- ^waf$
# .osx
- ^.osx$
## Obj-C ##
@@ -81,7 +107,8 @@
- -vsdoc\.js$
# jQuery validation plugin (MS bundles this with asp.net mvc)
- (^|/)jquery([^.]*)\.validate(\.min)?\.js$
- (^|/)jquery([^.]*)\.validate(\.unobtrusive)?(\.min)?\.js$
- (^|/)jquery([^.]*)\.unobtrusive\-ajax(\.min)?\.js$
# Microsoft Ajax
- (^|/)[Mm]icrosoft([Mm]vc)?([Aa]jax|[Vv]alidation)(\.debug)?\.js$
@@ -90,7 +117,44 @@
- ^[Pp]ackages/
# ExtJS
- (^|/)extjs/
- (^|/)extjs/.*?\.js$
- (^|/)extjs/.*?\.xml$
- (^|/)extjs/.*?\.txt$
- (^|/)extjs/.*?\.html$
- (^|/)extjs/.*?\.properties$
- (^|/)extjs/.sencha/
- (^|/)extjs/docs/
- (^|/)extjs/builds/
- (^|/)extjs/cmd/
- (^|/)extjs/examples/
- (^|/)extjs/locale/
- (^|/)extjs/packages/
- (^|/)extjs/plugins/
- (^|/)extjs/resources/
- (^|/)extjs/src/
- (^|/)extjs/welcome/
# Samples folders
- ^[Ss]amples/
# LICENSE, README, git config files
- ^COPYING$
- LICENSE$
- gitattributes$
- gitignore$
- gitmodules$
- ^README$
- ^readme$
# Test fixtures
- ^[Tt]est/fixtures/
# PhoneGap/Cordova
- (^|/)cordova([^.]*)(\.min)?\.js$
- (^|/)cordova\-\d\.\d(\.\d)?(\.min)?\.js$
# Vagrant
- ^Vagrantfile$
# .DS_Store's
- .[Dd][Ss]_[Ss]tore$

View File

@@ -0,0 +1,219 @@
*/**
* The MIT License (MIT)
* Copyright (c) 2012 René van Mil
*
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
*----------------------------------------------------------------------*
* CLASS CL_CSV_PARSER DEFINITION
*----------------------------------------------------------------------*
*
*----------------------------------------------------------------------*
class cl_csv_parser definition
public
inheriting from cl_object
final
create public .
public section.
*"* public components of class CL_CSV_PARSER
*"* do not include other source files here!!!
type-pools abap .
methods constructor
importing
!delegate type ref to if_csv_parser_delegate
!csvstring type string
!separator type c
!skip_first_line type abap_bool .
methods parse
raising
cx_csv_parse_error .
protected section.
*"* protected components of class CL_CSV_PARSER
*"* do not include other source files here!!!
private section.
*"* private components of class CL_CSV_PARSER
*"* do not include other source files here!!!
constants _textindicator type c value '"'. "#EC NOTEXT
data _delegate type ref to if_csv_parser_delegate .
data _csvstring type string .
data _separator type c .
type-pools abap .
data _skip_first_line type abap_bool .
methods _lines
returning
value(returning) type stringtab .
methods _parse_line
importing
!line type string
returning
value(returning) type stringtab
raising
cx_csv_parse_error .
endclass. "CL_CSV_PARSER DEFINITION
*----------------------------------------------------------------------*
* CLASS CL_CSV_PARSER IMPLEMENTATION
*----------------------------------------------------------------------*
*
*----------------------------------------------------------------------*
class cl_csv_parser implementation.
* <SIGNATURE>---------------------------------------------------------------------------------------+
* | Instance Public Method CL_CSV_PARSER->CONSTRUCTOR
* +-------------------------------------------------------------------------------------------------+
* | [--->] DELEGATE TYPE REF TO IF_CSV_PARSER_DELEGATE
* | [--->] CSVSTRING TYPE STRING
* | [--->] SEPARATOR TYPE C
* | [--->] SKIP_FIRST_LINE TYPE ABAP_BOOL
* +--------------------------------------------------------------------------------------</SIGNATURE>
method constructor.
super->constructor( ).
_delegate = delegate.
_csvstring = csvstring.
_separator = separator.
_skip_first_line = skip_first_line.
endmethod. "constructor
* <SIGNATURE>---------------------------------------------------------------------------------------+
* | Instance Public Method CL_CSV_PARSER->PARSE
* +-------------------------------------------------------------------------------------------------+
* | [!CX!] CX_CSV_PARSE_ERROR
* +--------------------------------------------------------------------------------------</SIGNATURE>
method parse.
data msg type string.
if _csvstring is initial.
message e002(csv) into msg.
raise exception type cx_csv_parse_error
exporting
message = msg.
endif.
" Get the lines
data is_first_line type abap_bool value abap_true.
data lines type standard table of string.
lines = _lines( ).
field-symbols <line> type string.
loop at lines assigning <line>.
" Should we skip the first line?
if _skip_first_line = abap_true and is_first_line = abap_true.
is_first_line = abap_false.
continue.
endif.
" Parse the line
data values type standard table of string.
values = _parse_line( <line> ).
" Send values to delegate
_delegate->values_found( values ).
endloop.
endmethod. "parse
* <SIGNATURE>---------------------------------------------------------------------------------------+
* | Instance Private Method CL_CSV_PARSER->_LINES
* +-------------------------------------------------------------------------------------------------+
* | [<-()] RETURNING TYPE STRINGTAB
* +--------------------------------------------------------------------------------------</SIGNATURE>
method _lines.
split _csvstring at cl_abap_char_utilities=>cr_lf into table returning.
endmethod. "_lines
* <SIGNATURE>---------------------------------------------------------------------------------------+
* | Instance Private Method CL_CSV_PARSER->_PARSE_LINE
* +-------------------------------------------------------------------------------------------------+
* | [--->] LINE TYPE STRING
* | [<-()] RETURNING TYPE STRINGTAB
* | [!CX!] CX_CSV_PARSE_ERROR
* +--------------------------------------------------------------------------------------</SIGNATURE>
method _parse_line.
data msg type string.
data csvvalue type string.
data csvvalues type standard table of string.
data char type c.
data pos type i value 0.
data len type i.
len = strlen( line ).
while pos < len.
char = line+pos(1).
if char <> _separator.
if char = _textindicator.
data text_ended type abap_bool.
text_ended = abap_false.
while text_ended = abap_false.
pos = pos + 1.
if pos < len.
char = line+pos(1).
if char = _textindicator.
text_ended = abap_true.
else.
if char is initial. " Space
concatenate csvvalue ` ` into csvvalue.
else.
concatenate csvvalue char into csvvalue.
endif.
endif.
else.
" Reached the end of the line while inside a text value
" This indicates an error in the CSV formatting
text_ended = abap_true.
message e003(csv) into msg.
raise exception type cx_csv_parse_error
exporting
message = msg.
endif.
endwhile.
" Check if next character is a separator, otherwise the CSV formatting is incorrect
data nextpos type i.
nextpos = pos + 1.
if nextpos < len and line+nextpos(1) <> _separator.
message e003(csv) into msg.
raise exception type cx_csv_parse_error
exporting
message = msg.
endif.
else.
if char is initial. " Space
concatenate csvvalue ` ` into csvvalue.
else.
concatenate csvvalue char into csvvalue.
endif.
endif.
else.
append csvvalue to csvvalues.
clear csvvalue.
endif.
pos = pos + 1.
endwhile.
append csvvalue to csvvalues. " Don't forget the last value
returning = csvvalues.
endmethod. "_parse_line
endclass. "CL_CSV_PARSER IMPLEMENTATION

39
samples/Agda/NatCat.agda Normal file
View File

@@ -0,0 +1,39 @@
module NatCat where
open import Relation.Binary.PropositionalEquality
-- If you can show that a relation only ever has one inhabitant
-- you get the category laws for free
module
EasyCategory
(obj : Set)
(_⟶_ : obj → obj → Set)
(_∘_ : ∀ {x y z} → x ⟶ y → y ⟶ z → x ⟶ z)
(id : ∀ x → x ⟶ x)
(single-inhabitant : (x y : obj) (r s : x ⟶ y) → r ≡ s)
where
idʳ : ∀ x y (r : x ⟶ y) → r ∘ id y ≡ r
idʳ x y r = single-inhabitant x y (r ∘ id y) r
idˡ : ∀ x y (r : x ⟶ y) → id x ∘ r ≡ r
idˡ x y r = single-inhabitant x y (id x ∘ r) r
∘-assoc : ∀ w x y z (r : w ⟶ x) (s : x ⟶ y) (t : y ⟶ z) → (r ∘ s) ∘ t ≡ r ∘ (s ∘ t)
∘-assoc w x y z r s t = single-inhabitant w z ((r ∘ s) ∘ t) (r ∘ (s ∘ t))
open import Data.Nat
same : (x y : ℕ) (r s : x ≤ y) → r ≡ s
same .0 y z≤n z≤n = refl
same .(suc m) .(suc n) (s≤s {m} {n} r) (s≤s s) = cong s≤s (same m n r s)
≤-trans : ∀ x y z → x ≤ y → y ≤ z → x ≤ z
≤-trans .0 y z z≤n s = z≤n
≤-trans .(suc m) .(suc n) .(suc n₁) (s≤s {m} {n} r) (s≤s {.n} {n₁} s) = s≤s (≤-trans m n n₁ r s)
≤-refl : ∀ x → x ≤ x
≤-refl zero = z≤n
≤-refl (suc x) = s≤s (≤-refl x)
module Nat-EasyCategory = EasyCategory ℕ _≤_ (λ {x}{y}{z} → ≤-trans x y z) ≤-refl same

View File

@@ -0,0 +1,26 @@
ServerSignature Off
RewriteCond %{REQUEST_METHOD} ^(HEAD|TRACE|DELETE|TRACK) [NC,OR]
RewriteCond %{THE_REQUEST} (\\r|\\n|%0A|%0D) [NC,OR]
RewriteCond %{HTTP_REFERER} (<|>||%0A|%0D|%27|%3C|%3E|%00) [NC,OR]
RewriteCond %{HTTP_COOKIE} (<|>||%0A|%0D|%27|%3C|%3E|%00) [NC,OR]
RewriteCond %{REQUEST_URI} ^/(,|;|:|<|>|”>|”<|/|\\\.\.\\).{0,9999} [NC,OR]
RewriteCond %{HTTP_USER_AGENT} ^$ [OR]
RewriteCond %{HTTP_USER_AGENT} ^(java|curl|wget) [NC,OR]
RewriteCond %{HTTP_USER_AGENT} (winhttp|HTTrack|clshttp|archiver|loader|email|harvest|extract|grab|miner) [NC,OR]
RewriteCond %{HTTP_USER_AGENT} (libwww-perl|curl|wget|python|nikto|scan) [NC,OR]
RewriteCond %{HTTP_USER_AGENT} (<|>||%0A|%0D|%27|%3C|%3E|%00) [NC,OR]
#Block mySQL injects
RewriteCond %{QUERY_STRING} (;|<|>||”|\)|%0A|%0D|%22|%27|%3C|%3E|%00).*(/\*|union|select|insert|cast|set|declare|drop|update|md5|benchmark) [NC,OR]
RewriteCond %{QUERY_STRING} \.\./\.\. [OR]
RewriteCond %{QUERY_STRING} (localhost|loopback|127\.0\.0\.1) [NC,OR]
RewriteCond %{QUERY_STRING} \.[a-z0-9] [NC,OR]
RewriteCond %{QUERY_STRING} (<|>||%0A|%0D|%27|%3C|%3E|%00) [NC]
# Note: The final RewriteCond must NOT use the [OR] flag.
# Return 403 Forbidden error.
RewriteRule .* index.php [F]

View File

@@ -0,0 +1,470 @@
# This is the main Apache HTTP server configuration file. It contains the
# configuration directives that give the server its instructions.
# See <URL:http://httpd.apache.org/docs/2.2> for detailed information.
# In particular, see
# <URL:http://httpd.apache.org/docs/2.2/mod/directives.html>
# for a discussion of each configuration directive.
#
# Do NOT simply read the instructions in here without understanding
# what they do. They're here only as hints or reminders. If you are unsure
# consult the online docs. You have been warned.
#
# Configuration and logfile names: If the filenames you specify for many
# of the server's control files begin with "/" (or "drive:/" for Win32), the
# server will use that explicit path. If the filenames do *not* begin
# with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log"
# with ServerRoot set to "" will be interpreted by the
# server as "//var/log/apache2/foo.log".
#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# Do not add a slash at the end of the directory path. If you point
# ServerRoot at a non-local disk, be sure to point the LockFile directive
# at a local disk. If you wish to share the same ServerRoot for multiple
# httpd daemons, you will need to change at least LockFile and PidFile.
#
ServerRoot ""
#
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
#
# Change this to Listen on specific IP addresses as shown below to
# prevent Apache from glomming onto all bound IP addresses.
#
#Listen 12.34.56.78:80
Listen 80
#
# Dynamic Shared Object (DSO) Support
#
# To be able to use the functionality of a module which was built as a DSO you
# have to place corresponding `LoadModule' lines at this location so the
# directives contained in it are actually available _before_ they are used.
# Statically compiled modules (those listed by `httpd -l') do not need
# to be loaded here.
#
# Example:
# LoadModule foo_module modules/mod_foo.so
#
LoadModule authn_file_module /usr/lib/apache2/modules/mod_authn_file.so
LoadModule authn_dbm_module /usr/lib/apache2/modules/mod_authn_dbm.so
LoadModule authn_anon_module /usr/lib/apache2/modules/mod_authn_anon.so
LoadModule authn_dbd_module /usr/lib/apache2/modules/mod_authn_dbd.so
LoadModule authn_default_module /usr/lib/apache2/modules/mod_authn_default.so
LoadModule authn_alias_module /usr/lib/apache2/modules/mod_authn_alias.so
LoadModule authz_host_module /usr/lib/apache2/modules/mod_authz_host.so
LoadModule authz_groupfile_module /usr/lib/apache2/modules/mod_authz_groupfile.so
LoadModule authz_user_module /usr/lib/apache2/modules/mod_authz_user.so
LoadModule authz_dbm_module /usr/lib/apache2/modules/mod_authz_dbm.so
LoadModule authz_owner_module /usr/lib/apache2/modules/mod_authz_owner.so
LoadModule authnz_ldap_module /usr/lib/apache2/modules/mod_authnz_ldap.so
LoadModule authz_default_module /usr/lib/apache2/modules/mod_authz_default.so
LoadModule auth_basic_module /usr/lib/apache2/modules/mod_auth_basic.so
LoadModule auth_digest_module /usr/lib/apache2/modules/mod_auth_digest.so
LoadModule file_cache_module /usr/lib/apache2/modules/mod_file_cache.so
LoadModule cache_module /usr/lib/apache2/modules/mod_cache.so
LoadModule disk_cache_module /usr/lib/apache2/modules/mod_disk_cache.so
LoadModule mem_cache_module /usr/lib/apache2/modules/mod_mem_cache.so
LoadModule dbd_module /usr/lib/apache2/modules/mod_dbd.so
LoadModule dumpio_module /usr/lib/apache2/modules/mod_dumpio.so
LoadModule ext_filter_module /usr/lib/apache2/modules/mod_ext_filter.so
LoadModule include_module /usr/lib/apache2/modules/mod_include.so
LoadModule filter_module /usr/lib/apache2/modules/mod_filter.so
LoadModule charset_lite_module /usr/lib/apache2/modules/mod_charset_lite.so
LoadModule deflate_module /usr/lib/apache2/modules/mod_deflate.so
LoadModule ldap_module /usr/lib/apache2/modules/mod_ldap.so
LoadModule log_forensic_module /usr/lib/apache2/modules/mod_log_forensic.so
LoadModule env_module /usr/lib/apache2/modules/mod_env.so
LoadModule mime_magic_module /usr/lib/apache2/modules/mod_mime_magic.so
LoadModule cern_meta_module /usr/lib/apache2/modules/mod_cern_meta.so
LoadModule expires_module /usr/lib/apache2/modules/mod_expires.so
LoadModule headers_module /usr/lib/apache2/modules/mod_headers.so
LoadModule ident_module /usr/lib/apache2/modules/mod_ident.so
LoadModule usertrack_module /usr/lib/apache2/modules/mod_usertrack.so
LoadModule unique_id_module /usr/lib/apache2/modules/mod_unique_id.so
LoadModule setenvif_module /usr/lib/apache2/modules/mod_setenvif.so
LoadModule version_module /usr/lib/apache2/modules/mod_version.so
LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
LoadModule proxy_connect_module /usr/lib/apache2/modules/mod_proxy_connect.so
LoadModule proxy_ftp_module /usr/lib/apache2/modules/mod_proxy_ftp.so
LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
LoadModule proxy_ajp_module /usr/lib/apache2/modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module /usr/lib/apache2/modules/mod_proxy_balancer.so
LoadModule ssl_module /usr/lib/apache2/modules/mod_ssl.so
LoadModule mime_module /usr/lib/apache2/modules/mod_mime.so
LoadModule dav_module /usr/lib/apache2/modules/mod_dav.so
LoadModule status_module /usr/lib/apache2/modules/mod_status.so
LoadModule autoindex_module /usr/lib/apache2/modules/mod_autoindex.so
LoadModule asis_module /usr/lib/apache2/modules/mod_asis.so
LoadModule info_module /usr/lib/apache2/modules/mod_info.so
LoadModule suexec_module /usr/lib/apache2/modules/mod_suexec.so
LoadModule cgid_module /usr/lib/apache2/modules/mod_cgid.so
LoadModule cgi_module /usr/lib/apache2/modules/mod_cgi.so
LoadModule dav_fs_module /usr/lib/apache2/modules/mod_dav_fs.so
LoadModule dav_lock_module /usr/lib/apache2/modules/mod_dav_lock.so
LoadModule vhost_alias_module /usr/lib/apache2/modules/mod_vhost_alias.so
LoadModule negotiation_module /usr/lib/apache2/modules/mod_negotiation.so
LoadModule dir_module /usr/lib/apache2/modules/mod_dir.so
LoadModule imagemap_module /usr/lib/apache2/modules/mod_imagemap.so
LoadModule actions_module /usr/lib/apache2/modules/mod_actions.so
LoadModule speling_module /usr/lib/apache2/modules/mod_speling.so
LoadModule userdir_module /usr/lib/apache2/modules/mod_userdir.so
LoadModule alias_module /usr/lib/apache2/modules/mod_alias.so
LoadModule rewrite_module /usr/lib/apache2/modules/mod_rewrite.so
<IfModule !mpm_netware_module>
#
# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
#
# User/Group: The name (or #number) of the user/group to run httpd as.
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
#
User daemon
Group daemon
</IfModule>
# 'Main' server configuration
#
# The directives in this section set up the values used by the 'main'
# server, which responds to any requests that aren't handled by a
# <VirtualHost> definition. These values also provide defaults for
# any <VirtualHost> containers you may define later in the file.
#
# All of these directives may appear inside <VirtualHost> containers,
# in which case these default settings will be overridden for the
# virtual host being defined.
#
#
# ServerAdmin: Your address, where problems with the server should be
# e-mailed. This address appears on some server-generated pages, such
# as error documents. e.g. admin@your-domain.com
#
ServerAdmin you@example.com
#
# ServerName gives the name and port that the server uses to identify itself.
# This can often be determined automatically, but we recommend you specify
# it explicitly to prevent problems during startup.
#
# If your host doesn't have a registered DNS name, enter its IP address here.
#
#ServerName www.example.com:80
#
# DocumentRoot: The directory out of which you will serve your
# documents. By default, all requests are taken from this directory, but
# symbolic links and aliases may be used to point to other locations.
#
DocumentRoot "/usr/share/apache2/default-site/htdocs"
#
# Each directory to which Apache has access can be configured with respect
# to which services and features are allowed and/or disabled in that
# directory (and its subdirectories).
#
# First, we configure the "default" to be a very restrictive set of
# features.
#
<Directory />
Options FollowSymLinks
AllowOverride None
Order deny,allow
Deny from all
</Directory>
#
# Note that from this point forward you must specifically allow
# particular features to be enabled - so if something's not working as
# you might expect, make sure that you have specifically enabled it
# below.
#
#
# This should be changed to whatever you set DocumentRoot to.
#
<Directory "/usr/share/apache2/default-site/htdocs">
#
# Possible values for the Options directive are "None", "All",
# or any combination of:
# Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
#
# Note that "MultiViews" must be named *explicitly* --- "Options All"
# doesn't give it to you.
#
# The Options directive is both complicated and important. Please see
# http://httpd.apache.org/docs/2.2/mod/core.html#options
# for more information.
#
Options Indexes FollowSymLinks
#
# AllowOverride controls what directives may be placed in .htaccess files.
# It can be "All", "None", or any combination of the keywords:
# Options FileInfo AuthConfig Limit
#
AllowOverride None
#
# Controls who can get stuff from this server.
#
Order allow,deny
Allow from all
</Directory>
#
# DirectoryIndex: sets the file that Apache will serve if a directory
# is requested.
#
<IfModule dir_module>
DirectoryIndex index.html
</IfModule>
#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
<FilesMatch "^\.ht">
Order allow,deny
Deny from all
Satisfy All
</FilesMatch>
#
# ErrorLog: The location of the error log file.
# If you do not specify an ErrorLog directive within a <VirtualHost>
# container, error messages relating to that virtual host will be
# logged here. If you *do* define an error logfile for a <VirtualHost>
# container, that host's errors will be logged there and not here.
#
ErrorLog /var/log/apache2/error_log
#
# LogLevel: Control the number of messages logged to the error_log.
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
#
LogLevel warn
<IfModule log_config_module>
#
# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
#
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
<IfModule logio_module>
# You need to enable mod_logio.c to use %I and %O
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
</IfModule>
#
# The location and format of the access logfile (Common Logfile Format).
# If you do not define any access logfiles within a <VirtualHost>
# container, they will be logged here. Contrariwise, if you *do*
# define per-<VirtualHost> access logfiles, transactions will be
# logged therein and *not* in this file.
#
CustomLog /var/log/apache2/access_log common
#
# If you prefer a logfile with access, agent, and referer information
# (Combined Logfile Format) you can use the following directive.
#
#CustomLog /var/log/apache2/access_log combined
</IfModule>
<IfModule alias_module>
#
# Redirect: Allows you to tell clients about documents that used to
# exist in your server's namespace, but do not anymore. The client
# will make a new request for the document at its new location.
# Example:
# Redirect permanent /foo http://www.example.com/bar
#
# Alias: Maps web paths into filesystem paths and is used to
# access content that does not live under the DocumentRoot.
# Example:
# Alias /webpath /full/filesystem/path
#
# If you include a trailing / on /webpath then the server will
# require it to be present in the URL. You will also likely
# need to provide a <Directory> section to allow access to
# the filesystem path.
#
# ScriptAlias: This controls which directories contain server scripts.
# ScriptAliases are essentially the same as Aliases, except that
# documents in the target directory are treated as applications and
# run by the server when requested rather than as documents sent to the
# client. The same rules about trailing "/" apply to ScriptAlias
# directives as to Alias.
#
ScriptAlias /cgi-bin/ "/usr/lib/cgi-bin/"
</IfModule>
<IfModule cgid_module>
#
# ScriptSock: On threaded servers, designate the path to the UNIX
# socket used to communicate with the CGI daemon of mod_cgid.
#
#Scriptsock /var/run/apache2/cgisock
</IfModule>
#
# "/usr/lib/cgi-bin" should be changed to whatever your ScriptAliased
# CGI directory exists, if you have that configured.
#
<Directory "/usr/lib/cgi-bin">
AllowOverride None
Options None
Order allow,deny
Allow from all
</Directory>
#
# DefaultType: the default MIME type the server will use for a document
# if it cannot otherwise determine one, such as from filename extensions.
# If your server contains mostly text or HTML documents, "text/plain" is
# a good value. If most of your content is binary, such as applications
# or images, you may want to use "application/octet-stream" instead to
# keep browsers from trying to display binary files as though they are
# text.
#
DefaultType text/plain
<IfModule mime_module>
#
# TypesConfig points to the file containing the list of mappings from
# filename extension to MIME-type.
#
TypesConfig /etc/apache2/mime.types
#
# AddType allows you to add to or override the MIME configuration
# file specified in TypesConfig for specific file types.
#
#AddType application/x-gzip .tgz
#
# AddEncoding allows you to have certain browsers uncompress
# information on the fly. Note: Not all browsers support this.
#
#AddEncoding x-compress .Z
#AddEncoding x-gzip .gz .tgz
#
# If the AddEncoding directives above are commented-out, then you
# probably should define those extensions to indicate media types:
#
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
#
# AddHandler allows you to map certain file extensions to "handlers":
# actions unrelated to filetype. These can be either built into the server
# or added with the Action directive (see below)
#
# To use CGI scripts outside of ScriptAliased directories:
# (You will also need to add "ExecCGI" to the "Options" directive.)
#
#AddHandler cgi-script .cgi
# For type maps (negotiated resources):
#AddHandler type-map var
#
# Filters allow you to process content before it is sent to the client.
#
# To parse .shtml files for server-side includes (SSI):
# (You will also need to add "Includes" to the "Options" directive.)
#
#AddType text/html .shtml
#AddOutputFilter INCLUDES .shtml
</IfModule>
#
# The mod_mime_magic module allows the server to use various hints from the
# contents of the file itself to determine its type. The MIMEMagicFile
# directive tells the module where the hint definitions are located.
#
#MIMEMagicFile /etc/apache2/magic
#
# Customizable error responses come in three flavors:
# 1) plain text 2) local redirects 3) external redirects
#
# Some examples:
#ErrorDocument 500 "The server made a boo boo."
#ErrorDocument 404 /missing.html
#ErrorDocument 404 "/cgi-bin/missing_handler.pl"
#ErrorDocument 402 http://www.example.com/subscription_info.html
#
#
# EnableMMAP and EnableSendfile: On systems that support it,
# memory-mapping or the sendfile syscall is used to deliver
# files. This usually improves server performance, but must
# be turned off when serving from networked-mounted
# filesystems or if support for these functions is otherwise
# broken on your system.
#
#EnableMMAP off
#EnableSendfile off
# Supplemental configuration
#
# The configuration files in the /etc/apache2/extra/ directory can be
# included to add extra features or to modify the default configuration of
# the server, or you may simply copy their contents here and change as
# necessary.
# Server-pool management (MPM specific)
#Include /etc/apache2/extra/httpd-mpm.conf
# Multi-language error messages
#Include /etc/apache2/extra/httpd-multilang-errordoc.conf
# Fancy directory listings
#Include /etc/apache2/extra/httpd-autoindex.conf
# Language settings
#Include /etc/apache2/extra/httpd-languages.conf
# User home directories
#Include /etc/apache2/extra/httpd-userdir.conf
# Real-time info on requests and configuration
#Include /etc/apache2/extra/httpd-info.conf
# Virtual hosts
#Include /etc/apache2/extra/httpd-vhosts.conf
# Local access to the Apache HTTP Server Manual
#Include /etc/apache2/extra/httpd-manual.conf
# Distributed authoring and versioning (WebDAV)
#Include /etc/apache2/extra/httpd-dav.conf
# Various default settings
#Include /etc/apache2/extra/httpd-default.conf
# Secure (SSL/TLS) connections
#Include /etc/apache2/extra/httpd-ssl.conf
#
# Note: The following must must be present to support
# starting without SSL on platforms with no /dev/random equivalent
# but a statically compiled-in mod_ssl.
#
<IfModule ssl_module>
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>

View File

@@ -0,0 +1,500 @@
#
# This is the main Apache HTTP server configuration file. It contains the
# configuration directives that give the server its instructions.
# See <URL:http://httpd.apache.org/docs/2.2> for detailed information.
# In particular, see
# <URL:http://httpd.apache.org/docs/2.2/mod/directives.html>
# for a discussion of each configuration directive.
#
# Do NOT simply read the instructions in here without understanding
# what they do. They're here only as hints or reminders. If you are unsure
# consult the online docs. You have been warned.
#
# Configuration and logfile names: If the filenames you specify for many
# of the server's control files begin with "/" (or "drive:/" for Win32), the
# server will use that explicit path. If the filenames do *not* begin
# with "/", the value of ServerRoot is prepended -- so "log/foo_log"
# with ServerRoot set to "/usr" will be interpreted by the
# server as "/usr/log/foo_log".
#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# Do not add a slash at the end of the directory path. If you point
# ServerRoot at a non-local disk, be sure to point the LockFile directive
# at a local disk. If you wish to share the same ServerRoot for multiple
# httpd daemons, you will need to change at least LockFile and PidFile.
#
ServerRoot "/usr"
#
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
#
# Change this to Listen on specific IP addresses as shown below to
# prevent Apache from glomming onto all bound IP addresses.
#
#Listen 12.34.56.78:80
Listen 80
#
# Dynamic Shared Object (DSO) Support
#
# To be able to use the functionality of a module which was built as a DSO you
# have to place corresponding `LoadModule' lines at this location so the
# directives contained in it are actually available _before_ they are used.
# Statically compiled modules (those listed by `httpd -l') do not need
# to be loaded here.
#
# Example:
# LoadModule foo_module modules/mod_foo.so
#
LoadModule authn_file_module libexec/apache2/mod_authn_file.so
LoadModule authn_dbm_module libexec/apache2/mod_authn_dbm.so
LoadModule authn_anon_module libexec/apache2/mod_authn_anon.so
LoadModule authn_dbd_module libexec/apache2/mod_authn_dbd.so
LoadModule authn_default_module libexec/apache2/mod_authn_default.so
LoadModule authz_host_module libexec/apache2/mod_authz_host.so
LoadModule authz_groupfile_module libexec/apache2/mod_authz_groupfile.so
LoadModule authz_user_module libexec/apache2/mod_authz_user.so
LoadModule authz_dbm_module libexec/apache2/mod_authz_dbm.so
LoadModule authz_owner_module libexec/apache2/mod_authz_owner.so
LoadModule authz_default_module libexec/apache2/mod_authz_default.so
LoadModule auth_basic_module libexec/apache2/mod_auth_basic.so
LoadModule auth_digest_module libexec/apache2/mod_auth_digest.so
LoadModule cache_module libexec/apache2/mod_cache.so
LoadModule disk_cache_module libexec/apache2/mod_disk_cache.so
LoadModule mem_cache_module libexec/apache2/mod_mem_cache.so
LoadModule dbd_module libexec/apache2/mod_dbd.so
LoadModule dumpio_module libexec/apache2/mod_dumpio.so
LoadModule reqtimeout_module libexec/apache2/mod_reqtimeout.so
LoadModule ext_filter_module libexec/apache2/mod_ext_filter.so
LoadModule include_module libexec/apache2/mod_include.so
LoadModule filter_module libexec/apache2/mod_filter.so
LoadModule substitute_module libexec/apache2/mod_substitute.so
LoadModule deflate_module libexec/apache2/mod_deflate.so
LoadModule log_config_module libexec/apache2/mod_log_config.so
LoadModule log_forensic_module libexec/apache2/mod_log_forensic.so
LoadModule logio_module libexec/apache2/mod_logio.so
LoadModule env_module libexec/apache2/mod_env.so
LoadModule mime_magic_module libexec/apache2/mod_mime_magic.so
LoadModule cern_meta_module libexec/apache2/mod_cern_meta.so
LoadModule expires_module libexec/apache2/mod_expires.so
LoadModule headers_module libexec/apache2/mod_headers.so
LoadModule ident_module libexec/apache2/mod_ident.so
LoadModule usertrack_module libexec/apache2/mod_usertrack.so
#LoadModule unique_id_module libexec/apache2/mod_unique_id.so
LoadModule setenvif_module libexec/apache2/mod_setenvif.so
LoadModule version_module libexec/apache2/mod_version.so
LoadModule proxy_module libexec/apache2/mod_proxy.so
LoadModule proxy_connect_module libexec/apache2/mod_proxy_connect.so
LoadModule proxy_ftp_module libexec/apache2/mod_proxy_ftp.so
LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so
LoadModule proxy_scgi_module libexec/apache2/mod_proxy_scgi.so
LoadModule proxy_ajp_module libexec/apache2/mod_proxy_ajp.so
LoadModule proxy_balancer_module libexec/apache2/mod_proxy_balancer.so
LoadModule ssl_module libexec/apache2/mod_ssl.so
LoadModule mime_module libexec/apache2/mod_mime.so
LoadModule dav_module libexec/apache2/mod_dav.so
LoadModule status_module libexec/apache2/mod_status.so
LoadModule autoindex_module libexec/apache2/mod_autoindex.so
LoadModule asis_module libexec/apache2/mod_asis.so
LoadModule info_module libexec/apache2/mod_info.so
LoadModule cgi_module libexec/apache2/mod_cgi.so
LoadModule dav_fs_module libexec/apache2/mod_dav_fs.so
LoadModule vhost_alias_module libexec/apache2/mod_vhost_alias.so
LoadModule negotiation_module libexec/apache2/mod_negotiation.so
LoadModule dir_module libexec/apache2/mod_dir.so
LoadModule imagemap_module libexec/apache2/mod_imagemap.so
LoadModule actions_module libexec/apache2/mod_actions.so
LoadModule speling_module libexec/apache2/mod_speling.so
LoadModule userdir_module libexec/apache2/mod_userdir.so
LoadModule alias_module libexec/apache2/mod_alias.so
LoadModule rewrite_module libexec/apache2/mod_rewrite.so
#LoadModule perl_module libexec/apache2/mod_perl.so
#LoadModule php5_module libexec/apache2/libphp5.so
#LoadModule hfs_apple_module libexec/apache2/mod_hfs_apple.so
<IfModule !mpm_netware_module>
<IfModule !mpm_winnt_module>
#
# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
#
# User/Group: The name (or #number) of the user/group to run httpd as.
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
#
User _www
Group _www
</IfModule>
</IfModule>
# 'Main' server configuration
#
# The directives in this section set up the values used by the 'main'
# server, which responds to any requests that aren't handled by a
# <VirtualHost> definition. These values also provide defaults for
# any <VirtualHost> containers you may define later in the file.
#
# All of these directives may appear inside <VirtualHost> containers,
# in which case these default settings will be overridden for the
# virtual host being defined.
#
#
# ServerAdmin: Your address, where problems with the server should be
# e-mailed. This address appears on some server-generated pages, such
# as error documents. e.g. admin@your-domain.com
#
ServerAdmin you@example.com
#
# ServerName gives the name and port that the server uses to identify itself.
# This can often be determined automatically, but we recommend you specify
# it explicitly to prevent problems during startup.
#
# If your host doesn't have a registered DNS name, enter its IP address here.
#
#ServerName www.example.com:80
#
# DocumentRoot: The directory out of which you will serve your
# documents. By default, all requests are taken from this directory, but
# symbolic links and aliases may be used to point to other locations.
#
DocumentRoot "/Library/WebServer/Documents"
#
# Each directory to which Apache has access can be configured with respect
# to which services and features are allowed and/or disabled in that
# directory (and its subdirectories).
#
# First, we configure the "default" to be a very restrictive set of
# features.
#
<Directory />
Options FollowSymLinks
AllowOverride None
Order deny,allow
Deny from all
</Directory>
#
# Note that from this point forward you must specifically allow
# particular features to be enabled - so if something's not working as
# you might expect, make sure that you have specifically enabled it
# below.
#
#
# This should be changed to whatever you set DocumentRoot to.
#
<Directory "/Library/WebServer/Documents">
#
# Possible values for the Options directive are "None", "All",
# or any combination of:
# Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
#
# Note that "MultiViews" must be named *explicitly* --- "Options All"
# doesn't give it to you.
#
# The Options directive is both complicated and important. Please see
# http://httpd.apache.org/docs/2.2/mod/core.html#options
# for more information.
#
Options Indexes FollowSymLinks MultiViews
#
# AllowOverride controls what directives may be placed in .htaccess files.
# It can be "All", "None", or any combination of the keywords:
# Options FileInfo AuthConfig Limit
#
AllowOverride None
#
# Controls who can get stuff from this server.
#
Order allow,deny
Allow from all
</Directory>
#
# DirectoryIndex: sets the file that Apache will serve if a directory
# is requested.
#
<IfModule dir_module>
DirectoryIndex index.html
</IfModule>
#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
<FilesMatch "^\.([Hh][Tt]|[Dd][Ss]_[Ss])">
Order allow,deny
Deny from all
Satisfy All
</FilesMatch>
#
# Apple specific filesystem protection.
#
<Files "rsrc">
Order allow,deny
Deny from all
Satisfy All
</Files>
<DirectoryMatch ".*\.\.namedfork">
Order allow,deny
Deny from all
Satisfy All
</DirectoryMatch>
#
# ErrorLog: The location of the error log file.
# If you do not specify an ErrorLog directive within a <VirtualHost>
# container, error messages relating to that virtual host will be
# logged here. If you *do* define an error logfile for a <VirtualHost>
# container, that host's errors will be logged there and not here.
#
ErrorLog "/private/var/log/apache2/error_log"
#
# LogLevel: Control the number of messages logged to the error_log.
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
#
LogLevel warn
<IfModule log_config_module>
#
# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
#
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
<IfModule logio_module>
# You need to enable mod_logio.c to use %I and %O
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
</IfModule>
#
# The location and format of the access logfile (Common Logfile Format).
# If you do not define any access logfiles within a <VirtualHost>
# container, they will be logged here. Contrariwise, if you *do*
# define per-<VirtualHost> access logfiles, transactions will be
# logged therein and *not* in this file.
#
CustomLog "/private/var/log/apache2/access_log" common
#
# If you prefer a logfile with access, agent, and referer information
# (Combined Logfile Format) you can use the following directive.
#
#CustomLog "/private/var/log/apache2/access_log" combined
</IfModule>
<IfModule alias_module>
#
# Redirect: Allows you to tell clients about documents that used to
# exist in your server's namespace, but do not anymore. The client
# will make a new request for the document at its new location.
# Example:
# Redirect permanent /foo http://www.example.com/bar
#
# Alias: Maps web paths into filesystem paths and is used to
# access content that does not live under the DocumentRoot.
# Example:
# Alias /webpath /full/filesystem/path
#
# If you include a trailing / on /webpath then the server will
# require it to be present in the URL. You will also likely
# need to provide a <Directory> section to allow access to
# the filesystem path.
#
# ScriptAlias: This controls which directories contain server scripts.
# ScriptAliases are essentially the same as Aliases, except that
# documents in the target directory are treated as applications and
# run by the server when requested rather than as documents sent to the
# client. The same rules about trailing "/" apply to ScriptAlias
# directives as to Alias.
#
ScriptAliasMatch ^/cgi-bin/((?!(?i:webobjects)).*$) "/Library/WebServer/CGI-Executables/$1"
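#
# As an illustrative sketch only (the paths below are placeholders, not part
# of this configuration), an Alias for content outside the DocumentRoot is
# typically paired with a <Directory> section that grants access:
#
#Alias /webpath "/full/filesystem/path"
#<Directory "/full/filesystem/path">
#    Order allow,deny
#    Allow from all
#</Directory>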
</IfModule>
<IfModule cgid_module>
#
# ScriptSock: On threaded servers, designate the path to the UNIX
# socket used to communicate with the CGI daemon of mod_cgid.
#
#ScriptSock /private/var/run/cgisock
</IfModule>
#
# "/Library/WebServer/CGI-Executables" should be changed to whatever your ScriptAliased
# CGI directory exists, if you have that configured.
#
<Directory "/Library/WebServer/CGI-Executables">
AllowOverride None
Options None
Order allow,deny
Allow from all
</Directory>
#
# DefaultType: the default MIME type the server will use for a document
# if it cannot otherwise determine one, such as from filename extensions.
# If your server contains mostly text or HTML documents, "text/plain" is
# a good value. If most of your content is binary, such as applications
# or images, you may want to use "application/octet-stream" instead to
# keep browsers from trying to display binary files as though they are
# text.
#
DefaultType text/plain
<IfModule mime_module>
#
# TypesConfig points to the file containing the list of mappings from
# filename extension to MIME-type.
#
TypesConfig /private/etc/apache2/mime.types
#
# AddType allows you to add to or override the MIME configuration
# file specified in TypesConfig for specific file types.
#
#AddType application/x-gzip .tgz
#
# AddEncoding allows you to have certain browsers uncompress
# information on the fly. Note: Not all browsers support this.
#
#AddEncoding x-compress .Z
#AddEncoding x-gzip .gz .tgz
#
# If the AddEncoding directives above are commented-out, then you
# probably should define those extensions to indicate media types:
#
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
#
# AddHandler allows you to map certain file extensions to "handlers":
# actions unrelated to filetype. These can be either built into the server
# or added with the Action directive (see below)
#
# To use CGI scripts outside of ScriptAliased directories:
# (You will also need to add "ExecCGI" to the "Options" directive.)
#
#AddHandler cgi-script .cgi
# For type maps (negotiated resources):
#AddHandler type-map var
#
# Filters allow you to process content before it is sent to the client.
#
# To parse .shtml files for server-side includes (SSI):
# (You will also need to add "Includes" to the "Options" directive.)
#
#AddType text/html .shtml
#AddOutputFilter INCLUDES .shtml
</IfModule>
#
# The mod_mime_magic module allows the server to use various hints from the
# contents of the file itself to determine its type. The MIMEMagicFile
# directive tells the module where the hint definitions are located.
#
#MIMEMagicFile /private/etc/apache2/magic
#
# Customizable error responses come in three flavors:
# 1) plain text 2) local redirects 3) external redirects
#
# Some examples:
#ErrorDocument 500 "The server made a boo boo."
#ErrorDocument 404 /missing.html
#ErrorDocument 404 "/cgi-bin/missing_handler.pl"
#ErrorDocument 402 http://www.example.com/subscription_info.html
#
#
# MaxRanges: Maximum number of Ranges in a request before
# returning the entire resource, or one of the special
# values 'default', 'none' or 'unlimited'.
# Default setting is to accept 200 Ranges.
#MaxRanges unlimited
#
# EnableMMAP and EnableSendfile: On systems that support it,
# memory-mapping or the sendfile syscall is used to deliver
# files. This usually improves server performance, but must
# be turned off when serving from network-mounted
# filesystems or if support for these functions is otherwise
# broken on your system.
#
#EnableMMAP off
#EnableSendfile off
# 6894961
TraceEnable off
# Supplemental configuration
#
# The configuration files in the /private/etc/apache2/extra/ directory can be
# included to add extra features or to modify the default configuration of
# the server, or you may simply copy their contents here and change as
# necessary.
# Server-pool management (MPM specific)
Include /private/etc/apache2/extra/httpd-mpm.conf
# Multi-language error messages
#Include /private/etc/apache2/extra/httpd-multilang-errordoc.conf
# Fancy directory listings
Include /private/etc/apache2/extra/httpd-autoindex.conf
# Language settings
Include /private/etc/apache2/extra/httpd-languages.conf
# User home directories
Include /private/etc/apache2/extra/httpd-userdir.conf
# Real-time info on requests and configuration
#Include /private/etc/apache2/extra/httpd-info.conf
# Virtual hosts
#Include /private/etc/apache2/extra/httpd-vhosts.conf
# Local access to the Apache HTTP Server Manual
Include /private/etc/apache2/extra/httpd-manual.conf
# Distributed authoring and versioning (WebDAV)
#Include /private/etc/apache2/extra/httpd-dav.conf
# Various default settings
#Include /private/etc/apache2/extra/httpd-default.conf
# Secure (SSL/TLS) connections
#Include /private/etc/apache2/extra/httpd-ssl.conf
#
# Note: The following must be present to support
# starting without SSL on platforms with no /dev/random equivalent
# but a statically compiled-in mod_ssl.
#
<IfModule ssl_module>
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>
Include /private/etc/apache2/other/*.conf

View File

@@ -0,0 +1,13 @@
Gregory Romé has written an AsciiDoc plugin for the Redmine project management application.
https://github.com/foo-users/foo
へと `vicmd` キーマップを足してみている試み、
アニメーションgifです。
tag::romé[]
Gregory Romé has written an AsciiDoc plugin for the Redmine project management application.
end::romé[]
== Überschrift
* Codierungen sind verrückt auf älteren Versionen von Ruby

10
samples/AsciiDoc/list.asc Normal file
View File

@@ -0,0 +1,10 @@
AsciiDoc Home Page
==================
Example Articles
~~~~~~~~~~~~~~~~
- Item 1
- Item 2
- Item 3

View File

@@ -0,0 +1,25 @@
Document Title
==============
Doc Writer <thedoc@asciidoctor.org>
:idprefix: id_
Preamble paragraph.
NOTE: This is a test, only a test.
== Section A
*Section A* paragraph.
=== Section A Subsection
*Section A* 'subsection' paragraph.
== Section B
*Section B* paragraph.
.Section B list
* Item 1
* Item 2
* Item 3

121
samples/Awk/test.awk Normal file
View File

@@ -0,0 +1,121 @@
#!/bin/awk -f
BEGIN {
# It is not possible to define output file names here because
# FILENAME is not defined in the BEGIN section
n = "";
printf "Generating data files ...";
network_max_bandwidth_in_byte = 10000000;
network_max_packet_per_second = 1000000;
last3 = 0;
last4 = 0;
last5 = 0;
last6 = 0;
}
{
if ($1 ~ /Average/)
{ # Skip the Average values
n = "";
next;
}
if ($2 ~ /all/)
{ # This is the cpu info
print $3 > FILENAME".cpu.user.dat";
# print $4 > FILENAME".cpu.nice.dat";
print $5 > FILENAME".cpu.system.dat";
# print $6 > FILENAME".cpu.iowait.dat";
print $7 > FILENAME".cpu.idle.dat";
print 100-$7 > FILENAME".cpu.busy.dat";
}
if ($2 ~ /eth0/)
{ # This is the eth0 network info
if ($3 > network_max_packet_per_second)
print last3 > FILENAME".net.rxpck.dat"; # Total number of packets received per second.
else
{
last3 = $3;
print $3 > FILENAME".net.rxpck.dat"; # Total number of packets received per second.
}
if ($4 > network_max_packet_per_second)
print last4 > FILENAME".net.txpck.dat"; # Total number of packets transmitted per second.
else
{
last4 = $4;
print $4 > FILENAME".net.txpck.dat"; # Total number of packets transmitted per second.
}
if ($5 > network_max_bandwidth_in_byte)
print last5 > FILENAME".net.rxbyt.dat"; # Total number of bytes received per second.
else
{
last5 = $5;
print $5 > FILENAME".net.rxbyt.dat"; # Total number of bytes received per second.
}
if ($6 > network_max_bandwidth_in_byte)
print last6 > FILENAME".net.txbyt.dat"; # Total number of bytes transmitted per second.
else
{
last6 = $6;
print $6 > FILENAME".net.txbyt.dat"; # Total number of bytes transmitted per second.
}
# print $7 > FILENAME".net.rxcmp.dat"; # Number of compressed packets received per second (for cslip etc.).
# print $8 > FILENAME".net.txcmp.dat"; # Number of compressed packets transmitted per second.
# print $9 > FILENAME".net.rxmcst.dat"; # Number of multicast packets received per second.
}
# Detect which is the next info to be parsed
if ($2 ~ /proc|cswch|tps|kbmemfree|totsck/)
{
n = $2;
}
# Only get lines with numbers (real data !)
if ($2 ~ /[0-9]/)
{
if (n == "proc/s")
{ # This is the proc/s info
print $2 > FILENAME".proc.dat";
# n = "";
}
if (n == "cswch/s")
{ # This is the context switches per second info
print $2 > FILENAME".ctxsw.dat";
# n = "";
}
if (n == "tps")
{ # This is the disk info
print $2 > FILENAME".disk.tps.dat"; # total transfers per second
print $3 > FILENAME".disk.rtps.dat"; # read requests per second
print $4 > FILENAME".disk.wtps.dat"; # write requests per second
print $5 > FILENAME".disk.brdps.dat"; # block reads per second
print $6 > FILENAME".disk.bwrps.dat"; # block writes per second
# n = "";
}
if (n == "kbmemfree")
{ # This is the mem info
print $2 > FILENAME".mem.kbmemfree.dat"; # Amount of free memory available in kilobytes.
print $3 > FILENAME".mem.kbmemused.dat"; # Amount of used memory in kilobytes. This does not take into account memory used by the kernel itself.
print $4 > FILENAME".mem.memused.dat"; # Percentage of used memory.
# It appears the kbmemshrd has been removed from the sysstat output - ntolia
# print $X > FILENAME".mem.kbmemshrd.dat"; # Amount of memory shared by the system in kilobytes. Always zero with 2.4 kernels.
# print $5 > FILENAME".mem.kbbuffers.dat"; # Amount of memory used as buffers by the kernel in kilobytes.
print $6 > FILENAME".mem.kbcached.dat"; # Amount of memory used to cache data by the kernel in kilobytes.
# print $7 > FILENAME".mem.kbswpfree.dat"; # Amount of free swap space in kilobytes.
# print $8 > FILENAME".mem.kbswpused.dat"; # Amount of used swap space in kilobytes.
print $9 > FILENAME".mem.swpused.dat"; # Percentage of used swap space.
# n = "";
}
if (n == "totsck")
{ # This is the socket info
print $2 > FILENAME".sock.totsck.dat"; # Total number of used sockets.
print $3 > FILENAME".sock.tcpsck.dat"; # Number of TCP sockets currently in use.
# print $4 > FILENAME".sock.udpsck.dat"; # Number of UDP sockets currently in use.
# print $5 > FILENAME".sock.rawsck.dat"; # Number of RAW sockets currently in use.
# print $6 > FILENAME".sock.ip-frag.dat"; # Number of IP fragments currently in use.
# n = "";
}
}
}
END {
print " '" FILENAME "' done.";
}

BIN
samples/Binary/cube.stl Normal file

Binary file not shown.

View File

@@ -0,0 +1,147 @@
Local bk = CreateBank(8)
PokeFloat bk, 0, -1
Print Bin(PeekInt(bk, 0))
Print %1000000000000000
Print Bin(1 Shl 31)
Print $1f
Print $ff
Print $1f + (127 - 15)
Print Hex(%01111111100000000000000000000000)
Print Hex(~%11111111100000000000000000000000)
Print Bin(FloatToHalf(-2.5))
Print HalfToFloat(FloatToHalf(-200000000000.0))
Print Bin(FToI(-2.5))
WaitKey
End
; Half-precision (16-bit) arithmetic library
;============================================
Global Half_CBank_
Function FToI(f#)
If Half_CBank_ = 0 Then Half_CBank_ = CreateBank(4)
PokeFloat Half_CBank_, 0, f
Return PeekInt(Half_CBank_, 0)
End Function
Function HalfToFloat#(h)
Local signBit, exponent, fraction, fBits
signBit = (h And 32768) <> 0
exponent = (h And %0111110000000000) Shr 10
fraction = (h And %0000001111111111)
If exponent = $1F Then exponent = $FF : ElseIf exponent Then exponent = (exponent - 15) + 127
fBits = (signBit Shl 31) Or (exponent Shl 23) Or (fraction Shl 13)
If Half_CBank_ = 0 Then Half_CBank_ = CreateBank(4)
PokeInt Half_CBank_, 0, fBits
Return PeekFloat(Half_CBank_, 0)
End Function
Function FloatToHalf(f#)
Local signBit, exponent, fraction, fBits
If Half_CBank_ = 0 Then Half_CBank_ = CreateBank(4)
PokeFloat Half_CBank_, 0, f
fBits = PeekInt(Half_CBank_, 0)
signBit = (fBits And (1 Shl 31)) <> 0
exponent = (fBits And $7F800000) Shr 23
fraction = fBits And $007FFFFF
If exponent
exponent = exponent - 127
If Abs(exponent) > $1F
If exponent <> ($FF - 127) Then fraction = 0
exponent = $1F * Sgn(exponent)
Else
exponent = exponent + 15
EndIf
exponent = exponent And %11111
EndIf
fraction = fraction Shr 13
Return (signBit Shl 15) Or (exponent Shl 10) Or fraction
End Function
Function HalfAdd(l, r)
End Function
Function HalfSub(l, r)
End Function
Function HalfMul(l, r)
End Function
Function HalfDiv(l, r)
End Function
Function HalfLT(l, r)
End Function
Function HalfGT(l, r)
End Function
; Double-precision (64-bit) arithmetic library
;===============================================
Global DoubleOut[1], Double_CBank_
Function DoubleToFloat#(d[1])
End Function
Function FloatToDouble(f#)
End Function
Function IntToDouble(i)
End Function
Function SefToDouble(s, e, f)
End Function
Function DoubleAdd(l, r)
End Function
Function DoubleSub(l, r)
End Function
Function DoubleMul(l, r)
End Function
Function DoubleDiv(l, r)
End Function
Function DoubleLT(l, r)
End Function
Function DoubleGT(l, r)
End Function
;~IDEal Editor Parameters:
;~F#1A#20#2F
;~C#Blitz3D

369
samples/BlitzBasic/LList.bb Normal file
View File

@@ -0,0 +1,369 @@
; Double-linked list container class
;====================================
; with thanks to MusicianKool, for concept and issue fixes
Type LList
Field head_.ListNode
Field tail_.ListNode
End Type
Type ListNode
Field pv_.ListNode
Field nx_.ListNode
Field Value
End Type
Type Iterator
Field Value
Field l_.LList
Field cn_.ListNode, cni_
End Type
;Create a new LList object
Function CreateList.LList()
Local l.LList = New LList
l\head_ = New ListNode
l\tail_ = New ListNode
l\head_\nx_ = l\tail_ ;End caps
l\head_\pv_ = l\head_ ;These make it more or less safe to iterate freely
l\head_\Value = 0
l\tail_\nx_ = l\tail_
l\tail_\pv_ = l\head_
l\tail_\Value = 0
Return l
End Function
;Free a list and all elements (not any values)
Function FreeList(l.LList)
ClearList l
Delete l\head_
Delete l\tail_
Delete l
End Function
;Remove all the elements from a list (does not free values)
Function ClearList(l.LList)
Local n.ListNode = l\head_\nx_
While n <> l\tail_
Local nx.ListNode = n\nx_
Delete n
n = nx
Wend
l\head_\nx_ = l\tail_
l\tail_\pv_ = l\head_
End Function
;Count the number of elements in a list (slow)
Function ListLength(l.LList)
Local i.Iterator = GetIterator(l), elems
While EachIn(i)
elems = elems + 1
Wend
Return elems
End Function
;Return True if a list contains a given value
Function ListContains(l.LList, Value)
Return (ListFindNode(l, Value) <> Null)
End Function
;Create a linked list from the int values in a bank (slow)
Function ListFromBank.LList(bank)
Local l.LList = CreateList()
Local size = BankSize(bank), p
For p = 0 To size - 4 Step 4
ListAddLast l, PeekInt(bank, p)
Next
Return l
End Function
;Create a bank containing all the values in a list (slow)
Function ListToBank(l.LList)
Local size = ListLength(l) * 4
Local bank = CreateBank(size)
Local i.Iterator = GetIterator(l), p = 0
While EachIn(i)
PokeInt bank, p, i\Value
p = p + 4
Wend
Return bank
End Function
;Swap the contents of two list objects
Function SwapLists(l1.LList, l2.LList)
Local tempH.ListNode = l1\head_, tempT.ListNode = l1\tail_
l1\head_ = l2\head_
l1\tail_ = l2\tail_
l2\head_ = tempH
l2\tail_ = tempT
End Function
;Create a new list containing the same values as the first
Function CopyList.LList(lo.LList)
Local ln.LList = CreateList()
Local i.Iterator = GetIterator(lo) : While EachIn(i)
ListAddLast ln, i\Value
Wend
Return ln
End Function
;Reverse the order of elements of a list
Function ReverseList(l.LList)
Local n1.ListNode, n2.ListNode, tmp.ListNode
n1 = l\head_
n2 = l\head_\nx_
While n1 <> l\tail_
n1\pv_ = n2
tmp = n2\nx_
n2\nx_ = n1
n1 = n2
n2 = tmp
Wend
tmp = l\head_
l\head_ = l\tail_
l\tail_ = tmp
l\head_\pv_ = l\head_
l\tail_\nx_ = l\tail_
End Function
;Search a list to retrieve the first node with the given value
Function ListFindNode.ListNode(l.LList, Value)
Local n.ListNode = l\head_\nx_
While n <> l\tail_
If n\Value = Value Then Return n
n = n\nx_
Wend
Return Null
End Function
;Append a value to the end of a list (fast) and return the node
Function ListAddLast.ListNode(l.LList, Value)
Local n.ListNode = New ListNode
n\pv_ = l\tail_\pv_
n\nx_ = l\tail_
n\Value = Value
l\tail_\pv_ = n
n\pv_\nx_ = n
Return n
End Function
;Attach a value to the start of a list (fast) and return the node
Function ListAddFirst.ListNode(l.LList, Value)
Local n.ListNode = New ListNode
n\pv_ = l\head_
n\nx_ = l\head_\nx_
n\Value = Value
l\head_\nx_ = n
n\nx_\pv_ = n
Return n
End Function
;Remove the first occurrence of the given value from a list
Function ListRemove(l.LList, Value)
Local n.ListNode = ListFindNode(l, Value)
If n <> Null Then RemoveListNode n
End Function
;Remove a node from a list
Function RemoveListNode(n.ListNode)
n\pv_\nx_ = n\nx_
n\nx_\pv_ = n\pv_
Delete n
End Function
;Return the value of the element at the given position from the start of the list,
;or backwards from the end of the list for a negative index
Function ValueAtIndex(l.LList, index)
Local n.ListNode = ListNodeAtIndex(l, index)
If n <> Null Then Return n\Value : Else Return 0
End Function
;Return the ListNode at the given position from the start of the list, or backwards
;from the end of the list for a negative index, or Null if invalid
Function ListNodeAtIndex.ListNode(l.LList, index)
Local e, n.ListNode
If index >= 0
n = l\head_
For e = 0 To index
n = n\nx_
Next
If n = l\tail_ Then n = Null ;Beyond the end of the list - not valid
Else ;Negative index - count backward
n = l\tail_
For e = 0 To index Step -1
n = n\pv_
Next
If n = l\head_ Then n = Null ;Before the start of the list - not valid
EndIf
Return n
End Function
;Replace a value at the given position (added by MusicianKool)
Function ReplaceValueAtIndex(l.LList,index,value)
Local n.ListNode = ListNodeAtIndex(l,index)
If n <> Null Then n\Value = value:Else Return 0
End Function
;Remove and return a value at the given position (added by MusicianKool)
Function RemoveNodeAtIndex(l.LList,index)
Local n.ListNode = ListNodeAtIndex(l,index),tval
If n <> Null Then tval = n\Value:RemoveListNode(n):Return tval:Else Return 0
End Function
;Retrieve the first value from a list
Function ListFirst(l.LList)
If l\head_\nx_ <> l\tail_ Then Return l\head_\nx_\Value
End Function
;Retrieve the last value from a list
Function ListLast(l.LList)
If l\tail_\pv_ <> l\head_ Then Return l\tail_\pv_\Value
End Function
;Remove the first element from a list, and return its value
Function ListRemoveFirst(l.LList)
Local val
If l\head_\nx_ <> l\tail_
val = l\head_\nx_\Value
RemoveListNode l\head_\nx_
EndIf
Return val
End Function
;Remove the last element from a list, and return its value
Function ListRemoveLast(l.LList)
Local val
If l\tail_\pv_ <> l\head_
val = l\tail_\pv_\Value
RemoveListNode l\tail_\pv_
EndIf
Return val
End Function
;Insert a value into a list before the specified node, and return the new node
Function InsertBeforeNode.ListNode(Value, n.ListNode)
Local bef.ListNode = New ListNode
bef\pv_ = n\pv_
bef\nx_ = n
bef\Value = Value
n\pv_ = bef
bef\pv_\nx_ = bef
Return bef
End Function
;Insert a value into a list after the specified node, and return the new node
Function InsertAfterNode.ListNode(Value, n.ListNode)
Local aft.ListNode = New ListNode
aft\nx_ = n\nx_
aft\pv_ = n
aft\Value = Value
n\nx_ = aft
aft\nx_\pv_ = aft
Return aft
End Function
;Get an iterator object to use with a loop
;This function means that most programs won't have to think about deleting iterators manually
;(in general only a small, constant number will be created)
Function GetIterator.Iterator(l.LList)
Local i.Iterator
If l = Null Then RuntimeError "Cannot create Iterator for Null"
For i = Each Iterator ;See if there's an available iterator at the moment
If i\l_ = Null Then Exit
Next
If i = Null Then i = New Iterator ;If there wasn't, create one
i\l_ = l
i\cn_ = l\head_
i\cni_ = -1
i\Value = 0 ;No special reason why this has to be anything, but meh
Return i
End Function
;Use as the argument to While to iterate over the members of a list
Function EachIn(i.Iterator)
i\cn_ = i\cn_\nx_
If i\cn_ <> i\l_\tail_ ;Still items in the list
i\Value = i\cn_\Value
i\cni_ = i\cni_ + 1
Return True
Else
i\l_ = Null ;Disconnect from the list, having reached the end
i\cn_ = Null
i\cni_ = -1
Return False
EndIf
End Function
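;A minimal usage sketch of the GetIterator/EachIn protocol described above.
;The function name, list name and values here are illustrative only and are
;not part of the original library.
Function IteratorUsageDemo()
Local demo.LList = CreateList()
ListAddLast demo, 10
ListAddLast demo, 20
ListAddLast demo, 30
Local i.Iterator = GetIterator(demo)
While EachIn(i)
Print i\Value
Wend
FreeList demo
End Function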
;Remove from the containing list the element currently pointed to by an iterator
Function IteratorRemove(i.Iterator)
If (i\cn_ <> i\l_\head_) And (i\cn_ <> i\l_\tail_)
Local temp.ListNode = i\cn_
i\cn_ = i\cn_\pv_
i\cni_ = i\cni_ - 1
i\Value = 0
RemoveListNode temp
Return True
Else
Return False
EndIf
End Function
;Call this before breaking out of an EachIn loop, to disconnect the iterator from the list
Function IteratorBreak(i.Iterator)
i\l_ = Null
i\cn_ = Null
i\cni_ = -1
i\Value = 0
End Function
;~IDEal Editor Parameters:
;~F#5#A#10#18#2A#32#3E#47#4C#58#66#6F#78#8F#9B#A9#B7#BD#C5#CC
;~F#E3#E9#EF#F4#F9#103#10D#11B#12B#13F#152#163
;~C#Blitz3D

View File

@@ -0,0 +1,66 @@
Local i, start, result
Local s.Sum3Obj = New Sum3Obj
For i = 1 To 100000
s = New Sum3Obj
result = Handle Before s
Delete s
Next
start = MilliSecs()
For i = 1 To 1000000
result = Sum3_(MakeSum3Obj(i, i, i))
Next
start = MilliSecs() - start
Print start
start = MilliSecs()
For i = 1 To 1000000
result = Sum3(i, i, i)
Next
start = MilliSecs() - start
Print start
WaitKey
End
Function Sum3(a, b, c)
Return a + b + c
End Function
Type Sum3Obj
Field isActive
Field a, b, c
End Type
Function MakeSum3Obj(a, b, c)
Local s.Sum3Obj = Last Sum3Obj
If s\isActive Then s = New Sum3Obj
s\isActive = True
s\a = a
s\b = b
s\c = c
Restore label
Read foo
Return Handle(s)
End Function
.label
Data (10 + 2), 12, 14
:
Function Sum3_(a_)
Local a.Sum3Obj = Object.Sum3Obj a_
Local return_ = a\a + a\b + a\c
Insert a Before First Sum3Obj :: a\isActive = False
Return return_
End Function
;~IDEal Editor Parameters:
;~C#Blitz3D

167
samples/Bluespec/TL.bsv Normal file
View File

@@ -0,0 +1,167 @@
package TL;
interface TL;
method Action ped_button_push();
(* always_enabled *)
method Action set_car_state_N(Bool x);
(* always_enabled *)
method Action set_car_state_S(Bool x);
(* always_enabled *)
method Action set_car_state_E(Bool x);
(* always_enabled *)
method Action set_car_state_W(Bool x);
method Bool lampRedNS();
method Bool lampAmberNS();
method Bool lampGreenNS();
method Bool lampRedE();
method Bool lampAmberE();
method Bool lampGreenE();
method Bool lampRedW();
method Bool lampAmberW();
method Bool lampGreenW();
method Bool lampRedPed();
method Bool lampAmberPed();
method Bool lampGreenPed();
endinterface: TL
typedef enum {
AllRed,
GreenNS, AmberNS,
GreenE, AmberE,
GreenW, AmberW,
GreenPed, AmberPed} TLstates deriving (Eq, Bits);
typedef UInt#(5) Time32;
typedef UInt#(20) CtrSize;
(* synthesize *)
module sysTL(TL);
Time32 allRedDelay = 2;
Time32 amberDelay = 4;
Time32 nsGreenDelay = 20;
Time32 ewGreenDelay = 10;
Time32 pedGreenDelay = 10;
Time32 pedAmberDelay = 6;
CtrSize clocks_per_sec = 100;
Reg#(TLstates) state <- mkReg(AllRed);
Reg#(TLstates) next_green <- mkReg(GreenNS);
Reg#(Time32) secs <- mkReg(0);
Reg#(Bool) ped_button_pushed <- mkReg(False);
Reg#(Bool) car_present_N <- mkReg(True);
Reg#(Bool) car_present_S <- mkReg(True);
Reg#(Bool) car_present_E <- mkReg(True);
Reg#(Bool) car_present_W <- mkReg(True);
Bool car_present_NS = car_present_N || car_present_S;
Reg#(CtrSize) cycle_ctr <- mkReg(0);
rule dec_cycle_ctr (cycle_ctr != 0);
cycle_ctr <= cycle_ctr - 1;
endrule
Rules low_priority_rule = (rules
rule inc_sec (cycle_ctr == 0);
secs <= secs + 1;
cycle_ctr <= clocks_per_sec;
endrule endrules);
function Action next_state(TLstates ns);
action
state <= ns;
secs <= 0;
endaction
endfunction: next_state
function TLstates green_seq(TLstates x);
case (x)
GreenNS: return (GreenE);
GreenE: return (GreenW);
GreenW: return (GreenNS);
endcase
endfunction
function Bool car_present(TLstates x);
case (x)
GreenNS: return (car_present_NS);
GreenE: return (car_present_E);
GreenW: return (car_present_W);
endcase
endfunction
function Rules make_from_green_rule(TLstates green_state, Time32 delay, Bool car_is_present, TLstates ns);
return (rules
rule from_green (state == green_state && (secs >= delay || !car_is_present));
next_state(ns);
endrule endrules);
endfunction: make_from_green_rule
function Rules make_from_amber_rule(TLstates amber_state, TLstates ng);
return (rules
rule from_amber (state == amber_state && secs >= amberDelay);
next_state(AllRed);
next_green <= ng;
endrule endrules);
endfunction: make_from_amber_rule
Rules hprs[7];
hprs[1] = make_from_green_rule(GreenNS, nsGreenDelay, car_present_NS, AmberNS);
hprs[2] = make_from_amber_rule(AmberNS, GreenE);
hprs[3] = make_from_green_rule(GreenE, ewGreenDelay, car_present_E, AmberE);
hprs[4] = make_from_amber_rule(AmberE, GreenW);
hprs[5] = make_from_green_rule(GreenW, ewGreenDelay, car_present_W, AmberW);
hprs[6] = make_from_amber_rule(AmberW, GreenNS);
hprs[0] = (rules
rule fromAllRed (state == AllRed && secs >= allRedDelay);
if (ped_button_pushed) action
ped_button_pushed <= False;
next_state(GreenPed);
endaction else if (car_present(next_green))
next_state(next_green);
else if (car_present(green_seq(next_green)))
next_state(green_seq(next_green));
else if (car_present(green_seq(green_seq(next_green))))
next_state(green_seq(green_seq(next_green)));
else
noAction;
endrule: fromAllRed endrules);
Rules high_priority_rules = hprs[0];
for (Integer i = 1; i<7; i=i+1)
high_priority_rules = rJoin(hprs[i], high_priority_rules);
addRules(preempts(high_priority_rules, low_priority_rule));
method Action ped_button_push();
ped_button_pushed <= True;
endmethod: ped_button_push
method Action set_car_state_N(b) ; car_present_N <= b; endmethod
method Action set_car_state_S(b) ; car_present_S <= b; endmethod
method Action set_car_state_E(b) ; car_present_E <= b; endmethod
method Action set_car_state_W(b) ; car_present_W <= b; endmethod
method lampRedNS() = (!(state == GreenNS || state == AmberNS));
method lampAmberNS() = (state == AmberNS);
method lampGreenNS() = (state == GreenNS);
method lampRedE() = (!(state == GreenE || state == AmberE));
method lampAmberE() = (state == AmberE);
method lampGreenE() = (state == GreenE);
method lampRedW() = (!(state == GreenW || state == AmberW));
method lampAmberW() = (state == AmberW);
method lampGreenW() = (state == GreenW);
method lampRedPed() = (!(state == GreenPed || state == AmberPed));
method lampAmberPed() = (state == AmberPed);
method lampGreenPed() = (state == GreenPed);
endmodule: sysTL
endpackage: TL

109
samples/Bluespec/TbTL.bsv Normal file
View File

@@ -0,0 +1,109 @@
package TbTL;
import TL::*;
interface Lamp;
method Bool changed;
method Action show_offs;
method Action show_ons;
method Action reset;
endinterface
module mkLamp#(String name, Bool lamp)(Lamp);
Reg#(Bool) prev <- mkReg(False);
method changed = (prev != lamp);
method Action show_offs;
if (prev && !lamp)
$write (name + " off, ");
endmethod
method Action show_ons;
if (!prev && lamp)
$write (name + " on, ");
endmethod
method Action reset;
prev <= lamp;
endmethod
endmodule
(* synthesize *)
module mkTest();
let dut <- sysTL;
Reg#(Bit#(16)) ctr <- mkReg(0);
Reg#(Bool) carN <- mkReg(False);
Reg#(Bool) carS <- mkReg(False);
Reg#(Bool) carE <- mkReg(False);
Reg#(Bool) carW <- mkReg(False);
Lamp lamps[12];
lamps[0] <- mkLamp("0: NS red ", dut.lampRedNS);
lamps[1] <- mkLamp("1: NS amber", dut.lampAmberNS);
lamps[2] <- mkLamp("2: NS green", dut.lampGreenNS);
lamps[3] <- mkLamp("3: E red ", dut.lampRedE);
lamps[4] <- mkLamp("4: E amber", dut.lampAmberE);
lamps[5] <- mkLamp("5: E green", dut.lampGreenE);
lamps[6] <- mkLamp("6: W red ", dut.lampRedW);
lamps[7] <- mkLamp("7: W amber", dut.lampAmberW);
lamps[8] <- mkLamp("8: W green", dut.lampGreenW);
lamps[9] <- mkLamp("9: Ped red ", dut.lampRedPed);
lamps[10] <- mkLamp("10: Ped amber", dut.lampAmberPed);
lamps[11] <- mkLamp("11: Ped green", dut.lampGreenPed);
rule start (ctr == 0);
$dumpvars;
endrule
rule detect_cars;
dut.set_car_state_N(carN);
dut.set_car_state_S(carS);
dut.set_car_state_E(carE);
dut.set_car_state_W(carW);
endrule
rule go;
ctr <= ctr + 1;
if (ctr == 5000) carN <= True;
if (ctr == 6500) carN <= False;
if (ctr == 12_000) dut.ped_button_push;
endrule
rule stop (ctr > 32768);
$display("TESTS FINISHED");
$finish(0);
endrule
function do_offs(l) = l.show_offs;
function do_ons(l) = l.show_ons;
function do_reset(l) = l.reset;
function do_it(f);
action
for (Integer i=0; i<12; i=i+1)
f(lamps[i]);
endaction
endfunction
function any_changes();
Bool b = False;
for (Integer i=0; i<12; i=i+1)
b = b || lamps[i].changed;
return b;
endfunction
rule show (any_changes());
do_it(do_offs);
do_it(do_ons);
do_it(do_reset);
$display("(at time %d)", $time);
endrule
endmodule
endpackage

View File

@@ -0,0 +1,305 @@
' *********************************************************
' ** Simple Grid Screen Demonstration App
' ** Jun 2010
' ** Copyright (c) 2010 Roku Inc. All Rights Reserved.
' *********************************************************
'************************************************************
'** Application startup
'************************************************************
Sub Main()
'initialize theme attributes like titles, logos and overhang color
initTheme()
gridstyle = "Flat-Movie"
'set to go, time to get started
while gridstyle <> ""
print "starting grid style= ";gridstyle
screen=preShowGridScreen(gridstyle)
gridstyle = showGridScreen(screen, gridstyle)
end while
End Sub
'*************************************************************
'** Set the configurable theme attributes for the application
'**
'** Configure the custom overhang and Logo attributes
'** These attributes affect the branding of the application
'** and are artwork, colors and offsets specific to the app
'*************************************************************
Sub initTheme()
app = CreateObject("roAppManager")
app.SetTheme(CreateDefaultTheme())
End Sub
'******************************************************
'** @return The default application theme.
'** Screens can make slight adjustments to the default
'** theme by getting it from here and then overriding
'** individual theme attributes.
'******************************************************
Function CreateDefaultTheme() as Object
theme = CreateObject("roAssociativeArray")
theme.ThemeType = "generic-dark"
' All these are greyscales
theme.GridScreenBackgroundColor = "#363636"
theme.GridScreenMessageColor = "#808080"
theme.GridScreenRetrievingColor = "#CCCCCC"
theme.GridScreenListNameColor = "#FFFFFF"
' Color values work here
theme.GridScreenDescriptionTitleColor = "#001090"
theme.GridScreenDescriptionDateColor = "#FF005B"
theme.GridScreenDescriptionRuntimeColor = "#5B005B"
theme.GridScreenDescriptionSynopsisColor = "#606000"
'used in the Grid Screen
theme.CounterTextLeft = "#FF0000"
theme.CounterSeparator = "#00FF00"
theme.CounterTextRight = "#0000FF"
theme.GridScreenLogoHD = "pkg:/images/Overhang_Test_HD.png"
theme.GridScreenLogoOffsetHD_X = "0"
theme.GridScreenLogoOffsetHD_Y = "0"
theme.GridScreenOverhangHeightHD = "99"
theme.GridScreenLogoSD = "pkg:/images/Overhang_Test_SD43.png"
theme.GridScreenOverhangHeightSD = "66"
theme.GridScreenLogoOffsetSD_X = "0"
theme.GridScreenLogoOffsetSD_Y = "0"
' to use your own focus ring artwork
'theme.GridScreenFocusBorderSD = "pkg:/images/GridCenter_Border_Movies_SD43.png"
'theme.GridScreenBorderOffsetSD = "(-26,-25)"
'theme.GridScreenFocusBorderHD = "pkg:/images/GridCenter_Border_Movies_HD.png"
'theme.GridScreenBorderOffsetHD = "(-28,-20)"
' to use your own description background artwork
'theme.GridScreenDescriptionImageSD = "pkg:/images/Grid_Description_Background_SD43.png"
'theme.GridScreenDescriptionOffsetSD = "(125,170)"
'theme.GridScreenDescriptionImageHD = "pkg:/images/Grid_Description_Background_HD.png"
'theme.GridScreenDescriptionOffsetHD = "(190,255)"
return theme
End Function
'******************************************************
'** Perform any startup/initialization stuff prior to
'** initially showing the screen.
'******************************************************
Function preShowGridScreen(style as string) As Object
m.port=CreateObject("roMessagePort")
screen = CreateObject("roGridScreen")
screen.SetMessagePort(m.port)
' screen.SetDisplayMode("best-fit")
screen.SetDisplayMode("scale-to-fill")
screen.SetGridStyle(style)
return screen
End Function
'******************************************************
'** Display the gird screen and wait for events from
'** the screen. The screen will show retreiving while
'** we fetch and parse the feeds for the show posters
'******************************************************
Function showGridScreen(screen As Object, gridstyle as string) As string
print "enter showGridScreen"
categoryList = getCategoryList()
categoryList[0] = "GridStyle: " + gridstyle
screen.setupLists(categoryList.count())
screen.SetListNames(categoryList)
StyleButtons = getGridControlButtons()
screen.SetContentList(0, StyleButtons)
for i = 1 to categoryList.count()-1
screen.SetContentList(i, getShowsForCategoryItem(categoryList[i]))
end for
screen.Show()
while true
print "Waiting for message"
msg = wait(0, m.port)
'msg = wait(0, screen.GetMessagePort()) ' getmessageport does not work on gridscreen
print "Got Message:";type(msg)
if type(msg) = "roGridScreenEvent" then
print "msg= "; msg.GetMessage() " , index= "; msg.GetIndex(); " data= "; msg.getData()
if msg.isListItemFocused() then
print"list item focused | current show = "; msg.GetIndex()
else if msg.isListItemSelected() then
row = msg.GetIndex()
selection = msg.getData()
print "list item selected row= "; row; " selection= "; selection
' Did we get a selection from the gridstyle selection row?
if (row = 0)
' yes, return so we can come back with new style
return StyleButtons[selection].Title
endif
'm.curShow = displayShowDetailScreen(showList[msg.GetIndex()])
else if msg.isScreenClosed() then
return ""
end if
end If
end while
End Function
'**********************************************************
'** When a poster on the home screen is selected, we call
'** this function passing an roAssociativeArray with the
'** ContentMetaData for the selected show. This data should
'** be sufficient for the springboard to display
'**********************************************************
Function displayShowDetailScreen(category as Object, showIndex as Integer) As Integer
'add code to create springboard, for now we do nothing
return 1
End Function
'**************************************************************
'** Return the list of categories to display in the filter
'** banner. The result is an roArray containing the names of
'** all of the categories. All just static data for the example.
'***************************************************************
Function getCategoryList() As Object
categoryList = [ "GridStyle", "Reality", "History", "News", "Comedy", "Drama"]
return categoryList
End Function
'********************************************************************
'** Given the category from the filter banner, return an array
'** of ContentMetaData objects (roAssociativeArray's) representing
'** the shows for the category. For this example, we just cheat and
'** create and return a static array with just the minimal items
'** set, but ideally, you'd go to a feed service, fetch and parse
'** this data dynamically, so content for each category is dynamic
'********************************************************************
Function getShowsForCategoryItem(category As Object) As Object
print "getting shows for category "; category
showList = [
{
Title: category + ": Header",
releaseDate: "1976",
length: 3600-600,
Description:"This row is category " + category,
hdBranded: true,
HDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/4/43/Gold_star_on_blue.gif",
SDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/4/43/Gold_star_on_blue.gif",
Description:"Short Synopsis #1",
Synopsis:"Length",
StarRating:10,
}
{
Title: category + ": Beverly Hillbillies",
releaseDate: "1969",
rating: "PG",
Description:"Come and listen to a story about a man named Jed: Poor mountaineer, barely kept his family fed. Then one day he was shootin at some food, and up through the ground came a bubblin crude. Oil that is, black gold, Texas tea.",
numEpisodes:42,
contentType:"season",
HDPosterUrl:"http://upload.wikimedia.org/wikipedia/en/4/4e/The_Beverly_Hillbillies.jpg",
SDPosterUrl:"http://upload.wikimedia.org/wikipedia/en/4/4e/The_Beverly_Hillbillies.jpg",
StarRating:80,
UserStarRating:40
}
{
Title: category + ": Babylon 5",
releaseDate: "1996",
rating: "PG",
Description:"The show centers on the Babylon 5 space station: a focal point for politics, diplomacy, and conflict during the years 2257-2262.",
numEpisodes:102,
contentType:"season",
HDPosterUrl:"http://upload.wikimedia.org/wikipedia/en/9/9d/Smb5-s4.jpg",
SDPosterUrl:"http://upload.wikimedia.org/wikipedia/en/9/9d/Smb5-s4.jpg",
StarRating:80,
UserStarRating:40
}
{
Title: category + ": John F. Kennedy",
releaseDate: "1961",
rating: "PG",
Description:"My fellow citizens of the world: ask not what America will do for you, but what together we can do for the freedom of man.",
HDPosterUrl:"http://upload.wikimedia.org/wikipedia/en/5/52/Jfk_happy_birthday_1.jpg",
SDPosterUrl:"http://upload.wikimedia.org/wikipedia/en/5/52/Jfk_happy_birthday_1.jpg",
StarRating:100
}
{
Title: category + ": Man on the Moon",
releaseDate: "1969",
rating: "PG",
Description:"That's one small step for a man, one giant leap for mankind.",
HDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/1/1e/Apollo_11_first_step.jpg",
SDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/1/1e/Apollo_11_first_step.jpg",
StarRating:100
}
{
Title: category + ": I have a Dream",
releaseDate: "1963",
rating: "PG",
Description:"I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin, but by the content of their character.",
HDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/8/81/Martin_Luther_King_-_March_on_Washington.jpg",
SDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/8/81/Martin_Luther_King_-_March_on_Washington.jpg",
StarRating:100
}
]
return showList
End Function
function getGridControlButtons() as object
buttons = [
{ Title: "Flat-Movie"
ReleaseDate: "HD:5x2 SD:5x2"
Description: "Flat-Movie (Netflix) style"
HDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/4/43/Gold_star_on_blue.gif"
SDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/4/43/Gold_star_on_blue.gif"
}
{ Title: "Flat-Landscape"
ReleaseDate: "HD:5x3 SD:4x3"
Description: "Channel Store"
HDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/thumb/9/96/Dunkery_Hill.jpg/800px-Dunkery_Hill.jpg",
SDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/thumb/9/96/Dunkery_Hill.jpg/800px-Dunkery_Hill.jpg",
}
{ Title: "Flat-Portrait"
ReleaseDate: "HD:5x2 SD:5x2"
Description: "3x4 style posters"
HDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/9/9f/Kane_George_Gurnett.jpg",
SDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/9/9f/Kane_George_Gurnett.jpg",
}
{ Title: "Flat-Square"
ReleaseDate: "HD:7x3 SD:6x3"
Description: "1x1 style posters"
HDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/thumb/d/de/SQUARE_SHAPE.svg/536px-SQUARE_SHAPE.svg.png",
SDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/thumb/d/de/SQUARE_SHAPE.svg/536px-SQUARE_SHAPE.svg.png",
}
{ Title: "Flat-16x9"
ReleaseDate: "HD:5x3 SD:4x3"
Description: "HD style posters"
HDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/thumb/2/22/%C3%89cran_TV_plat.svg/200px-%C3%89cran_TV_plat.svg.png",
SDPosterUrl:"http://upload.wikimedia.org/wikipedia/commons/thumb/2/22/%C3%89cran_TV_plat.svg/200px-%C3%89cran_TV_plat.svg.png",
}
]
return buttons
End Function

View File

@@ -1,39 +0,0 @@
void foo()
{
cudaArray* cu_array;
texture<float, 2, cudaReadModeElementType> tex;
// Allocate array
cudaChannelFormatDesc description = cudaCreateChannelDesc<float>();
cudaMallocArray(&cu_array, &description, width, height);
// Copy image data to array
cudaMemcpyToArray(cu_array, image, width*height*sizeof(float), cudaMemcpyHostToDevice);
// Set texture parameters (default)
tex.addressMode[0] = cudaAddressModeClamp;
tex.addressMode[1] = cudaAddressModeClamp;
tex.filterMode = cudaFilterModePoint;
tex.normalized = false; // do not normalize coordinates
// Bind the array to the texture
cudaBindTextureToArray(tex, cu_array);
// Run kernel
dim3 blockDim(16, 16, 1);
dim3 gridDim((width + blockDim.x - 1)/ blockDim.x, (height + blockDim.y - 1) / blockDim.y, 1);
kernel<<< gridDim, blockDim, 0 >>>(d_data, height, width);
// Unbind the array from the texture
cudaUnbindTexture(tex);
} //end foo()
__global__ void kernel(float* odata, int height, int width)
{
unsigned int x = blockIdx.x*blockDim.x + threadIdx.x;
unsigned int y = blockIdx.y*blockDim.y + threadIdx.y;
if (x < width && y < height) {
float c = tex2D(tex, x, y);
odata[y*width+x] = c;
}
}

69
samples/C++/gdsdbreader.h Normal file
View File

@@ -0,0 +1,69 @@
#ifndef GDSDBREADER_H
#define GDSDBREADER_H
// This file contains core structures, classes and types for the entire gds app
// WARNING: DO NOT MODIFY UNLESS IT'S STRICTLY NECESSARY
#include <QDir>
#include "diagramwidget/qgldiagramwidget.h"
#define GDS_DIR "gdsdata"
enum level {LEVEL_ONE, LEVEL_TWO, LEVEL_THREE};
// The internal structure of the db to store information about each node (each level)
// this will be serialized before being written to file
class dbDataStructure
{
public:
QString label;
quint32 depth;
quint32 userIndex;
QByteArray data; // This is COMPRESSED data to save RAM and disk space; it is decompressed
// just when needed (to display the comments)
// The following ID is used to create second-third level files
quint64 uniqueID;
// All the next items linked to this one
QVector<dbDataStructure*> nextItems;
// Corresponding indices vector (used to store data)
QVector<quint32> nextItemsIndices;
// The father element (or NULL if it's root)
dbDataStructure* father;
// Corresponding indices vector (used to store data)
quint32 fatherIndex;
bool noFatherRoot; // Used to tell whether this node is the root (and so has no father)
// These fields will be useful for levels 2 and 3
QString fileName; // Relative filename for the associated code file
QByteArray firstLineData; // Compressed first line data, this will be used with the line number to retrieve info
QVector<quint32> linesNumbers; // First and next lines (next are relative to the first) numbers
// -- Generic system data not to be stored on disk
void *glPointer; // GL pointer
// These operator overrides prevent the glPointer and other non-disk-necessary data serialization
friend QDataStream& operator<<(QDataStream& stream, const dbDataStructure& myclass)
// Notice: this function has to be a "friend" because it cannot be a member function; member functions
// take a hidden "this" parameter, which has no place in the argument list of this operator overload.
// A friend function has full access to the private data of the class without the "this" argument
{
// Don't write glPointer or any pointer-dependent structure
return stream << myclass.label << myclass.depth << myclass.userIndex << qCompress(myclass.data)
<< myclass.uniqueID << myclass.nextItemsIndices << myclass.fatherIndex << myclass.noFatherRoot
<< myclass.fileName << qCompress(myclass.firstLineData) << myclass.linesNumbers;
}
friend QDataStream& operator>>(QDataStream& stream, dbDataStructure& myclass)
{
//Don't read it, either
stream >> myclass.label >> myclass.depth >> myclass.userIndex >> myclass.data
>> myclass.uniqueID >> myclass.nextItemsIndices >> myclass.fatherIndex >> myclass.noFatherRoot
>> myclass.fileName >> myclass.firstLineData >> myclass.linesNumbers;
myclass.data = qUncompress(myclass.data);
myclass.firstLineData = qUncompress(myclass.firstLineData);
return stream;
}
};
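// A minimal serialization sketch using the stream operators above; the helper
// name, the file path and the QFile/QDataStream usage are illustrative
// assumptions only, not part of the gds app itself:
//
//   #include <QFile>
//   #include <QDataStream>
//
//   void saveNode(const dbDataStructure &node)
//   {
//       QFile f(QString(GDS_DIR) + "/node.bin");      // hypothetical path
//       if (f.open(QIODevice::WriteOnly)) {
//           QDataStream out(&f);
//           out << node;   // glPointer and other runtime-only data are not written
//       }
//   }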
#endif // GDSDBREADER_H

View File

@@ -0,0 +1,327 @@
// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: protocol-buffer.proto
#define INTERNAL_SUPPRESS_PROTOBUF_FIELD_DEPRECATION
#include "protocol-buffer.pb.h"
#include <algorithm>
#include <google/protobuf/stubs/common.h>
#include <google/protobuf/stubs/once.h>
#include <google/protobuf/io/coded_stream.h>
#include <google/protobuf/wire_format_lite_inl.h>
#include <google/protobuf/descriptor.h>
#include <google/protobuf/generated_message_reflection.h>
#include <google/protobuf/reflection_ops.h>
#include <google/protobuf/wire_format.h>
// @@protoc_insertion_point(includes)
namespace persons {
namespace {
const ::google::protobuf::Descriptor* Person_descriptor_ = NULL;
const ::google::protobuf::internal::GeneratedMessageReflection*
Person_reflection_ = NULL;
} // namespace
void protobuf_AssignDesc_protocol_2dbuffer_2eproto() {
protobuf_AddDesc_protocol_2dbuffer_2eproto();
const ::google::protobuf::FileDescriptor* file =
::google::protobuf::DescriptorPool::generated_pool()->FindFileByName(
"protocol-buffer.proto");
GOOGLE_CHECK(file != NULL);
Person_descriptor_ = file->message_type(0);
static const int Person_offsets_[1] = {
GOOGLE_PROTOBUF_GENERATED_MESSAGE_FIELD_OFFSET(Person, name_),
};
Person_reflection_ =
new ::google::protobuf::internal::GeneratedMessageReflection(
Person_descriptor_,
Person::default_instance_,
Person_offsets_,
GOOGLE_PROTOBUF_GENERATED_MESSAGE_FIELD_OFFSET(Person, _has_bits_[0]),
GOOGLE_PROTOBUF_GENERATED_MESSAGE_FIELD_OFFSET(Person, _unknown_fields_),
-1,
::google::protobuf::DescriptorPool::generated_pool(),
::google::protobuf::MessageFactory::generated_factory(),
sizeof(Person));
}
namespace {
GOOGLE_PROTOBUF_DECLARE_ONCE(protobuf_AssignDescriptors_once_);
inline void protobuf_AssignDescriptorsOnce() {
::google::protobuf::GoogleOnceInit(&protobuf_AssignDescriptors_once_,
&protobuf_AssignDesc_protocol_2dbuffer_2eproto);
}
void protobuf_RegisterTypes(const ::std::string&) {
protobuf_AssignDescriptorsOnce();
::google::protobuf::MessageFactory::InternalRegisterGeneratedMessage(
Person_descriptor_, &Person::default_instance());
}
} // namespace
void protobuf_ShutdownFile_protocol_2dbuffer_2eproto() {
delete Person::default_instance_;
delete Person_reflection_;
}
void protobuf_AddDesc_protocol_2dbuffer_2eproto() {
static bool already_here = false;
if (already_here) return;
already_here = true;
GOOGLE_PROTOBUF_VERIFY_VERSION;
::google::protobuf::DescriptorPool::InternalAddGeneratedFile(
"\n\025protocol-buffer.proto\022\007persons\"\026\n\006Pers"
"on\022\014\n\004name\030\001 \002(\t", 56);
::google::protobuf::MessageFactory::InternalRegisterGeneratedFile(
"protocol-buffer.proto", &protobuf_RegisterTypes);
Person::default_instance_ = new Person();
Person::default_instance_->InitAsDefaultInstance();
::google::protobuf::internal::OnShutdown(&protobuf_ShutdownFile_protocol_2dbuffer_2eproto);
}
// Force AddDescriptors() to be called at static initialization time.
struct StaticDescriptorInitializer_protocol_2dbuffer_2eproto {
StaticDescriptorInitializer_protocol_2dbuffer_2eproto() {
protobuf_AddDesc_protocol_2dbuffer_2eproto();
}
} static_descriptor_initializer_protocol_2dbuffer_2eproto_;
// ===================================================================
#ifndef _MSC_VER
const int Person::kNameFieldNumber;
#endif // !_MSC_VER
Person::Person()
: ::google::protobuf::Message() {
SharedCtor();
}
void Person::InitAsDefaultInstance() {
}
Person::Person(const Person& from)
: ::google::protobuf::Message() {
SharedCtor();
MergeFrom(from);
}
void Person::SharedCtor() {
_cached_size_ = 0;
name_ = const_cast< ::std::string*>(&::google::protobuf::internal::kEmptyString);
::memset(_has_bits_, 0, sizeof(_has_bits_));
}
Person::~Person() {
SharedDtor();
}
void Person::SharedDtor() {
if (name_ != &::google::protobuf::internal::kEmptyString) {
delete name_;
}
if (this != default_instance_) {
}
}
void Person::SetCachedSize(int size) const {
GOOGLE_SAFE_CONCURRENT_WRITES_BEGIN();
_cached_size_ = size;
GOOGLE_SAFE_CONCURRENT_WRITES_END();
}
const ::google::protobuf::Descriptor* Person::descriptor() {
protobuf_AssignDescriptorsOnce();
return Person_descriptor_;
}
const Person& Person::default_instance() {
if (default_instance_ == NULL) protobuf_AddDesc_protocol_2dbuffer_2eproto();
return *default_instance_;
}
Person* Person::default_instance_ = NULL;
Person* Person::New() const {
return new Person;
}
void Person::Clear() {
if (_has_bits_[0 / 32] & (0xffu << (0 % 32))) {
if (has_name()) {
if (name_ != &::google::protobuf::internal::kEmptyString) {
name_->clear();
}
}
}
::memset(_has_bits_, 0, sizeof(_has_bits_));
mutable_unknown_fields()->Clear();
}
bool Person::MergePartialFromCodedStream(
::google::protobuf::io::CodedInputStream* input) {
#define DO_(EXPRESSION) if (!(EXPRESSION)) return false
::google::protobuf::uint32 tag;
while ((tag = input->ReadTag()) != 0) {
switch (::google::protobuf::internal::WireFormatLite::GetTagFieldNumber(tag)) {
// required string name = 1;
case 1: {
if (::google::protobuf::internal::WireFormatLite::GetTagWireType(tag) ==
::google::protobuf::internal::WireFormatLite::WIRETYPE_LENGTH_DELIMITED) {
DO_(::google::protobuf::internal::WireFormatLite::ReadString(
input, this->mutable_name()));
::google::protobuf::internal::WireFormat::VerifyUTF8String(
this->name().data(), this->name().length(),
::google::protobuf::internal::WireFormat::PARSE);
} else {
goto handle_uninterpreted;
}
if (input->ExpectAtEnd()) return true;
break;
}
default: {
handle_uninterpreted:
if (::google::protobuf::internal::WireFormatLite::GetTagWireType(tag) ==
::google::protobuf::internal::WireFormatLite::WIRETYPE_END_GROUP) {
return true;
}
DO_(::google::protobuf::internal::WireFormat::SkipField(
input, tag, mutable_unknown_fields()));
break;
}
}
}
return true;
#undef DO_
}
void Person::SerializeWithCachedSizes(
::google::protobuf::io::CodedOutputStream* output) const {
// required string name = 1;
if (has_name()) {
::google::protobuf::internal::WireFormat::VerifyUTF8String(
this->name().data(), this->name().length(),
::google::protobuf::internal::WireFormat::SERIALIZE);
::google::protobuf::internal::WireFormatLite::WriteString(
1, this->name(), output);
}
if (!unknown_fields().empty()) {
::google::protobuf::internal::WireFormat::SerializeUnknownFields(
unknown_fields(), output);
}
}
::google::protobuf::uint8* Person::SerializeWithCachedSizesToArray(
::google::protobuf::uint8* target) const {
// required string name = 1;
if (has_name()) {
::google::protobuf::internal::WireFormat::VerifyUTF8String(
this->name().data(), this->name().length(),
::google::protobuf::internal::WireFormat::SERIALIZE);
target =
::google::protobuf::internal::WireFormatLite::WriteStringToArray(
1, this->name(), target);
}
if (!unknown_fields().empty()) {
target = ::google::protobuf::internal::WireFormat::SerializeUnknownFieldsToArray(
unknown_fields(), target);
}
return target;
}
int Person::ByteSize() const {
int total_size = 0;
if (_has_bits_[0 / 32] & (0xffu << (0 % 32))) {
// required string name = 1;
if (has_name()) {
total_size += 1 +
::google::protobuf::internal::WireFormatLite::StringSize(
this->name());
}
}
if (!unknown_fields().empty()) {
total_size +=
::google::protobuf::internal::WireFormat::ComputeUnknownFieldsSize(
unknown_fields());
}
GOOGLE_SAFE_CONCURRENT_WRITES_BEGIN();
_cached_size_ = total_size;
GOOGLE_SAFE_CONCURRENT_WRITES_END();
return total_size;
}
void Person::MergeFrom(const ::google::protobuf::Message& from) {
GOOGLE_CHECK_NE(&from, this);
const Person* source =
::google::protobuf::internal::dynamic_cast_if_available<const Person*>(
&from);
if (source == NULL) {
::google::protobuf::internal::ReflectionOps::Merge(from, this);
} else {
MergeFrom(*source);
}
}
void Person::MergeFrom(const Person& from) {
GOOGLE_CHECK_NE(&from, this);
if (from._has_bits_[0 / 32] & (0xffu << (0 % 32))) {
if (from.has_name()) {
set_name(from.name());
}
}
mutable_unknown_fields()->MergeFrom(from.unknown_fields());
}
void Person::CopyFrom(const ::google::protobuf::Message& from) {
if (&from == this) return;
Clear();
MergeFrom(from);
}
void Person::CopyFrom(const Person& from) {
if (&from == this) return;
Clear();
MergeFrom(from);
}
bool Person::IsInitialized() const {
if ((_has_bits_[0] & 0x00000001) != 0x00000001) return false;
return true;
}
void Person::Swap(Person* other) {
if (other != this) {
std::swap(name_, other->name_);
std::swap(_has_bits_[0], other->_has_bits_[0]);
_unknown_fields_.Swap(&other->_unknown_fields_);
std::swap(_cached_size_, other->_cached_size_);
}
}
::google::protobuf::Metadata Person::GetMetadata() const {
protobuf_AssignDescriptorsOnce();
::google::protobuf::Metadata metadata;
metadata.descriptor = Person_descriptor_;
metadata.reflection = Person_reflection_;
return metadata;
}
// @@protoc_insertion_point(namespace_scope)
} // namespace persons
// @@protoc_insertion_point(global_scope)

View File

@@ -0,0 +1,218 @@
// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: protocol-buffer.proto
#ifndef PROTOBUF_protocol_2dbuffer_2eproto__INCLUDED
#define PROTOBUF_protocol_2dbuffer_2eproto__INCLUDED
#include <string>
#include <google/protobuf/stubs/common.h>
#if GOOGLE_PROTOBUF_VERSION < 2005000
#error This file was generated by a newer version of protoc which is
#error incompatible with your Protocol Buffer headers. Please update
#error your headers.
#endif
#if 2005000 < GOOGLE_PROTOBUF_MIN_PROTOC_VERSION
#error This file was generated by an older version of protoc which is
#error incompatible with your Protocol Buffer headers. Please
#error regenerate this file with a newer version of protoc.
#endif
#include <google/protobuf/generated_message_util.h>
#include <google/protobuf/message.h>
#include <google/protobuf/repeated_field.h>
#include <google/protobuf/extension_set.h>
#include <google/protobuf/unknown_field_set.h>
// @@protoc_insertion_point(includes)
namespace persons {
// Internal implementation detail -- do not call these.
void protobuf_AddDesc_protocol_2dbuffer_2eproto();
void protobuf_AssignDesc_protocol_2dbuffer_2eproto();
void protobuf_ShutdownFile_protocol_2dbuffer_2eproto();
class Person;
// ===================================================================
class Person : public ::google::protobuf::Message {
public:
Person();
virtual ~Person();
Person(const Person& from);
inline Person& operator=(const Person& from) {
CopyFrom(from);
return *this;
}
inline const ::google::protobuf::UnknownFieldSet& unknown_fields() const {
return _unknown_fields_;
}
inline ::google::protobuf::UnknownFieldSet* mutable_unknown_fields() {
return &_unknown_fields_;
}
static const ::google::protobuf::Descriptor* descriptor();
static const Person& default_instance();
void Swap(Person* other);
// implements Message ----------------------------------------------
Person* New() const;
void CopyFrom(const ::google::protobuf::Message& from);
void MergeFrom(const ::google::protobuf::Message& from);
void CopyFrom(const Person& from);
void MergeFrom(const Person& from);
void Clear();
bool IsInitialized() const;
int ByteSize() const;
bool MergePartialFromCodedStream(
::google::protobuf::io::CodedInputStream* input);
void SerializeWithCachedSizes(
::google::protobuf::io::CodedOutputStream* output) const;
::google::protobuf::uint8* SerializeWithCachedSizesToArray(::google::protobuf::uint8* output) const;
int GetCachedSize() const { return _cached_size_; }
private:
void SharedCtor();
void SharedDtor();
void SetCachedSize(int size) const;
public:
::google::protobuf::Metadata GetMetadata() const;
// nested types ----------------------------------------------------
// accessors -------------------------------------------------------
// required string name = 1;
inline bool has_name() const;
inline void clear_name();
static const int kNameFieldNumber = 1;
inline const ::std::string& name() const;
inline void set_name(const ::std::string& value);
inline void set_name(const char* value);
inline void set_name(const char* value, size_t size);
inline ::std::string* mutable_name();
inline ::std::string* release_name();
inline void set_allocated_name(::std::string* name);
// @@protoc_insertion_point(class_scope:persons.Person)
private:
inline void set_has_name();
inline void clear_has_name();
::google::protobuf::UnknownFieldSet _unknown_fields_;
::std::string* name_;
mutable int _cached_size_;
::google::protobuf::uint32 _has_bits_[(1 + 31) / 32];
friend void protobuf_AddDesc_protocol_2dbuffer_2eproto();
friend void protobuf_AssignDesc_protocol_2dbuffer_2eproto();
friend void protobuf_ShutdownFile_protocol_2dbuffer_2eproto();
void InitAsDefaultInstance();
static Person* default_instance_;
};
// ===================================================================
// ===================================================================
// Person
// required string name = 1;
inline bool Person::has_name() const {
return (_has_bits_[0] & 0x00000001u) != 0;
}
inline void Person::set_has_name() {
_has_bits_[0] |= 0x00000001u;
}
inline void Person::clear_has_name() {
_has_bits_[0] &= ~0x00000001u;
}
inline void Person::clear_name() {
if (name_ != &::google::protobuf::internal::kEmptyString) {
name_->clear();
}
clear_has_name();
}
inline const ::std::string& Person::name() const {
return *name_;
}
inline void Person::set_name(const ::std::string& value) {
set_has_name();
if (name_ == &::google::protobuf::internal::kEmptyString) {
name_ = new ::std::string;
}
name_->assign(value);
}
inline void Person::set_name(const char* value) {
set_has_name();
if (name_ == &::google::protobuf::internal::kEmptyString) {
name_ = new ::std::string;
}
name_->assign(value);
}
inline void Person::set_name(const char* value, size_t size) {
set_has_name();
if (name_ == &::google::protobuf::internal::kEmptyString) {
name_ = new ::std::string;
}
name_->assign(reinterpret_cast<const char*>(value), size);
}
inline ::std::string* Person::mutable_name() {
set_has_name();
if (name_ == &::google::protobuf::internal::kEmptyString) {
name_ = new ::std::string;
}
return name_;
}
inline ::std::string* Person::release_name() {
clear_has_name();
if (name_ == &::google::protobuf::internal::kEmptyString) {
return NULL;
} else {
::std::string* temp = name_;
name_ = const_cast< ::std::string*>(&::google::protobuf::internal::kEmptyString);
return temp;
}
}
inline void Person::set_allocated_name(::std::string* name) {
if (name_ != &::google::protobuf::internal::kEmptyString) {
delete name_;
}
if (name) {
set_has_name();
name_ = name;
} else {
clear_has_name();
name_ = const_cast< ::std::string*>(&::google::protobuf::internal::kEmptyString);
}
}
// @@protoc_insertion_point(namespace_scope)
} // namespace persons
#ifndef SWIG
namespace google {
namespace protobuf {
} // namespace google
} // namespace protobuf
#endif // SWIG
// @@protoc_insertion_point(global_scope)
#endif // PROTOBUF_protocol_2dbuffer_2eproto__INCLUDED
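
The generated Person class above exposes the usual proto2 message surface: the name accessors declared in the header plus the serialization methods inherited from ::google::protobuf::Message. A minimal usage sketch, assuming the generated header is named protocol-buffer.pb.h and the program links against libprotobuf:

#include <string>
#include "protocol-buffer.pb.h"  // assumed name of the generated header

int main() {
    persons::Person person;
    person.set_name("Ada");                    // required string name = 1

    std::string wire;
    person.SerializeToString(&wire);           // inherited from ::google::protobuf::Message

    persons::Person copy;
    if (copy.ParseFromString(wire) && copy.has_name()) {
        // copy.name() now holds "Ada"
    }
    return 0;
}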

415
samples/C++/qscicommand.h Normal file

@@ -0,0 +1,415 @@
// This defines the interface to the QsciCommand class.
//
// Copyright (c) 2011 Riverbank Computing Limited <info@riverbankcomputing.com>
//
// This file is part of QScintilla.
//
// This file may be used under the terms of the GNU General Public
// License versions 2.0 or 3.0 as published by the Free Software
// Foundation and appearing in the files LICENSE.GPL2 and LICENSE.GPL3
// included in the packaging of this file. Alternatively you may (at
// your option) use any later version of the GNU General Public
// License if such license has been publicly approved by Riverbank
// Computing Limited (or its successors, if any) and the KDE Free Qt
// Foundation. In addition, as a special exception, Riverbank gives you
// certain additional rights. These rights are described in the Riverbank
// GPL Exception version 1.1, which can be found in the file
// GPL_EXCEPTION.txt in this package.
//
// If you are unsure which license is appropriate for your use, please
// contact the sales department at sales@riverbankcomputing.com.
//
// This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE
// WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
#ifndef QSCICOMMAND_H
#define QSCICOMMAND_H
#ifdef __APPLE__
extern "C++" {
#endif
#include <qstring.h>
#include <Qsci/qsciglobal.h>
#include <Qsci/qsciscintillabase.h>
class QsciScintilla;
//! \brief The QsciCommand class represents an internal editor command that may
//! have one or two keys bound to it.
//!
//! Methods are provided to change the keys bound to the command and to remove
//! a key binding. Each command has a user friendly description of the command
//! for use in key mapping dialogs.
class QSCINTILLA_EXPORT QsciCommand
{
public:
//! This enum defines the different commands that can be assigned to a key.
enum Command {
//! Move down one line.
LineDown = QsciScintillaBase::SCI_LINEDOWN,
//! Extend the selection down one line.
LineDownExtend = QsciScintillaBase::SCI_LINEDOWNEXTEND,
//! Extend the rectangular selection down one line.
LineDownRectExtend = QsciScintillaBase::SCI_LINEDOWNRECTEXTEND,
//! Scroll the view down one line.
LineScrollDown = QsciScintillaBase::SCI_LINESCROLLDOWN,
//! Move up one line.
LineUp = QsciScintillaBase::SCI_LINEUP,
//! Extend the selection up one line.
LineUpExtend = QsciScintillaBase::SCI_LINEUPEXTEND,
//! Extend the rectangular selection up one line.
LineUpRectExtend = QsciScintillaBase::SCI_LINEUPRECTEXTEND,
//! Scroll the view up one line.
LineScrollUp = QsciScintillaBase::SCI_LINESCROLLUP,
//! Scroll to the start of the document.
ScrollToStart = QsciScintillaBase::SCI_SCROLLTOSTART,
//! Scroll to the end of the document.
ScrollToEnd = QsciScintillaBase::SCI_SCROLLTOEND,
//! Scroll vertically to centre the current line.
VerticalCentreCaret = QsciScintillaBase::SCI_VERTICALCENTRECARET,
//! Move down one paragraph.
ParaDown = QsciScintillaBase::SCI_PARADOWN,
//! Extend the selection down one paragraph.
ParaDownExtend = QsciScintillaBase::SCI_PARADOWNEXTEND,
//! Move up one paragraph.
ParaUp = QsciScintillaBase::SCI_PARAUP,
//! Extend the selection up one paragraph.
ParaUpExtend = QsciScintillaBase::SCI_PARAUPEXTEND,
//! Move left one character.
CharLeft = QsciScintillaBase::SCI_CHARLEFT,
//! Extend the selection left one character.
CharLeftExtend = QsciScintillaBase::SCI_CHARLEFTEXTEND,
//! Extend the rectangular selection left one character.
CharLeftRectExtend = QsciScintillaBase::SCI_CHARLEFTRECTEXTEND,
//! Move right one character.
CharRight = QsciScintillaBase::SCI_CHARRIGHT,
//! Extend the selection right one character.
CharRightExtend = QsciScintillaBase::SCI_CHARRIGHTEXTEND,
//! Extend the rectangular selection right one character.
CharRightRectExtend = QsciScintillaBase::SCI_CHARRIGHTRECTEXTEND,
//! Move left one word.
WordLeft = QsciScintillaBase::SCI_WORDLEFT,
//! Extend the selection left one word.
WordLeftExtend = QsciScintillaBase::SCI_WORDLEFTEXTEND,
//! Move right one word.
WordRight = QsciScintillaBase::SCI_WORDRIGHT,
//! Extend the selection right one word.
WordRightExtend = QsciScintillaBase::SCI_WORDRIGHTEXTEND,
//! Move to the end of the previous word.
WordLeftEnd = QsciScintillaBase::SCI_WORDLEFTEND,
//! Extend the selection to the end of the previous word.
WordLeftEndExtend = QsciScintillaBase::SCI_WORDLEFTENDEXTEND,
//! Move to the end of the next word.
WordRightEnd = QsciScintillaBase::SCI_WORDRIGHTEND,
//! Extend the selection to the end of the next word.
WordRightEndExtend = QsciScintillaBase::SCI_WORDRIGHTENDEXTEND,
//! Move left one word part.
WordPartLeft = QsciScintillaBase::SCI_WORDPARTLEFT,
//! Extend the selection left one word part.
WordPartLeftExtend = QsciScintillaBase::SCI_WORDPARTLEFTEXTEND,
//! Move right one word part.
WordPartRight = QsciScintillaBase::SCI_WORDPARTRIGHT,
//! Extend the selection right one word part.
WordPartRightExtend = QsciScintillaBase::SCI_WORDPARTRIGHTEXTEND,
//! Move to the start of the document line.
Home = QsciScintillaBase::SCI_HOME,
//! Extend the selection to the start of the document line.
HomeExtend = QsciScintillaBase::SCI_HOMEEXTEND,
//! Extend the rectangular selection to the start of the document line.
HomeRectExtend = QsciScintillaBase::SCI_HOMERECTEXTEND,
//! Move to the start of the displayed line.
HomeDisplay = QsciScintillaBase::SCI_HOMEDISPLAY,
//! Extend the selection to the start of the displayed line.
HomeDisplayExtend = QsciScintillaBase::SCI_HOMEDISPLAYEXTEND,
//! Move to the start of the displayed or document line.
HomeWrap = QsciScintillaBase::SCI_HOMEWRAP,
//! Extend the selection to the start of the displayed or document
//! line.
HomeWrapExtend = QsciScintillaBase::SCI_HOMEWRAPEXTEND,
//! Move to the first visible character in the document line.
VCHome = QsciScintillaBase::SCI_VCHOME,
//! Extend the selection to the first visible character in the document
//! line.
VCHomeExtend = QsciScintillaBase::SCI_VCHOMEEXTEND,
//! Extend the rectangular selection to the first visible character in
//! the document line.
VCHomeRectExtend = QsciScintillaBase::SCI_VCHOMERECTEXTEND,
//! Move to the first visible character of the displayed or document
//! line.
VCHomeWrap = QsciScintillaBase::SCI_VCHOMEWRAP,
//! Extend the selection to the first visible character of the
//! displayed or document line.
VCHomeWrapExtend = QsciScintillaBase::SCI_VCHOMEWRAPEXTEND,
//! Move to the end of the document line.
LineEnd = QsciScintillaBase::SCI_LINEEND,
//! Extend the selection to the end of the document line.
LineEndExtend = QsciScintillaBase::SCI_LINEENDEXTEND,
//! Extend the rectangular selection to the end of the document line.
LineEndRectExtend = QsciScintillaBase::SCI_LINEENDRECTEXTEND,
//! Move to the end of the displayed line.
LineEndDisplay = QsciScintillaBase::SCI_LINEENDDISPLAY,
//! Extend the selection to the end of the displayed line.
LineEndDisplayExtend = QsciScintillaBase::SCI_LINEENDDISPLAYEXTEND,
//! Move to the end of the displayed or document line.
LineEndWrap = QsciScintillaBase::SCI_LINEENDWRAP,
//! Extend the selection to the end of the displayed or document line.
LineEndWrapExtend = QsciScintillaBase::SCI_LINEENDWRAPEXTEND,
//! Move to the start of the document.
DocumentStart = QsciScintillaBase::SCI_DOCUMENTSTART,
//! Extend the selection to the start of the document.
DocumentStartExtend = QsciScintillaBase::SCI_DOCUMENTSTARTEXTEND,
//! Move to the end of the document.
DocumentEnd = QsciScintillaBase::SCI_DOCUMENTEND,
//! Extend the selection to the end of the document.
DocumentEndExtend = QsciScintillaBase::SCI_DOCUMENTENDEXTEND,
//! Move up one page.
PageUp = QsciScintillaBase::SCI_PAGEUP,
//! Extend the selection up one page.
PageUpExtend = QsciScintillaBase::SCI_PAGEUPEXTEND,
//! Extend the rectangular selection up one page.
PageUpRectExtend = QsciScintillaBase::SCI_PAGEUPRECTEXTEND,
//! Move down one page.
PageDown = QsciScintillaBase::SCI_PAGEDOWN,
//! Extend the selection down one page.
PageDownExtend = QsciScintillaBase::SCI_PAGEDOWNEXTEND,
//! Extend the rectangular selection down one page.
PageDownRectExtend = QsciScintillaBase::SCI_PAGEDOWNRECTEXTEND,
//! Stuttered move up one page.
StutteredPageUp = QsciScintillaBase::SCI_STUTTEREDPAGEUP,
//! Stuttered extend the selection up one page.
StutteredPageUpExtend = QsciScintillaBase::SCI_STUTTEREDPAGEUPEXTEND,
//! Stuttered move down one page.
StutteredPageDown = QsciScintillaBase::SCI_STUTTEREDPAGEDOWN,
//! Stuttered extend the selection down one page.
StutteredPageDownExtend = QsciScintillaBase::SCI_STUTTEREDPAGEDOWNEXTEND,
//! Delete the current character.
Delete = QsciScintillaBase::SCI_CLEAR,
//! Delete the previous character.
DeleteBack = QsciScintillaBase::SCI_DELETEBACK,
//! Delete the previous character if not at start of line.
DeleteBackNotLine = QsciScintillaBase::SCI_DELETEBACKNOTLINE,
//! Delete the word to the left.
DeleteWordLeft = QsciScintillaBase::SCI_DELWORDLEFT,
//! Delete the word to the right.
DeleteWordRight = QsciScintillaBase::SCI_DELWORDRIGHT,
//! Delete right to the end of the next word.
DeleteWordRightEnd = QsciScintillaBase::SCI_DELWORDRIGHTEND,
//! Delete the line to the left.
DeleteLineLeft = QsciScintillaBase::SCI_DELLINELEFT,
//! Delete the line to the right.
DeleteLineRight = QsciScintillaBase::SCI_DELLINERIGHT,
//! Delete the current line.
LineDelete = QsciScintillaBase::SCI_LINEDELETE,
//! Cut the current line to the clipboard.
LineCut = QsciScintillaBase::SCI_LINECUT,
//! Copy the current line to the clipboard.
LineCopy = QsciScintillaBase::SCI_LINECOPY,
//! Transpose the current and previous lines.
LineTranspose = QsciScintillaBase::SCI_LINETRANSPOSE,
//! Duplicate the current line.
LineDuplicate = QsciScintillaBase::SCI_LINEDUPLICATE,
//! Select the whole document.
SelectAll = QsciScintillaBase::SCI_SELECTALL,
//! Move the selected lines up one line.
MoveSelectedLinesUp = QsciScintillaBase::SCI_MOVESELECTEDLINESUP,
//! Move the selected lines down one line.
MoveSelectedLinesDown = QsciScintillaBase::SCI_MOVESELECTEDLINESDOWN,
//! Duplicate the selection.
SelectionDuplicate = QsciScintillaBase::SCI_SELECTIONDUPLICATE,
//! Convert the selection to lower case.
SelectionLowerCase = QsciScintillaBase::SCI_LOWERCASE,
//! Convert the selection to upper case.
SelectionUpperCase = QsciScintillaBase::SCI_UPPERCASE,
//! Cut the selection to the clipboard.
SelectionCut = QsciScintillaBase::SCI_CUT,
//! Copy the selection to the clipboard.
SelectionCopy = QsciScintillaBase::SCI_COPY,
//! Paste from the clipboard.
Paste = QsciScintillaBase::SCI_PASTE,
//! Toggle insert/overtype.
EditToggleOvertype = QsciScintillaBase::SCI_EDITTOGGLEOVERTYPE,
//! Insert a platform dependent newline.
Newline = QsciScintillaBase::SCI_NEWLINE,
//! Insert a formfeed.
Formfeed = QsciScintillaBase::SCI_FORMFEED,
//! Indent one level.
Tab = QsciScintillaBase::SCI_TAB,
//! De-indent one level.
Backtab = QsciScintillaBase::SCI_BACKTAB,
//! Cancel any current operation.
Cancel = QsciScintillaBase::SCI_CANCEL,
//! Undo the last command.
Undo = QsciScintillaBase::SCI_UNDO,
//! Redo the last command.
Redo = QsciScintillaBase::SCI_REDO,
//! Zoom in.
ZoomIn = QsciScintillaBase::SCI_ZOOMIN,
//! Zoom out.
ZoomOut = QsciScintillaBase::SCI_ZOOMOUT,
};
//! Return the command that will be executed by this instance.
Command command() const {return scicmd;}
//! Execute the command.
void execute();
//! Binds the key \a key to the command. If \a key is 0 then the key
//! binding is removed. If \a key is invalid then the key binding is
//! unchanged. Valid keys are any visible or control character or any
//! of \c Key_Down, \c Key_Up, \c Key_Left, \c Key_Right, \c Key_Home,
//! \c Key_End, \c Key_PageUp, \c Key_PageDown, \c Key_Delete,
//! \c Key_Insert, \c Key_Escape, \c Key_Backspace, \c Key_Tab and
//! \c Key_Return. Keys may be modified with any combination of \c SHIFT,
//! \c CTRL, \c ALT and \c META.
//!
//! \sa key(), setAlternateKey(), validKey()
void setKey(int key);
//! Binds the alternate key \a altkey to the command. If \a key is 0
//! then the alternate key binding is removed.
//!
//! \sa alternateKey(), setKey(), validKey()
void setAlternateKey(int altkey);
//! The key that is currently bound to the command is returned.
//!
//! \sa setKey(), alternateKey()
int key() const {return qkey;}
//! The alternate key that is currently bound to the command is
//! returned.
//!
//! \sa setAlternateKey(), key()
int alternateKey() const {return qaltkey;}
//! If the key \a key is valid then true is returned.
static bool validKey(int key);
//! The user friendly description of the command is returned.
QString description() const;
private:
friend class QsciCommandSet;
QsciCommand(QsciScintilla *qs, Command cmd, int key, int altkey,
const char *desc);
void bindKey(int key,int &qk,int &scik);
QsciScintilla *qsCmd;
Command scicmd;
int qkey, scikey, qaltkey, scialtkey;
const char *descCmd;
QsciCommand(const QsciCommand &);
QsciCommand &operator=(const QsciCommand &);
};
#ifdef __APPLE__
}
#endif
#endif
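
Because its constructor is private and QsciCommandSet is a friend, a QsciCommand is obtained from the editor's standard command set rather than created directly; the key-binding methods declared above are then used to change or remove bindings. A sketch of rebinding the select-all command, assuming QsciScintilla::standardCommands() and QsciCommandSet::commands() carry their usual QScintilla signatures:

#include <Qsci/qsciscintilla.h>
#include <Qsci/qscicommand.h>
#include <Qsci/qscicommandset.h>

void rebindSelectAll(QsciScintilla *editor) {
    // standardCommands() is assumed to return the editor's QsciCommandSet.
    for (QsciCommand *cmd : editor->standardCommands()->commands()) {
        if (cmd->command() == QsciCommand::SelectAll) {
            int key = Qt::Key_A | Qt::CTRL;        // Ctrl+A
            if (QsciCommand::validKey(key))
                cmd->setKey(key);                  // bind the primary key
            cmd->setAlternateKey(0);               // remove any alternate binding
        }
    }
}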

116
samples/C++/qsciprinter.h Normal file

@@ -0,0 +1,116 @@
// This module defines interface to the QsciPrinter class.
//
// Copyright (c) 2011 Riverbank Computing Limited <info@riverbankcomputing.com>
//
// This file is part of QScintilla.
//
// This file may be used under the terms of the GNU General Public
// License versions 2.0 or 3.0 as published by the Free Software
// Foundation and appearing in the files LICENSE.GPL2 and LICENSE.GPL3
// included in the packaging of this file. Alternatively you may (at
// your option) use any later version of the GNU General Public
// License if such license has been publicly approved by Riverbank
// Computing Limited (or its successors, if any) and the KDE Free Qt
// Foundation. In addition, as a special exception, Riverbank gives you
// certain additional rights. These rights are described in the Riverbank
// GPL Exception version 1.1, which can be found in the file
// GPL_EXCEPTION.txt in this package.
//
// If you are unsure which license is appropriate for your use, please
// contact the sales department at sales@riverbankcomputing.com.
//
// This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE
// WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
#ifndef QSCIPRINTER_H
#define QSCIPRINTER_H
#ifdef __APPLE__
extern "C++" {
#endif
#include <qprinter.h>
#include <Qsci/qsciglobal.h>
#include <Qsci/qsciscintilla.h>
QT_BEGIN_NAMESPACE
class QRect;
class QPainter;
QT_END_NAMESPACE
class QsciScintillaBase;
//! \brief The QsciPrinter class is a sub-class of the Qt QPrinter class that
//! is able to print the text of a Scintilla document.
//!
//! The class can be further sub-classed to alter the layout of the text, adding
//! headers and footers for example.
class QSCINTILLA_EXPORT QsciPrinter : public QPrinter
{
public:
//! Constructs a printer paint device with mode \a mode.
QsciPrinter(PrinterMode mode = ScreenResolution);
//! Destroys the QsciPrinter instance.
virtual ~QsciPrinter();
//! Format a page, by adding headers and footers for example, before the
//! document text is drawn on it. \a painter is the painter to be used to
//! add customised text and graphics. \a drawing is true if the page is
//! actually being drawn rather than being sized. \a painter drawing
//! methods must only be called when \a drawing is true. \a area is the
//! area of the page that will be used to draw the text. This should be
//! modified if it is necessary to reserve space for any customised text or
//! graphics. By default the area is relative to the printable area of the
//! page. Use QPrinter::setFullPage() before calling printRange() if you
//! want to try and print over the whole page. \a pagenr is the number of
//! the page. The first page is numbered 1.
virtual void formatPage(QPainter &painter, bool drawing, QRect &area,
int pagenr);
//! Return the number of points to add to each font when printing.
//!
//! \sa setMagnification()
int magnification() const {return mag;}
//! Sets the number of points to add to each font when printing to \a
//! magnification.
//!
//! \sa magnification()
virtual void setMagnification(int magnification);
//! Print a range of lines from the Scintilla instance \a qsb. \a from is
//! the first line to print and a negative value signifies the first line
//! of text. \a to is the last line to print and a negative value
//! signifies the last line of text. true is returned if there was no
//! error.
virtual int printRange(QsciScintillaBase *qsb, int from = -1, int to = -1);
//! Return the line wrap mode used when printing. The default is
//! QsciScintilla::WrapWord.
//!
//! \sa setWrapMode()
QsciScintilla::WrapMode wrapMode() const {return wrap;}
//! Sets the line wrap mode used when printing to \a wmode.
//!
//! \sa wrapMode()
virtual void setWrapMode(QsciScintilla::WrapMode wmode);
private:
int mag;
QsciScintilla::WrapMode wrap;
QsciPrinter(const QsciPrinter &);
QsciPrinter &operator=(const QsciPrinter &);
};
#ifdef __APPLE__
}
#endif
#endif
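
The comments above describe the intended customisation path: subclass QsciPrinter, override formatPage() to reserve space for a header or footer, and hand the editor to printRange(). A sketch along those lines (the HeaderPrinter name and the page-header text are illustrative, not part of QScintilla):

#include <QPainter>
#include <Qsci/qsciprinter.h>
#include <Qsci/qsciscintilla.h>

class HeaderPrinter : public QsciPrinter {
public:
    explicit HeaderPrinter(PrinterMode mode = ScreenResolution) : QsciPrinter(mode) {}

    virtual void formatPage(QPainter &painter, bool drawing, QRect &area, int pagenr) {
        int headerHeight = painter.fontMetrics().height() + 4;
        if (drawing)
            painter.drawText(area.left(), area.top() + headerHeight - 4,
                             QString("Page %1").arg(pagenr));
        area.setTop(area.top() + headerHeight);   // reserve room so the document text starts lower
    }
};

bool printDocument(QsciScintilla *editor) {
    HeaderPrinter printer;
    printer.setWrapMode(QsciScintilla::WrapWord);   // the documented default, made explicit
    return printer.printRange(editor) != 0;         // whole document; non-zero means no error
}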

File diff suppressed because it is too large

61
samples/C/jni_layer.h Normal file

@@ -0,0 +1,61 @@
/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class jni_JniLayer */
#ifndef _Included_jni_JniLayer
#define _Included_jni_JniLayer
#ifdef __cplusplus
extern "C" {
#endif
/*
* Class: jni_JniLayer
* Method: jni_layer_initialize
* Signature: ([II)J
*/
JNIEXPORT jlong JNICALL Java_jni_JniLayer_jni_1layer_1initialize
(JNIEnv *, jobject, jintArray, jint, jint);
/*
* Class: jni_JniLayer
* Method: jni_layer_mainloop
* Signature: (J)V
*/
JNIEXPORT void JNICALL Java_jni_JniLayer_jni_1layer_1mainloop
(JNIEnv *, jobject, jlong);
/*
* Class: jni_JniLayer
* Method: jni_layer_set_button
* Signature: (JII)V
*/
JNIEXPORT void JNICALL Java_jni_JniLayer_jni_1layer_1set_1button
(JNIEnv *, jobject, jlong, jint, jint);
/*
* Class: jni_JniLayer
* Method: jni_layer_set_analog
* Signature: (JIIF)V
*/
JNIEXPORT void JNICALL Java_jni_JniLayer_jni_1layer_1set_1analog
(JNIEnv *, jobject, jlong, jint, jint, jfloat);
/*
* Class: jni_JniLayer
* Method: jni_layer_report_analog_chg
* Signature: (JI)V
*/
JNIEXPORT void JNICALL Java_jni_JniLayer_jni_1layer_1report_1analog_1chg
(JNIEnv *, jobject, jlong, jint);
/*
* Class: jni_JniLayer
* Method: jni_layer_kill
* Signature: (J)V
*/
JNIEXPORT void JNICALL Java_jni_JniLayer_jni_1layer_1kill
(JNIEnv *, jobject, jlong);
#ifdef __cplusplus
}
#endif
#endif
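
The header above is the machine-generated half of the JNI bridge; the matching definitions live in a C or C++ source compiled into the shared library that the Java side loads with System.loadLibrary(). A sketch of the initialize/kill pair, where the JniLayer struct and the parameter names are assumptions (only the exported function names and types come from the header):

#include <jni.h>
#include "jni_layer.h"

// Hypothetical internal state returned to Java as an opaque jlong handle.
struct JniLayer {
    jint width;
    jint height;
};

extern "C" JNIEXPORT jlong JNICALL Java_jni_JniLayer_jni_1layer_1initialize
  (JNIEnv *env, jobject obj, jintArray pixels, jint width, jint height)
{
    (void)env; (void)obj; (void)pixels;
    JniLayer *layer = new JniLayer;
    layer->width = width;
    layer->height = height;
    return reinterpret_cast<jlong>(layer);   // handed back to Java as the opaque handle
}

extern "C" JNIEXPORT void JNICALL Java_jni_JniLayer_jni_1layer_1kill
  (JNIEnv *env, jobject obj, jlong handle)
{
    (void)env; (void)obj;
    delete reinterpret_cast<JniLayer *>(handle);
}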

1267
samples/C/rf_io.c Normal file

File diff suppressed because it is too large

682
samples/C/rf_io.h Normal file

@@ -0,0 +1,682 @@
/**
** Copyright (c) 2011-2012, Karapetsas Eleftherios
** All rights reserved.
**
** Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
** 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
** 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in
** the documentation and/or other materials provided with the distribution.
** 3. Neither the name of the Original Author of Refu nor the names of its contributors may be used to endorse or promote products derived from
**    this software without specific prior written permission.
**
** THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
** INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
** DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
** SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
** SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
** WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
** OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
**/
#ifndef REFU_IO_H
#define REFU_IO_H
#include <rf_setup.h>
#include <stdio.h>
#ifdef __cplusplus
extern "C"
{// opening bracket for calling from C++
#endif
// New line feed
#define RF_LF 0xA
// Carriage Return
#define RF_CR 0xD
#ifdef REFU_WIN32_VERSION
#define i_PLUSB_WIN32 "b"
#else
#define i_PLUSB_WIN32 ""
#endif
// This is the type that represents the file offset
#ifdef _MSC_VER
typedef __int64 foff_rft;
#else
#include <sys/types.h>
typedef off64_t foff_rft;
#endif
///Fseek and Ftelll definitions
#ifdef _MSC_VER
#define rfFseek(i_FILE_,i_OFFSET_,i_WHENCE_) _fseeki64(i_FILE_,i_OFFSET_,i_WHENCE_)
#define rfFtell(i_FILE_) _ftelli64(i_FILE_)
#else
#define rfFseek(i_FILE_,i_OFFSET_,i_WHENCE_) fseeko64(i_FILE_,i_OFFSET_,i_WHENCE_)
#define rfFtell(i_FILE_) ftello64(i_FILE_)
#endif
/**
** @defgroup RF_IOGRP I/O
** @addtogroup RF_IOGRP
** @{
**/
// @brief Reads a UTF-8 file descriptor until end of line or EOF is found and returns a UTF-8 byte buffer
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means that if under
// Windows make sure to call fopen with "wb", "rb" e.t.c. instead of the simple "w", "r" e.t.c. since the initial
// default value under Windows is text mode. Alternatively you can set the initial value using _get_fmode() and
// _set_fmode(). For more information take a look at the msdn pages here:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// When the compile flag @c RF_NEWLINE_CRLF is defined (the default case at Windows) then this function
// shall not be adding any CR character that is found in the file behind a newline character since this is
// the Windows line ending scheme. Beware though that the returned read bytes value shall still count the CR character inside.
//
// @param[in] f The file descriptor to read
// @param[out] utf8 Give here a reference to an uninitialized char* that will be allocated inside the function
// and contain the utf8 byte buffer. Needs to be freed by the caller explicitly later
// @param[out] byteLength Give an @c uint32_t here to receive the length of the @c utf8 buffer in bytes
// @param[out] bufferSize Give an @c uint32_t here to receive the capacity of the @c utf8 buffer in bytes
// @param[out] eof Pass a pointer to a char to receive a true or false value indicating whether the end of file
// was reached while reading this line
// @return Returns either a positive number for success that represents the number of bytes read from @c f or an error in case something goes wrong.
// The possible errors to return are the same as rfFgets_UTF8()
i_DECLIMEX_ int32_t rfFReadLine_UTF8(FILE* f,char** utf8,uint32_t* byteLength,uint32_t* bufferSize,char* eof);
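// Example (illustrative only, not part of the original header): reading a single
// line from a UTF-8 file with the declaration above. The file must be opened in
// binary mode ("rb"), and the returned buffer belongs to the caller:
//
//     FILE* fp = fopen("input.txt", "rb");          /* hypothetical file name */
//     char* line; uint32_t len, cap; char eof = 0;
//     int32_t bytes = rfFReadLine_UTF8(fp, &line, &len, &cap, &eof);
//     if (bytes > 0) {
//         /* consume the len-byte UTF-8 buffer 'line', then free it as the docs require */
//     }
//     fclose(fp);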
// @brief Reads a Big Endian UTF-16 file descriptor until end of line or EOF is found and returns a UTF-8 byte buffer
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means that if under
// Windows make sure to call fopen with "wb", "rb" e.t.c. instead of the simple "w", "r" e.t.c. since the initial
// default value under Windows is text mode. Alternatively you can set the initial value using _get_fmode() and
// _set_fmode(). For more information take a look at the msdn pages here:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// When the compile flag @c RF_NEWLINE_CRLF is defined (the default case at Windows) then this function
// shall not be adding any CR character that is found in the file behind a newline character since this is
// the Windows line ending scheme. Beware though that the returned read bytes value shall still count the CR character inside.
//
// @param[in] f The file descriptor to read
// @param[out] utf8 Give here a reference to an uninitialized char* that will be allocated inside the function
// and contain the utf8 byte buffer. Needs to be freed by the caller explicitly later
// @param[out] byteLength Give an @c uint32_t here to receive the length of the @c utf8 buffer in bytes
// @param[out] eof Pass a pointer to a char to receive a true or false value indicating whether the end of file
// was reached while reading this line
// @return Returns either a positive number for success that represents the number of bytes read from @c f or an error in case something goes wrong.
// + Any error that can be returned by @ref rfFgets_UTF16BE()
// + @c RE_UTF16_INVALID_SEQUENCE: Failed to decode the UTF-16 byte stream of the file descriptor
// + @c RE_UTF8_ENCODING: Failed to encode the UTF-16 of the file descriptor into UTF-8
i_DECLIMEX_ int32_t rfFReadLine_UTF16BE(FILE* f,char** utf8,uint32_t* byteLength,char* eof);
// @brief Reads a Little Endian UTF-16 file descriptor until end of line or EOF is found and returns a UTF-8 byte buffer
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means that if under
// Windows make sure to call fopen with "wb", "rb" e.t.c. instead of the simple "w", "r" e.t.c. since the initial
// default value under Windows is text mode. Alternatively you can set the initial value using _get_fmode() and
// _set_fmode(). For more information take a look at the msdn pages here:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// When the compile flag @c RF_NEWLINE_CRLF is defined (the default case at Windows) then this function
// shall not be adding any CR character that is found in the file behind a newline character since this is
// the Windows line ending scheme. Beware though that the returned read bytes value shall still count the CR character inside.
//
// @param[in] f The file descriptor to read
// @param[out] utf8 Give here a reference to an uninitialized char* that will be allocated inside the function
// and contain the utf8 byte buffer. Needs to be freed by the caller explicitly later
// @param[out] byteLength Give an @c uint32_t here to receive the length of the @c utf8 buffer in bytes
// @param[out] eof Pass a pointer to a char to receive a true or false value indicating whether the end of file
// was reached while reading this line
// @return Returns either a positive number for success that represents the number of bytes read from @c f or an error in case something goes wrong.
// + Any error that can be returned by @ref rfFgets_UTF16LE()
// + @c RE_UTF16_INVALID_SEQUENCE: Failed to decode the UTF-16 byte stream of the file descriptor
// + @c RE_UTF8_ENCODING: Failed to encode the UTF-16 of the file descriptor into UTF-8
i_DECLIMEX_ int32_t rfFReadLine_UTF16LE(FILE* f,char** utf8,uint32_t* byteLength,char* eof);
// @brief Reads a Big Endian UTF-32 file descriptor until end of line or EOF is found and returns a UTF-8 byte buffer
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means that if under
// Windows make sure to call fopen with "wb", "rb" e.t.c. instead of the simple "w", "r" e.t.c. since the initial
// default value under Windows is text mode. Alternatively you can set the initial value using _get_fmode() and
// _set_fmode(). For more information take a look at the msdn pages here:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// When the compile flag @c RF_NEWLINE_CRLF is defined (the default case at Windows) then this function
// shall not be adding any CR character that is found in the file behind a newline character since this is
// the Windows line ending scheme. Beware though that the returned read bytes value shall still count the CR character inside.
//
// @param[in] f The file descriptor to read
// @param[out] utf8 Give here a reference to an uninitialized char* that will be allocated inside the function
// and contain the utf8 byte buffer. Needs to be freed by the caller explicitly later
// @param[out] byteLength Give an @c uint32_t here to receive the length of the @c utf8 buffer in bytes
// @param[out] eof Pass a pointer to a char to receive a true or false value indicating whether the end of file
// was reached while reading this line
// @return Returns either a positive number for success that represents the number of bytes read from @c f or an error in case something goes wrong.
// + Any error that can be returned by @ref rfFgets_UTF32BE()
// + @c RE_UTF8_ENCODING: Failed to encode the UTF-16 of the file descriptor into UTF-8
i_DECLIMEX_ int32_t rfFReadLine_UTF32BE(FILE* f,char** utf8,uint32_t* byteLength,char* eof);
// @brief Reads a Little Endian UTF-32 file descriptor until end of line or EOF is found and returns a UTF-8 byte buffer
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means that if under
// Windows make sure to call fopen with "wb", "rb" e.t.c. instead of the simple "w", "r" e.t.c. since the initial
// default value under Windows is text mode. Alternatively you can set the initial value using _get_fmode() and
// _set_fmode(). For more information take a look at the msdn pages here:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// When the compile flag @c RF_NEWLINE_CRLF is defined (the default case at Windows) then this function
// shall not be adding any CR character that is found in the file behind a newline character since this is
// the Windows line ending scheme. Beware though that the returned read bytes value shall still count the CR character inside.
//
// @param[in] f The file descriptor to read
// @param[out] utf8 Give here a reference to an uninitialized char* that will be allocated inside the function
// and contain the utf8 byte buffer. Needs to be freed by the caller explicitly later
// @param[out] byteLength Give an @c uint32_t here to receive the length of the @c utf8 buffer in bytes
// @param[out] eof Pass a pointer to a char to receive a true or false value indicating whether the end of file
// was reached while reading this line
// @return Returns either a positive number for success that represents the number of bytes read from @c f or an error in case something goes wrong.
// + Any error that can be returned by @ref rfFgets_UTF32LE()
// + @c RE_UTF8_ENCODING: Failed to encode the UTF-16 of the file descriptor into UTF-8
i_DECLIMEX_ int32_t rfFReadLine_UTF32LE(FILE* f,char** utf8,uint32_t* byteLength,char* eof);
// @brief Gets a number of bytes from a BIG endian UTF-32 file descriptor
//
// This is a function that's similar to c library fgets but it also returns the number of bytes read. Reads in from the file until @c num bytes
// have been read or new line or EOF character has been encountered.
//
// The function will read until @c num characters are read and if @c num
// would take us to the middle of a UTF32 character then the next character shall also be read
// and the function will return the number of bytes read.
// Since the function null terminates the buffer the given @c buff needs to be of at least
// @c num+7 size to cater for the worst case.
//
// The final bytestream stored inside @c buff is in the endianess of the system.
//
// If right after the last character read comes the EOF, the function
// shall detect so and assign @c true to @c eof.
//
// In Windows where file endings are in the form of 2 bytes CR-LF (Carriage return - NewLine) this function
// shall just ignore the carriage returns and not return it inside the return buffer at @c buff.
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means that if under
// Windows make sure to call fopen with "wb", "rb" e.t.c. instead of the simple "w", "r" e.t.c. since the initial
// default value under Windows is text mode. Alternatively you can set the initial value using _get_fmode() and
// _set_fmode(). For more information take a look at the msdn pages here:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// @param[in] buff A buffer to be filled with the contents of the file. Should be of size at least @c num+7
// @param[in] num The maximum number of bytes to read from within the file NOT including the null terminating character (which in itself is 4 bytes). Should be a multiple of 4
// @param[in] f A valid FILE descriptor from which to read the bytes
// @param[out] eof Pass a reference to a char to receive a true/false value for whether EOF has been reached.
// @return Returns the actual number of bytes read or an error if there was a problem.
// The possible errors are:
// + @c RE_FILE_READ: If during reading the file there was an unknown read error
// + @c RE_FILE_READ_BLOCK: If the read operation failed due to the file descriptor being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the file descriptor's mode was not correctly set for reading
// + @c RE_FILE_POS_OVERFLOW: If during reading, the current file position can't be represented by the system
// + @c RE_INTERRUPT: If during reading, there was a system interrupt
// + @c RE_FILE_IO: If there was a physical I/O error
// + @c RE_FILE_NOSPACE: If reading failed due to insufficient storage space
i_DECLIMEX_ int32_t rfFgets_UTF32BE(char* buff,uint32_t num,FILE* f,char* eof);
// @brief Gets a number of bytes from a Little endian UTF-32 file descriptor
//
// This is a function that's similar to c library fgets but it also returns the number of bytes read. Reads in from the file until @c num bytes
// have been read or new line or EOF character has been encountered.
//
// The function will read until @c num characters are read and if @c num
// would take us to the middle of a UTF32 character then the next character shall also be read
// and the function will return the number of bytes read.
// Since the function null terminates the buffer the given @c buff needs to be of at least
// @c num+7 size to cater for the worst case.
//
// The final bytestream stored inside @c buff is in the endianess of the system.
//
// If right after the last character read comes the EOF, the function
// shall detect so and assign @c true to @c eof.
//
// In Windows where file endings are in the form of 2 bytes CR-LF (Carriage return - NewLine) this function
// shall just ignore the carriage returns and not return it inside the return buffer at @c buff.
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means that if under
// Windows make sure to call fopen with "wb", "rb" e.t.c. instead of the simple "w", "r" e.t.c. since the initial
// default value under Windows is text mode. Alternatively you can set the initial value using _get_fmode() and
// _set_fmode(). For more information take a look at the msdn pages here:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// @param[in] buff A buffer to be filled with the contents of the file. Should be of size at least @c num+7
// @param[in] num The maximum number of bytes to read from within the file NOT including the null terminating character (which in itself is 4 bytes). Should be a multiple of 4
// @param[in] f A valid FILE descriptor from which to read the bytes
// @param[out] eof Pass a reference to a char to receive a true/false value for whether EOF has been reached.
// @return Returns the actual number of bytes read or an error if there was a problem.
// The possible errors are:
// + @c RE_FILE_READ: If during reading the file there was an unknown read error
// + @c RE_FILE_READ_BLOCK: If the read operation failed due to the file descriptor being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the file descriptor's mode was not correctly set for reading
// + @c RE_FILE_POS_OVERFLOW: If during reading, the current file position can't be represented by the system
// + @c RE_INTERRUPT: If during reading, there was a system interrupt
// + @c RE_FILE_IO: If there was a physical I/O error
// + @c RE_FILE_NOSPACE: If reading failed due to insufficient storage space
i_DECLIMEX_ int32_t rfFgets_UTF32LE(char* buff,uint32_t num,FILE* f,char* eof);
// @brief Gets a number of bytes from a BIG endian UTF-16 file descriptor
//
// This is a function that's similar to c library fgets but it also returns the number of bytes read. Reads in from the file until @c num bytes
// have been read or new line or EOF character has been encountered.
//
// The function will read until @c num characters are read and if @c num
// would take us to the middle of a UTF16 character then the next character shall also be read
// and the function will return the number of bytes read.
// Since the function null terminates the buffer the given @c buff needs to be of at least
// @c num+5 size to cater for the worst case.
//
// The final bytestream stored inside @c buff is in the endianess of the system.
//
// If right after the last character read comes the EOF, the function
// shall detect so and assign @c true to @c eof.
//
// In Windows where file endings are in the form of 2 bytes CR-LF (Carriage return - NewLine) this function
// shall just ignore the carriage returns and not return it inside the return buffer at @c buff.
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means that if under
// Windows make sure to call fopen with "wb", "rb" e.t.c. instead of the simple "w", "r" e.t.c. since the initial
// default value under Windows is text mode. Alternatively you can set the initial value using _get_fmode() and
// _set_fmode(). For more information take a look at the msdn pages here:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// @param[in] buff A buffer to be filled with the contents of the file. Should be of size at least @c num+5
// @param[in] num The maximum number of bytes to read from within the file NOT including the null terminating character (which in itself is 2 bytes). Should be a multiple of 2
// @param[in] f A valid FILE descriptor from which to read the bytes
// @param[out] eof Pass a reference to a char to receive a true/false value for whether EOF has been reached.
// @return Returns the actual number of bytes read or an error if there was a problem.
// The possible errors are:
// + @c RE_FILE_READ: If during reading the file there was an unknown read error
// + @c RE_FILE_READ_BLOCK: If the read operation failed due to the file descriptor being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the file descriptor's mode was not correctly set for reading
// + @c RE_FILE_POS_OVERFLOW: If during reading, the current file position can't be represented by the system
// + @c RE_INTERRUPT: If during reading, there was a system interrupt
// + @c RE_FILE_IO: If there was a physical I/O error
// + @c RE_FILE_NOSPACE: If reading failed due to insufficient storage space
i_DECLIMEX_ int32_t rfFgets_UTF16BE(char* buff,uint32_t num,FILE* f,char* eof);
// @brief Gets a number of bytes from a Little endian UTF-16 file descriptor
//
// This is a function that's similar to c library fgets but it also returns the number of bytes read. Reads in from the file until @c num bytes
// have been read or new line or EOF character has been encountered.
//
// The function will read until @c num characters are read and if @c num
// would take us to the middle of a UTF16 character then the next character shall also be read
// and the function will return the number of bytes read.
// Since the function null terminates the buffer the given @c buff needs to be of at least
// @c num+5 size to cater for the worst case.
//
// The final bytestream stored inside @c buff is in the endianess of the system.
//
// If right after the last character read comes the EOF, the function
// shall detect so and assign @c true to @c eof.
//
// In Windows where file endings are in the form of 2 bytes CR-LF (Carriage return - NewLine) this function
// shall just ignore the carriage returns and not return it inside the return buffer at @c buff.
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means that if under
// Windows make sure to call fopen with "wb", "rb" e.t.c. instead of the simple "w", "r" e.t.c. since the initial
// default value under Windows is text mode. Alternatively you can set the initial value using _get_fmode() and
// _set_fmode(). For more information take a look at the msdn pages here:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// @param[in] buff A buffer to be filled with the contents of the file. Should be of size at least @c num+5
// @param[in] num The maximum number of bytes to read from within the file NOT including the null terminating character (which in itself is 2 bytes). Should be a multiple of 2
// @param[in] f A valid FILE descriptor from which to read the bytes
// @param[out] eof Pass a reference to a char to receive a true/false value for whether EOF has been reached.
// @return Returns the actual number of bytes read or an error if there was a problem.
// The possible errors are:
// + @c RE_FILE_READ: If during reading the file there was an unknown read error
// + @c RE_FILE_READ_BLOCK: If the read operation failed due to the file descriptor being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the file descriptor's mode was not correctly set for reading
// + @c RE_FILE_POS_OVERFLOW: If during reading, the current file position can't be represented by the system
// + @c RE_INTERRUPT: If during reading, there was a system interrupt
// + @c RE_FILE_IO: If there was a physical I/O error
// + @c RE_FILE_NOSPACE: If reading failed due to insufficient storage space
i_DECLIMEX_ int32_t rfFgets_UTF16LE(char* buff,uint32_t num,FILE* f,char* eof);
// @brief Gets a number of bytes from a UTF-8 file descriptor
//
// This is a function that's similar to c library fgets but it also returns the number of bytes read. Reads in from the file until @c num characters
// have been read or new line or EOF character has been encountered.
//
// The function automatically adds a null termination character at the end of
// @c buff but this character is not included in the returned actual number of bytes.
//
// The function will read until @c num characters are read and if @c num
// would take us to the middle of a UTF8 character then the next character shall also be read
// and the function will return the number of bytes read.
// Since the function null terminates the buffer the given @c buff needs to be of at least
// @c num+4 size to cater for the worst case.
//
// If right after the last character read comes the EOF, the function
// shall detect so and assign @c true to @c eof.
//
// In Windows where file endings are in the form of 2 bytes CR-LF (Carriage return - NewLine) this function
// shall just ignore the carriage returns and not return it inside the return buffer at @c buff.
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means that if under
// Windows make sure to call fopen with "wb", "rb" e.t.c. instead of the simple "w", "r" e.t.c. since the initial
// default value under Windows is text mode. Alternatively you can set the initial value using _get_fmode() and
// _set_fmode(). For more information take a look at the msdn pages here:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// @param[in] buff A buffer to be filled with the contents of the file. Should be of size at least @c num+4
// @param[in] num The maximum number of bytes to read from within the file NOT including the null terminating character (which in itself is 1 byte)
// @param[in] f A valid FILE descriptor from which to read the bytes
// @param[out] eof Pass a reference to a char to receive a true/false value for whether EOF has been reached.
// @return Returns the actual number of bytes read or an error if there was a problem.
// The possible errors are:
// + @c RE_UTF8_INVALID_SEQUENCE_INVALID_BYTE: If an invalid UTF-8 byte has been found
// + @c RE_UTF8_INVALID_SEQUENCE_CONBYTE: If during parsing the file we were expecting a continuation
// byte and did not find it
// + @c RE_UTF8_INVALID_SEQUENCE_END: If the null character is encountered in between bytes that should
// have been continuation bytes
// + @c RE_FILE_READ: If during reading the file there was an unknown read error
// + @c RE_FILE_READ_BLOCK: If the read operation failed due to the file descriptor being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the file descriptor's mode was not correctly set for reading
// + @c RE_FILE_POS_OVERFLOW: If during reading, the current file position can't be represented by the system
// + @c RE_INTERRUPT: If during reading, there was a system interrupt
// + @c RE_FILE_IO: If there was a physical I/O error
// + @c RE_FILE_NOSPACE: If reading failed due to insufficient storage space
i_DECLIMEX_ int32_t rfFgets_UTF8(char* buff,uint32_t num,FILE* f,char* eof);
// @brief Gets a unicode character from a UTF-8 file descriptor
//
// This function attempts to assume a more modern fgetc() role for UTF-8 encoded files.
// Reads bytes from the File descriptor @c f until a full UTF-8 unicode character has been read
//
// After this function the file pointer will have moved either by @c 1, @c 2, @c 3 or @c 4
// bytes if the return value is positive. You can see how much by checking the return value.
//
// You shall need to provide an integer at @c c to contain either the decoded Unicode
// codepoint or the UTF-8 encoded bytes depending on the value of the @c cp argument.
//
// @param f A valid FILE descriptor from which to read the bytes
// @param c Pass an int that will receive either the unicode code point value or
// the UTF8 bytes depending on the value of the @c cp flag
// @param cp A boolean flag. If @c true then the int passed at @c c will contain the unicode code point
// of the read character, so the UTF-8 will be decoded.
// If @c false the int passed at @c c will contain the value of the read bytes in UTF-8 without any decoding
// @return Returns the number of bytes read (either @c 1, @c 2, @c 3 or @c 4) or an error if the function
// fails for some reason. Possible error values are:
// + @c RE_FILE_EOF: The end of file has been found while reading. If the end of file is encountered
// in the middle of a UTF-8 encoded character where we would be expecting something different
// and @c RE_UTF8_INVALID_SEQUENCE_END error is also logged
// + @c RE_UTF8_INVALID_SEQUENCE_INVALID_BYTE: If an invalid UTF-8 byte has been found
// + @c RE_UTF8_INVALID_SEQUENCE_CONBYTE: If during parsing the file we were expecting a continuation
// byte and did not find it
// + @c RE_UTF8_INVALID_SEQUENCE_END: If the null character is encountered in between bytes that should
// have been continuation bytes
// + @c RE_FILE_READ: If during reading the file there was an unknown read error
// + @c RE_FILE_READ_BLOCK: If the read operation failed due to the file descriptor being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the file descriptor's mode was not correctly set for reading
// + @c RE_FILE_POS_OVERFLOW: If during reading, the current file position can't be represented by the system
// + @c RE_INTERRUPT: If during reading, there was a system interrupt
// + @c RE_FILE_IO: If there was a physical I/O error
// + @c RE_FILE_NOSPACE: If reading failed due to insufficient storage space
i_DECLIMEX_ int32_t rfFgetc_UTF8(FILE* f,uint32_t *c,char cp);
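// Example (illustrative only, not part of the original header): iterating over the
// code points of a UTF-8 file with the declaration above. A positive return value
// is the number of bytes consumed; anything else is one of the RE_* errors listed:
//
//     FILE* fp = fopen("input.txt", "rb");          /* hypothetical file name */
//     uint32_t cp;
//     while (rfFgetc_UTF8(fp, &cp, 1) > 0) {
//         /* cp now holds the decoded unicode code point */
//     }
//     fclose(fp);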
// @brief Gets a unicode character from a UTF-16 Big Endian file descriptor
//
// This function attempts to assume a more modern fgetc() role for UTF-16 encoded files.
// Reads bytes from the File descriptor @c f until a full UTF-16 unicode character has been read
//
// After this function the file pointer will have moved either by @c 2 or @c 4
// bytes if the return value is positive. You can see how much by checking the return value.
//
// You shall need to provide an integer at @c c to contain either the decoded Unicode
// codepoint or the big-endian encoded UTF-16 bytes depending on the value of the @c cp argument.
//
// @param f A valid FILE descriptor from which to read the bytes
// @param c Pass an int that will receive either the unicode code point value or
// the UTF16 bytes depending on the value of the @c cp flag
// @param cp A boolean flag. If @c true then the int passed at @c c will contain the unicode code point
// of the read character, so the UTF-16 will be decoded.
// If @c false the int passed at @c c will contain the value of the read bytes in UTF-16 without any decoding
// @return Returns the number of bytes read (either @c 2 or @c 4) or an error if the function
// fails for some reason. Possible error values are:
// + @c RE_UTF16_INVALID_SEQUENCE: Either the read word or its surrogate pair if 4 bytes were read held illegal values
// + @c RE_UTF16_NO_SURRPAIR: According to the first read word a surrogate pair was expected but none was found
// + @c RE_FILE_EOF: The end of file has been found while reading. If the end of file is encountered
// while we expect a UTF-16 surrogate pair an appropriate error is logged
// + @c RE_FILE_READ: If during reading the file there was an unknown read error
// + @c RE_FILE_READ_BLOCK: If the read operation failed due to the file descriptor being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the file descriptor's mode was not correctly set for reading
// + @c RE_FILE_POS_OVERFLOW: If during reading, the current file position can't be represented by the system
// + @c RE_INTERRUPT: If during reading, there was a system interrupt
// + @c RE_FILE_IO: If there was a physical I/O error
// + @c RE_FILE_NOSPACE: If reading failed due to insufficient storage space
i_DECLIMEX_ int32_t rfFgetc_UTF16BE(FILE* f,uint32_t *c,char cp);
// @brief Gets a unicode character from a UTF-16 Little Endian file descriptor
//
// This function attempts to assume a more modern fgetc() role for UTF-16 encoded files.
// Reads bytes from the File descriptor @c f until a full UTF-16 unicode character has been read
//
// After this function the file pointer will have moved either by @c 2 or @c 4
// bytes if the return value is positive. You can see how much by checking the return value.
//
// You shall need to provide an integer at @c c to contain either the decoded Unicode
// codepoint or the encoded UTF-16 bytes depending on the value of the @c cp argument.
//
// @param f A valid FILE descriptor from which to read the bytes
// @param c Pass an int that will receive either the unicode code point value or
// the UTF16 bytes depending on the value of the @c cp flag
// @param cp A boolean flag. If @c true then the int passed at @c c will contain the unicode code point
// of the read character, so the UTF-16 will be decoded.
// If @c false the int passed at @c c will contain the value of the read bytes in UTF-16 without any decoding
// @return Returns the number of bytes read (either @c 2 or @c 4) or an error if the function
// fails for some reason. Possible error values are:
// + @c RE_UTF16_INVALID_SEQUENCE: Either the read word or its surrogate pair if 4 bytes were read held illegal values
// + @c RE_UTF16_NO_SURRPAIR: According to the first read word a surrogate pair was expected but none was found
// + @c RE_FILE_EOF: The end of file has been found while reading. If the end of file is encountered
// while we expect a UTF-16 surrogate pair an appropriate error is logged
// + @c RE_FILE_READ: If during reading the file there was an unknown read error
// + @c RE_FILE_READ_BLOCK: If the read operation failed due to the file descriptor being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the file descriptor's mode was not correctly set for reading
// + @c RE_FILE_POS_OVERFLOW: If during reading, the current file position can't be represented by the system
// + @c RE_INTERRUPT: If during reading, there was a system interrupt
// + @c RE_FILE_IO: If there was a physical I/O error
// + @c RE_FILE_NOSPACE: If reading failed due to insufficient storage space
i_DECLIMEX_ int32_t rfFgetc_UTF16LE(FILE* f,uint32_t *c,char cp);
// @brief Gets a unicode character from a UTF-32 Little Endian file descriptor
//
// This function attempts to assume a more modern fgetc() role for UTF-32 encoded files.
// Reads bytes from the File descriptor @c f until a full UTF-32 unicode character has been read
//
// After this function the file pointer will have moved by @c 4
// bytes if the return value is positive.
//
// You shall need to provide an integer at @c c to contain the UTF-32 codepoint.
//
// @param f A valid FILE descriptor from which to read the bytes
// @param c Pass an int that will receive the unicode code point value of the read character
// @return Returns either @c RF_SUCCESS for successful reading or one of the following errors:
// + @c RE_FILE_EOF: The end of file has been found while reading.
// + @c RE_FILE_READ: If during reading the file there was an unknown read error
// + @c RE_FILE_READ_BLOCK: If the read operation failed due to the file descriptor being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the file descriptor's mode was not correctly set for reading
// + @c RE_FILE_POS_OVERFLOW: If during reading, the current file position can't be represented by the system
// + @c RE_INTERRUPT: If during reading, there was a system interrupt
// + @c RE_FILE_IO: If there was a physical I/O error
// + @c RE_FILE_NOSPACE: If reading failed due to insufficient storage space
i_DECLIMEX_ int32_t rfFgetc_UTF32LE(FILE* f,uint32_t *c);
// @brief Gets a unicode character from a UTF-32 Big Endian file descriptor
//
// This function attempts to assume a more modern fgetc() role for UTF-32 encoded files.
// Reads bytes from the File descriptor @c f until a full UTF-32 unicode character has been read
//
// After this function the file pointer will have moved by @c 4
// bytes if the return value is positive.
//
// You shall need to provide an integer at @c c to contain the UTF-32 codepoint.
//
// @param f A valid FILE descriptor from which to read the bytes
// @param c Pass a pointer to an unsigned integer that will receive the Unicode code point
// that was read. Since UTF-32 encodes code points directly, no decoding flag is needed here.
// @return Returns either @c RF_SUCCESS for successful reading or one of the following errors:
// + @c RE_FILE_EOF: The end of file has been found while reading.
// + @c RE_FILE_READ: If during reading the file there was an unknown read error
// + @c RE_FILE_READ_BLOCK: If the read operation failed due to the file descriptor being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the file descriptor's mode was not correctly set for reading
// + @c RE_FILE_POS_OVERFLOW: If during reading, the current file position can't be represented by the system
// + @c RE_INTERRUPT: If during reading, there was a system interrupt
// + @c RE_FILE_IO: If there was a physical I/O error
// + @c RE_FILE_NOSPACE: If reading failed due to insufficient storage space
i_DECLIMEX_ int32_t rfFgetc_UTF32BE(FILE* f,uint32_t *c);
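// A minimal usage sketch for the UTF-32 readers, assuming @c RF_SUCCESS is the only non-error
// return value and that @c f was opened in binary ("rb") mode:
// @code
// uint32_t cp;
// while(rfFgetc_UTF32BE(f, &cp) == RF_SUCCESS)
// {
//     printf("Read U+%04X\n", (unsigned)cp);
// }
// @endcode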
// @brief Moves a unicode character backwards in a big endian UTF-32 file stream
//
// @param f The file stream
// @param c Returns the character we moved back to as a unicode codepoint
// @return Returns either @c RF_SUCCESS for success or one of the following errors:
// + @c RE_FILE_POS_OVERFLOW: If the current file position can't be represented by the system while reading it
// + @c RE_FILE_BAD: If the file descriptor is corrupt/illegal
// + @c RE_FILE_NOTFILE: If the file descriptor is not a file but something else, e.g. a socket
// + @c RE_FILE_GETFILEPOS: If the file's position could not be retrieved for some unknown reason
// + @c RE_FILE_WRITE_BLOCK: While attempting to move the file pointer, it was occupied by another thread, and the no block flag was set
// + @c RE_INTERRUPT: Operating on the file failed due to a system interrupt
// + @c RE_FILE_IO: There was a physical I/O error
// + @c RE_FILE_NOSPACE: There was no space on the device holding the file
// + @c RE_FILE_NOTFILE: The device we attempted to manipulate is non-existent
// + @c RE_FILE_READ: If during reading the file there was an error
// + @c RE_FILE_READ_BLOCK: If during reading the file the read operation failed due to the file being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the underlying file descriptor's mode was not correctly set for reading
i_DECLIMEX_ int32_t rfFback_UTF32BE(FILE* f,uint32_t *c);
// @brief Moves a unicode character backwards in a little endian UTF-32 file stream
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means
// that under Windows you must call fopen with "wb", "rb" etc. instead of the plain "w", "r" etc.,
// since the default mode under Windows is text mode. Alternatively you can query and set the
// default mode using _get_fmode() and _set_fmode(). For more information see the MSDN page:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// @param f The file stream
// @param c Returns the character we moved back to as a unicode codepoint
// @return Returns either @c RF_SUCCESS for success or one of the following errors:
// + @c RE_FILE_POS_OVERFLOW: If the current file position can't be represented by the system while reading it
// + @c RE_FILE_BAD: If the file descriptor is corrupt/illegal
// + @c RE_FILE_NOTFILE: If the file descriptor is not a file but something else, e.g. a socket
// + @c RE_FILE_GETFILEPOS: If the file's position could not be retrieved for some unknown reason
// + @c RE_FILE_WRITE_BLOCK: While attempting to move the file pointer, it was occupied by another thread, and the no block flag was set
// + @c RE_INTERRUPT: Operating on the file failed due to a system interrupt
// + @c RE_FILE_IO: There was a physical I/O error
// + @c RE_FILE_NOSPACE: There was no space on the device holding the file
// + @c RE_FILE_NOTFILE: The device we attempted to manipulate is non-existent
// + @c RE_FILE_READ: If during reading the file there was an error
// + @c RE_FILE_READ_BLOCK: If during reading the file the read operation failed due to the file being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the underlying file descriptor's mode was not correctly set for reading
i_DECLIMEX_ int32_t rfFback_UTF32LE(FILE* f,uint32_t *c);
// @brief Moves a unicode character backwards in a big endian UTF-16 file stream
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means
// that under Windows you must call fopen with "wb", "rb" etc. instead of the plain "w", "r" etc.,
// since the default mode under Windows is text mode. Alternatively you can query and set the
// default mode using _get_fmode() and _set_fmode(). For more information see the MSDN page:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// @param f The file stream
// @param c Returns the character we moved back to as a unicode codepoint
// @return Returns either the number of bytes moved backwards (either @c 4 or @c 2) for success or one of the following errors:
// + @c RE_UTF16_INVALID_SEQUENCE: Either the read word or its surrogate pair if 4 bytes were read held illegal values
// + @c RE_FILE_POS_OVERFLOW: If the current file position can't be represented by the system while reading it
// + @c RE_FILE_BAD: If the file descriptor is corrupt/illegal
// + @c RE_FILE_NOTFILE: If the file descriptor is not a file but something else, e.g. a socket
// + @c RE_FILE_GETFILEPOS: If the file's position could not be retrieved for some unknown reason
// + @c RE_FILE_WRITE_BLOCK: While attempting to move the file pointer, it was occupied by another thread, and the no block flag was set
// + @c RE_INTERRUPT: Operating on the file failed due to a system interrupt
// + @c RE_FILE_IO: There was a physical I/O error
// + @c RE_FILE_NOSPACE: There was no space on the device holding the file
// + @c RE_FILE_NOTFILE: The device we attempted to manipulate is non-existent
// + @c RE_FILE_READ: If during reading the file there was an error
// + @c RE_FILE_READ_BLOCK: If during reading the file the read operation failed due to the file being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the underlying file descriptor's mode was not correctly set for reading
i_DECLIMEX_ int32_t rfFback_UTF16BE(FILE* f,uint32_t *c);
// @brief Moves a unicode character backwards in a little endian UTF-16 file stream
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means
// that under Windows you must call fopen with "wb", "rb" etc. instead of the plain "w", "r" etc.,
// since the default mode under Windows is text mode. Alternatively you can query and set the
// default mode using _get_fmode() and _set_fmode(). For more information see the MSDN page:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// @param f The file stream
// @param c Returns the character we moved back to as a unicode codepoint
// @return Returns either the number of bytes moved backwards (either @c 4 or @c 2) for success or one of the following errors:
// + @c RE_UTF16_INVALID_SEQUENCE: Either the read word or its surrogate pair if 4 bytes were read held illegal values
// + @c RE_FILE_POS_OVERFLOW: If the current file position can't be represented by the system while reading it
// + @c RE_FILE_BAD: If the file descriptor is corrupt/illegal
// + @c RE_FILE_NOTFILE: If the file descriptor is not a file but something else, e.g. a socket
// + @c RE_FILE_GETFILEPOS: If the file's position could not be retrieved for some unknown reason
// + @c RE_FILE_WRITE_BLOCK: While attempting to move the file pointer, it was occupied by another thread, and the no block flag was set
// + @c RE_INTERRUPT: Operating on the file failed due to a system interrupt
// + @c RE_FILE_IO: There was a physical I/O error
// + @c RE_FILE_NOSPACE: There was no space on the device holding the file
// + @c RE_FILE_NOTFILE: The device we attempted to manipulate is non-existent
// + @c RE_FILE_READ: If during reading the file there was an error
// + @c RE_FILE_READ_BLOCK: If during reading the file the read operation failed due to the file being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the underlying file descriptor's mode was not correctly set for reading
i_DECLIMEX_ int32_t rfFback_UTF16LE(FILE* f,uint32_t *c);
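// A minimal usage sketch for the UTF-16 back-stepping functions, assuming the RE_* error codes
// are negative so that a positive return value is the number of bytes moved:
// @code
// uint32_t cp;
// int32_t moved = rfFback_UTF16LE(f, &cp);
// if(moved == 4)
//     printf("Moved back over a surrogate pair to U+%04X\n", (unsigned)cp);
// else if(moved == 2)
//     printf("Moved back over a single UTF-16 word to U+%04X\n", (unsigned)cp);
// @endcode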
// @brief Moves a unicode character backwards in a UTF-8 file stream
//
// The file descriptor at @c f must have been opened in <b>binary</b> and not text mode. That means
// that under Windows you must call fopen with "wb", "rb" etc. instead of the plain "w", "r" etc.,
// since the default mode under Windows is text mode. Alternatively you can query and set the
// default mode using _get_fmode() and _set_fmode(). For more information see the MSDN page:
// http://msdn.microsoft.com/en-us/library/ktss1a9b.aspx
//
// @param f The file stream
// @param c Returns the character we moved back to as a unicode codepoint
// @return Returns either the number of bytes moved backwards for success (either @c 4, @c 3, @c 2 or @c 1) or one of the following errors:
// + @c RE_UTF8_INVALID_SEQUENCE: If unexpected UTF-8 bytes were found while moving backwards in the file
// + @c RE_FILE_POS_OVERFLOW: If the current file position can't be represented by the system while reading it
// + @c RE_FILE_BAD: If the file descriptor is corrupt/illegal
// + @c RE_FILE_NOTFILE: If the file descriptor is not a file but something else, e.g. a socket
// + @c RE_FILE_GETFILEPOS: If the file's position could not be retrieved for some unknown reason
// + @c RE_FILE_WRITE_BLOCK: While attempting to move the file pointer, it was occupied by another thread, and the no block flag was set
// + @c RE_INTERRUPT: Operating on the file failed due to a system interrupt
// + @c RE_FILE_IO: There was a physical I/O error
// + @c RE_FILE_NOSPACE: There was no space on the device holding the file
// + @c RE_FILE_NOTFILE: The device we attempted to manipulate is non-existent
// + @c RE_FILE_READ: If during reading the file there was an error
// + @c RE_FILE_READ_BLOCK: If during reading the file the read operation failed due to the file being occupied by another thread
// + @c RE_FILE_MODE: If during reading the file the underlying file descriptor's mode was not correctly set for reading
i_DECLIMEX_ int32_t rfFback_UTF8(FILE* f,uint32_t *c);
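// A minimal usage sketch for rfFback_UTF8(), assuming the RE_* error codes are negative so that
// a positive return value is the number of bytes moved:
// @code
// uint32_t cp;
// int32_t moved = rfFback_UTF8(f, &cp); // f opened with "rb"
// if(moved > 0)
//     printf("Stepped back %d bytes to U+%04X\n", (int)moved, (unsigned)cp);
// @endcode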
// @brief Opens another process as a pipe
//
// This function is a cross-platform popen wrapper. In Linux it uses popen() and in Windows it
// uses _popen().
// @lmsFunction
// @param command The string with the command to execute. It is the name of the program/process
// you want to spawn, along with its full path and its parameters. @inhtype{String,StringX} @tmpSTR
// @param mode The mode you want the pipe to work in. There are two possible values:
// + @c "r" The calling process can read the spawned command's standard output via the returned stream.
// + @c "w" The calling process can write to the spawned command's standard input via the returned stream.
//
// Anything else will result in an error
// @return On success rfPopen will return a FILE descriptor that can be used to either read from or write to the pipe.
// If there was an error @c 0 is returned and an error is logged.
#ifdef RF_IAMHERE_FOR_DOXYGEN
i_DECLIMEX_ FILE* rfPopen(void* command,const char* mode);
#else
i_DECLIMEX_ FILE* i_rfPopen(void* command,const char* mode);
#define rfPopen(i_CMD_,i_MODE_) i_rfLMS_WRAP2(FILE*,i_rfPopen,i_CMD_,i_MODE_)
#endif
// @brief Closes a pipe
//
// This function is a cross-platform wrapper for pclose. It closes a file descriptor opened with @ref rfPopen() and
// returns the exit code of the process that was running
// @param stream The file descriptor of the pipe returned by @ref rfPopen() that we want to close
// @return Returns the exit code of the process or -1 if there was an error
i_DECLIMEX_ int rfPclose(FILE* stream);
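// A minimal pipe usage sketch for rfPopen()/rfPclose(). The construction of the command String
// object is omitted; @c cmd below stands for an already initialized String holding e.g. "ls -l":
// @code
// FILE* p = rfPopen(&cmd, "r");
// if(p != 0)
// {
//     char line[256];
//     while(fgets(line, sizeof(line), p) != NULL)
//         printf("%s", line);
//     printf("Process exited with code %d\n", rfPclose(p));
// }
// @endcode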
// @} End of I/O group
#ifdef __cplusplus
}///closing bracket for calling from C++
#endif
#endif//include guards end

2348
samples/C/rfc_string.c Normal file

File diff suppressed because it is too large

1459
samples/C/rfc_string.h Normal file

File diff suppressed because it is too large

15669
samples/C/sgd_fast.c Normal file

File diff suppressed because it is too large

5
samples/C/syscalldefs.h Normal file

@@ -0,0 +1,5 @@
static const syscalldef syscalldefs[] = {
[SYSCALL_OR_NUM(0, SYS_restart_syscall)] = MAKE_UINT16(0, 1),
[SYSCALL_OR_NUM(1, SYS_exit)] = MAKE_UINT16(1, 17),
[SYSCALL_OR_NUM(2, SYS_fork)] = MAKE_UINT16(0, 22),
};

1363
samples/C/wglew.h Normal file

File diff suppressed because it is too large


@@ -0,0 +1,5 @@
program-id. hello.
procedure division.
display "Hello, World!".
stop run.


@@ -0,0 +1,6 @@
IDENTIFICATION DIVISION.
PROGRAM-ID. hello.
PROCEDURE DIVISION.
DISPLAY "Hello World, yet again.".
STOP RUN.


@@ -0,0 +1,6 @@
IDENTIFICATION DIVISION.
PROGRAM-ID. hello.
PROCEDURE DIVISION.
DISPLAY "Hello World!".
STOP RUN.

7
samples/COBOL/simple.cpy Normal file

@@ -0,0 +1,7 @@
01 COBOL-TEST-RECORD.
05 COBOL-TEST-USAGES.
10 COBOL-4-COMP PIC S9(4) COMP.
10 COBOL-8-COMP PIC S9(8) COMP.
10 COBOL-9-COMP PIC S9(9) COMP.
10 COBOL-4-COMP2 PIC S9(4) COMP-2.
10 COBOL-7-COMP2 PIC 9(7) COMP-2.

6307
samples/CSS/bootstrap.css vendored Normal file

File diff suppressed because it is too large

873
samples/CSS/bootstrap.min.css vendored Normal file

File diff suppressed because one or more lines are too long

17
samples/Clojure/for.clj Normal file

@@ -0,0 +1,17 @@
(defn prime? [n]
(not-any? zero? (map #(rem n %) (range 2 n))))
(range 3 33 2)
'(3 5 7 9 11 13 15 17 19 21 23 25 27 29 31)
;; :when continues through the collection even if some have the
;; condition evaluate to false, like filter
(for [x (range 3 33 2) :when (prime? x)]
x)
'(3 5 7 11 13 17 19 23 29 31)
;; :while stops at the first collection element that evaluates to
;; false, like take-while
(for [x (range 3 33 2) :while (prime? x)]
x)
'(3 5 7)


@@ -0,0 +1,8 @@
[:html
[:head
[:meta {:charset "utf-8"}]
[:link {:rel "stylesheet" :href "css/bootstrap.min.css"}]
[:script {:src "app.js"}]]
[:body
[:div.nav
[:p "Hello world!"]]]]


@@ -0,0 +1,13 @@
(defn into-array
([aseq]
(into-array nil aseq))
([type aseq]
(let [n (count aseq)
a (make-array n)]
(loop [aseq (seq aseq)
i 0]
(if (< i n)
(do
(aset a i (first aseq))
(recur (next aseq) (inc i)))
a)))))


@@ -0,0 +1,15 @@
(defprotocol ISound (sound []))
(deftype Cat []
ISound
(sound [_] "Meow!"))
(deftype Dog []
ISound
(sound [_] "Woof!"))
(extend-type default
ISound
(sound [_] "... silence ..."))
(sound 1) ;; => "... silence ..."


@@ -0,0 +1,5 @@
(defn rand
"Returns a random floating point number between 0 (inclusive) and
n (default 1) (exclusive)."
([] (scm* [n] (random-real)))
([n] (* (rand) n)))

20
samples/Clojure/svg.cljx Normal file

@@ -0,0 +1,20 @@
^:clj (ns c2.svg
(:use [c2.core :only [unify]]
[c2.maths :only [Pi Tau radians-per-degree
sin cos mean]]))
^:cljs (ns c2.svg
(:use [c2.core :only [unify]]
[c2.maths :only [Pi Tau radians-per-degree
sin cos mean]])
(:require [c2.dom :as dom]))
;;Stub for float fn, which does not exist on cljs runtime
^:cljs (def float identity)
(defn ->xy
"Convert coordinates (potentially map of `{:x :y}`) to 2-vector."
[coordinates]
(cond
(and (vector? coordinates) (= 2 (count coordinates))) coordinates
(map? coordinates) [(:x coordinates) (:y coordinates)]))


@@ -0,0 +1,20 @@
(deftest function-tests
(is (= 3
(count [1 2 3])))
(is (= false
(not true)))
(is (= true
(contains? {:foo 1 :bar 2} :foo)))
(is (= {"foo" 1, "baz" 3}
(select-keys {:foo 1 :bar 2 :baz 3} [:foo :baz])))
(is (= [1 2 3]
(vals {:foo 1 :bar 2 :baz 3})))
(is (= ["foo" "bar" "baz"]
(keys {:foo 1 :bar 2 :baz 3})))
(is (= [2 4 6]
(filter (fn [x] (=== (rem x 2) 0)) [1 2 3 4 5 6]))))


@@ -0,0 +1,21 @@
;;;; -*- lisp -*-
(in-package :foo)
;;; Header comment.
(defvar *foo*)
(eval-when (:execute :compile-toplevel :load-toplevel)
(defun add (x &optional y &key z)
(declare (ignore z))
;; Inline comment.
(+ x (or y 1))))
#|
Multi-line comment.
|#
(defmacro foo (x &body b)
(if x
`(1+ ,x) ;After-line comment.
42))


@@ -1,13 +1,3 @@
(************************************************************************)
(* v * The Coq Proof Assistant / The Coq Development Team *)
(* <O___,, * INRIA - CNRS - LIX - LRI - PPS - Copyright 1999-2010 *)
(* \VV/ **************************************************************)
(* // * This file is distributed under the terms of the *)
(* * GNU Lesser General Public License Version 2.1 *)
(************************************************************************)
(** This file is deprecated, for a tree on list, use [Mergesort.v]. *)
(** A development of Treesort on Heap trees. It has an average
complexity of O(n.log n) but of O(n²) in the worst case (e.g. if
the list is already sorted) *)
@@ -88,9 +78,9 @@ Section defs.
forall P:Tree -> Type,
P Tree_Leaf ->
(forall (a:A) (T1 T2:Tree),
leA_Tree a T1 ->
leA_Tree a T2 ->
is_heap T1 -> P T1 -> is_heap T2 -> P T2 -> P (Tree_Node a T1 T2)) ->
leA_Tree a T1 ->
leA_Tree a T2 ->
is_heap T1 -> P T1 -> is_heap T2 -> P T2 -> P (Tree_Node a T1 T2)) ->
forall T:Tree, is_heap T -> P T.
Proof.
simple induction T; auto with datatypes.
@@ -105,9 +95,9 @@ Section defs.
forall P:Tree -> Set,
P Tree_Leaf ->
(forall (a:A) (T1 T2:Tree),
leA_Tree a T1 ->
leA_Tree a T2 ->
is_heap T1 -> P T1 -> is_heap T2 -> P T2 -> P (Tree_Node a T1 T2)) ->
leA_Tree a T1 ->
leA_Tree a T2 ->
is_heap T1 -> P T1 -> is_heap T2 -> P T2 -> P (Tree_Node a T1 T2)) ->
forall T:Tree, is_heap T -> P T.
Proof.
simple induction T; auto with datatypes.
@@ -135,13 +125,13 @@ Section defs.
(forall a, HdRel leA a l1 -> HdRel leA a l2 -> HdRel leA a l) ->
merge_lem l1 l2.
Require Import Morphisms.
Instance: Equivalence (@meq A).
Proof. constructor; auto with datatypes. red. apply meq_trans. Defined.
Instance: Proper (@meq A ++> @meq _ ++> @meq _) (@munion A).
Proof. intros x y H x' y' H'. now apply meq_congr. Qed.
Lemma merge :
forall l1:list A, Sorted leA l1 ->
forall l2:list A, Sorted leA l2 -> merge_lem l1 l2.
@@ -150,8 +140,8 @@ Section defs.
apply merge_exist with l2; auto with datatypes.
rename l1 into l.
revert l2 H0. fix 1. intros.
destruct l2 as [|a0 l0].
apply merge_exist with (a :: l); simpl; auto with datatypes.
destruct l2 as [|a0 l0].
apply merge_exist with (a :: l); simpl; auto with datatypes.
elim (leA_dec a a0); intros.
(* 1 (leA a a0) *)
@@ -159,18 +149,18 @@ Section defs.
destruct (merge l H (a0 :: l0) H0).
apply merge_exist with (a :: l1). clear merge merge0.
auto using cons_sort, cons_leA with datatypes.
simpl. rewrite m. now rewrite munion_ass.
intros. apply cons_leA.
simpl. rewrite m. now rewrite munion_ass.
intros. apply cons_leA.
apply (@HdRel_inv _ leA) with l; trivial with datatypes.
(* 2 (leA a0 a) *)
apply Sorted_inv in H0. destruct H0.
destruct (merge0 l0 H0). clear merge merge0.
apply merge_exist with (a0 :: l1);
destruct (merge0 l0 H0). clear merge merge0.
apply merge_exist with (a0 :: l1);
auto using cons_sort, cons_leA with datatypes.
simpl; rewrite m. simpl. setoid_rewrite munion_ass at 1. rewrite munion_comm.
repeat rewrite munion_ass. setoid_rewrite munion_comm at 3. reflexivity.
intros. apply cons_leA.
intros. apply cons_leA.
apply (@HdRel_inv _ leA) with l0; trivial with datatypes.
Qed.
@@ -186,7 +176,7 @@ Section defs.
match t with
| Tree_Leaf => emptyBag
| Tree_Node a t1 t2 =>
munion (contents t1) (munion (contents t2) (singletonBag a))
munion (contents t1) (munion (contents t2) (singletonBag a))
end.
@@ -272,11 +262,11 @@ Section defs.
apply flat_exist with (a :: l); simpl; auto with datatypes.
apply meq_trans with
(munion (list_contents _ eqA_dec l1)
(munion (list_contents _ eqA_dec l2) (singletonBag a))).
(munion (list_contents _ eqA_dec l2) (singletonBag a))).
apply meq_congr; auto with datatypes.
apply meq_trans with
(munion (singletonBag a)
(munion (list_contents _ eqA_dec l1) (list_contents _ eqA_dec l2))).
(munion (list_contents _ eqA_dec l1) (list_contents _ eqA_dec l2))).
apply munion_rotate.
apply meq_right; apply meq_sym; trivial with datatypes.
Qed.


@@ -1,11 +1,3 @@
(************************************************************************)
(* v * The Coq Proof Assistant / The Coq Development Team *)
(* <O___,, * INRIA - CNRS - LIX - LRI - PPS - Copyright 1999-2010 *)
(* \VV/ **************************************************************)
(* // * This file is distributed under the terms of the *)
(* * GNU Lesser General Public License Version 2.1 *)
(************************************************************************)
Require Import Omega Relations Multiset SetoidList.
(** This file is deprecated, use [Permutation.v] instead.
@@ -154,7 +146,7 @@ Lemma permut_add_cons_inside :
Proof.
intros;
replace (a :: l) with ([] ++ a :: l); trivial;
apply permut_add_inside; trivial.
apply permut_add_inside; trivial.
Qed.
Lemma permut_middle :
@@ -168,8 +160,8 @@ Lemma permut_sym_app :
Proof.
intros l1 l2;
unfold permutation, meq;
intro a; do 2 rewrite list_contents_app; simpl;
auto with arith.
intro a; do 2 rewrite list_contents_app; simpl;
auto with arith.
Qed.
Lemma permut_rev :


@@ -1,17 +1,5 @@
(************************************************************************)
(* v * The Coq Proof Assistant / The Coq Development Team *)
(* <O___,, * INRIA - CNRS - LIX - LRI - PPS - Copyright 1999-2010 *)
(* \VV/ **************************************************************)
(* // * This file is distributed under the terms of the *)
(* * GNU Lesser General Public License Version 2.1 *)
(************************************************************************)
(*********************************************************************)
(** * List permutations as a composition of adjacent transpositions *)
(*********************************************************************)
(* Adapted in May 2006 by Jean-Marc Notin from initial contents by
Laurent Théry (Huffmann contribution, October 2003) *)
Laurent Thery (Huffmann contribution, October 2003) *)
Require Import List Setoid Compare_dec Morphisms.
Import ListNotations. (* For notations [] and [a;b;c] *)


@@ -1,10 +1,2 @@
(************************************************************************)
(* v * The Coq Proof Assistant / The Coq Development Team *)
(* <O___,, * INRIA - CNRS - LIX - LRI - PPS - Copyright 1999-2010 *)
(* \VV/ **************************************************************)
(* // * This file is distributed under the terms of the *)
(* * GNU Lesser General Public License Version 2.1 *)
(************************************************************************)
Require Export Sorted.
Require Export Mergesort.


@@ -0,0 +1,47 @@
= Creole
Creole is a Creole-to-HTML converter for Creole, the lightweight markup
language (http://wikicreole.org/). Github uses this converter to render *.creole files.
Project page on github:
* http://github.com/minad/creole
Travis-CI:
* https://travis-ci.org/minad/creole
RDOC:
* http://rdoc.info/projects/minad/creole
== INSTALLATION
{{{
gem install creole
}}}
== SYNOPSIS
{{{
require 'creole'
html = Creole.creolize('== Creole text')
}}}
== BUGS
If you found a bug, please report it at the Creole project's tracker
on GitHub:
http://github.com/minad/creole/issues
== AUTHORS
* Lars Christensen (larsch)
* Daniel Mendler (minad)
== LICENSE
Creole is Copyright (c) 2008 - 2013 Lars Christensen, Daniel Mendler. It is free software, and
may be redistributed under the terms specified in the README file of
the Ruby distribution.


@@ -0,0 +1,52 @@
__global__ void scalarProdGPU(
float *d_C,
float *d_A,
float *d_B,
int vectorN,
int elementN
)
{
//Accumulators cache
__shared__ float accumResult[ACCUM_N];
////////////////////////////////////////////////////////////////////////////
// Cycle through every pair of vectors,
// taking into account that vector counts can be different
// from total number of thread blocks
////////////////////////////////////////////////////////////////////////////
for (int vec = blockIdx.x; vec < vectorN; vec += gridDim.x)
{
int vectorBase = IMUL(elementN, vec);
int vectorEnd = vectorBase + elementN;
////////////////////////////////////////////////////////////////////////
// Each accumulator cycles through vectors with
// stride equal to the total number of accumulators ACCUM_N
// At this stage ACCUM_N is only preferred to be a multiple of warp size
// to meet memory coalescing alignment constraints.
////////////////////////////////////////////////////////////////////////
for (int iAccum = threadIdx.x; iAccum < ACCUM_N; iAccum += blockDim.x)
{
float sum = 0;
for (int pos = vectorBase + iAccum; pos < vectorEnd; pos += ACCUM_N)
sum += d_A[pos] * d_B[pos];
accumResult[iAccum] = sum;
}
////////////////////////////////////////////////////////////////////////
// Perform tree-like reduction of accumulators' results.
// ACCUM_N has to be power of two at this stage
////////////////////////////////////////////////////////////////////////
for (int stride = ACCUM_N / 2; stride > 0; stride >>= 1)
{
__syncthreads();
for (int iAccum = threadIdx.x; iAccum < stride; iAccum += blockDim.x)
accumResult[iAccum] += accumResult[stride + iAccum];
}
if (threadIdx.x == 0) d_C[vec] = accumResult[0];
}
}

46
samples/Cuda/vectorAdd.cu Normal file

@@ -0,0 +1,46 @@
#include <stdio.h>
#include <cuda_runtime.h>
/**
* CUDA Kernel Device code
*
* Computes the vector addition of A and B into C. The 3 vectors have the same
* number of elements numElements.
*/
__global__ void
vectorAdd(const float *A, const float *B, float *C, int numElements)
{
int i = blockDim.x * blockIdx.x + threadIdx.x;
if (i < numElements)
{
C[i] = A[i] + B[i];
}
}
/**
* Host main routine
*/
int
main(void)
{
// Error code to check return values for CUDA calls
cudaError_t err = cudaSuccess;
// Launch the Vector Add CUDA Kernel
int threadsPerBlock = 256;
int blocksPerGrid =(numElements + threadsPerBlock - 1) / threadsPerBlock;
vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, numElements);
err = cudaGetLastError();
if (err != cudaSuccess)
{
fprintf(stderr, "Failed to launch vectorAdd kernel (error code %s)!\n", cudaGetErrorString(err));
exit(EXIT_FAILURE);
}
// Reset the device and exit
err = cudaDeviceReset();
return 0;
}

87
samples/DM/example.dm Normal file

@@ -0,0 +1,87 @@
// This is a single line comment.
/*
This is a multi-line comment
*/
// Pre-processor keywords
#define PI 3.1415
#if PI == 4
#define G 5
#elif PI == 3
#define I 6
#else
#define K 7
#endif
var/GlobalCounter = 0
var/const/CONST_VARIABLE = 2
var/list/MyList = list("anything", 1, new /datum/entity)
var/list/EmptyList[99] // creates a list of 99 null entries
var/list/NullList = null
/*
Entity Class
*/
/datum/entity
var/name = "Entity"
var/number = 0
/datum/entity/proc/myFunction()
world.log << "Entity has called myFunction"
/datum/entity/New()
number = GlobalCounter++
/*
Unit Class, Extends from Entity
*/
/datum/entity/unit
name = "Unit"
/datum/entity/unit/New()
..() // calls the parent's proc; equal to super() and base() in other languages
number = rand(1, 99)
/datum/entity/unit/myFunction()
world.log << "Unit has overriden and called myFunction"
// Global Function
/proc/ReverseList(var/list/input)
var/list/output = list()
for(var/i = input.len; i >= 1; i--) // IMPORTANT: List Arrays count from 1.
output += input[i] // "+= x" is ".Add(x)"
return output
// Bitflags
/proc/DoStuff()
var/bitflag = 0
bitflag |= 8
return bitflag
/proc/DoOtherStuff()
var/bitflag = 65535 // 16 bits is the maximum amount
bitflag &= ~8
return bitflag
// Logic
/proc/DoNothing()
var/pi = PI
if(pi == 4)
world.log << "PI is 4"
else if(pi == CONST_VARIABLE)
world.log << "PI is [CONST_VARIABLE]!"
else
world.log << "PI is approximety [pi]"
#undef PI // Undefine PI

42
samples/ECL/sample.ecl Normal file

@@ -0,0 +1,42 @@
/*
* Multi-line comment
*/
#option ('slidingJoins', true);
namesRecord :=
RECORD
string20 surname;
string10 forename;
integer2 age;
integer2 dadAge;
integer2 mumAge;
END;
namesRecord2 :=
record
string10 extra;
namesRecord;
end;
namesTable := dataset('x',namesRecord,FLAT);
namesTable2 := dataset('y',namesRecord2,FLAT);
integer2 aveAgeL(namesRecord l) := (l.dadAge+l.mumAge)/2;
integer2 aveAgeR(namesRecord2 r) := (r.dadAge+r.mumAge)/2;
// Standard join on a function of left and right
output(join(namesTable, namesTable2, aveAgeL(left) = aveAgeR(right)));
//Several simple examples of sliding join syntax
output(join(namesTable, namesTable2, left.age >= right.age - 10 and left.age <= right.age +10));
output(join(namesTable, namesTable2, left.age between right.age - 10 and right.age +10));
output(join(namesTable, namesTable2, left.age between right.age + 10 and right.age +30));
output(join(namesTable, namesTable2, left.age between (right.age + 20) - 10 and (right.age +20) + 10));
output(join(namesTable, namesTable2, aveAgeL(left) between aveAgeR(right)+10 and aveAgeR(right)+40));
//Same, but on strings. Also includes age to ensure sort is done by non-sliding before sliding.
output(join(namesTable, namesTable2, left.surname between right.surname[1..10]+'AAAAAAAAAA' and right.surname[1..10]+'ZZZZZZZZZZ' and left.age=right.age));
output(join(namesTable, namesTable2, left.surname between right.surname[1..10]+'AAAAAAAAAA' and right.surname[1..10]+'ZZZZZZZZZZ' and left.age=right.age,all));
//This should not generate a self join
output(join(namesTable, namesTable, left.age between right.age - 10 and right.age +10));

127
samples/Elm/Basic.elm Normal file

@@ -0,0 +1,127 @@
import List (intercalate,intersperse)
import Website.Skeleton
import Website.ColorScheme
addFolder folder lst =
let add (x,y) = (x, folder ++ y ++ ".elm") in
let f (n,xs) = (n, map add xs) in
map f lst
elements = addFolder "Elements/"
[ ("Primitives",
[ ("Text" , "HelloWorld")
, ("Images", "Image")
, ("Fitted Images", "FittedImage")
, ("Videos", "Video")
, ("Markdown", "Markdown")
])
, ("Formatting",
[ ("Size" , "Size")
, ("Opacity" , "Opacity")
, ("Text" , "Text")
, ("Typeface", "Typeface")
])
, ("Layout",
[ ("Simple Flow", "FlowDown1a")
, ("Flow Down" , "FlowDown2")
, ("Layers" , "Layers")
, ("Positioning", "Position")
, ("Spacers" , "Spacer")
])
, ("Collage", [ ("Lines" , "Lines")
, ("Shapes" , "Shapes")
, ("Sprites" , "Sprite")
, ("Elements" , "ToForm")
, ("Colors" , "Color")
, ("Textures" , "Texture")
, ("Transforms", "Transforms")
])
]
functional = addFolder "Functional/"
[ ("Recursion",
[ ("Factorial" , "Factorial")
, ("List Length", "Length")
, ("Zip" , "Zip")
, ("Quick Sort" , "QuickSort")
])
, ("Functions",
[ ("Anonymous Functions", "Anonymous")
, ("Application" , "Application")
, ("Composition" , "Composition")
, ("Infix Operators" , "Infix")
])
, ("Higher-Order",
[ ("Map" , "Map")
, ("Fold" , "Sum")
, ("Filter" , "Filter")
, ("ZipWith", "ZipWith")
])
, ("Data Types",
[ ("Maybe", "Maybe")
, ("Boolean Expressions", "BooleanExpressions")
, ("Tree", "Tree")
])
]
reactive = addFolder "Reactive/"
[ ("Mouse", [ ("Position", "Position")
, ("Presses" , "IsDown")
, ("Clicks" , "CountClicks")
, ("Position+Image", "ResizeYogi")
, ("Position+Collage" , "Transforms")
-- , ("Hover" , "IsAbove")
])
,("Keyboard",[ ("Keys Down" , "KeysDown")
, ("Key Presses", "CharPressed")
])
, ("Window", [ ("Size", "ResizePaint")
, ("Centering", "Centering")
])
, ("Time", [ ("Before and After", "Between")
, ("Every" , "Every")
, ("Clock" , "Clock")
])
, ("Input", [ ("Text Fields", "TextField")
, ("Passwords" , "Password")
, ("Check Boxes", "CheckBox")
, ("String Drop Down", "StringDropDown")
, ("Drop Down", "DropDown")
])
, ("Random", [ ("Randomize", "Randomize") ])
, ("HTTP", [ ("Zip Codes", "ZipCodes") ])
, ("Filters",[ ("Sample", "SampleOn")
, ("Keep If", "KeepIf")
, ("Drop Repeats", "DropRepeats")
])
]
example (name, loc) = Text.link ("/edit/examples/" ++ loc) (toText name)
toLinks (title, links) =
flow right [ width 130 (text $ toText " " ++ italic (toText title))
, text (intercalate (bold . Text.color accent4 $ toText " &middot; ") $ map example links)
]
insertSpace lst = case lst of { x:xs -> x : spacer 1 5 : xs ; [] -> [] }
subsection w (name,info) =
flow down . insertSpace . intersperse (spacer 1 1) . map (width w) $
(text . bold $ toText name) : map toLinks info
words = [markdown|
### Basic Examples
Each example listed below focuses on a single function or concept.
These examples demonstrate all of the basic building blocks of Elm.
|]
content w =
words : map (subsection w) [ ("Display",elements), ("React",reactive), ("Compute",functional) ]
exampleSets w = flow down . map (width w) . intersperse (plainText " ") $ content w
main = lift (skeleton exampleSets) Window.width

32
samples/Elm/QuickSort.elm Normal file

@@ -0,0 +1,32 @@
main = asText (qsort [3,9,1,8,5,4,7])
qsort lst =
case lst of
x:xs -> qsort (filter ((>=)x) xs) ++ [x] ++ qsort (filter ((<)x) xs)
[] -> []
{---------------------
QuickSort works as follows:
- Choose a pivot element which will be placed in the "middle" of the sorted list.
In our case we are choosing the first element as the pivot.
- Gather all of the elements less than the pivot (the first filter).
We know that these must come before our pivot element in the sorted list.
Note: ((>=)x) === (\y -> (>=) x y) === (\y -> x >= y)
- Gather all of the elements greater than the pivot (the second filter).
We know that these must come after our pivot element in the sorted list.
- Run `qsort` on the lesser elements, producing a sorted list that contains
only elements less than the pivot. Put these before the pivot.
- Run `qsort` on the greater elements, producing a sorted list. Put these
after the pivot.
Note that choosing a bad pivot can have bad effects. Take a sorted list with
N elements. The pivot will always be the lowest member, meaning that it does
not divide the list very evenly. The list of lessers has 0 elements
and the list of greaters has N-1 elements. This means qsort will be called
N times, each call looking through the entire list. This means, in the worst
case, QuickSort will make N^2 comparisons.
----------------------}

91
samples/Elm/Tree.elm Normal file

@@ -0,0 +1,91 @@
{-----------------------------------------------------------------
Overview: A "Tree" represents a binary tree. A "Node" in a binary
tree always has two children. A tree can also be "Empty". Below
I have defined "Tree" and a number of useful functions.
This example also includes some challenge problems :)
-----------------------------------------------------------------}
data Tree a = Node a (Tree a) (Tree a) | Empty
empty = Empty
singleton v = Node v Empty Empty
insert x tree =
case tree of
Empty -> singleton x
Node y left right ->
if x == y then tree else
if x < y then Node y (insert x left) right
else Node y left (insert x right)
fromList xs = foldl insert empty xs
depth tree =
case tree of
Node v left right -> 1 + max (depth left) (depth right)
Empty -> 0
map f tree =
case tree of
Node v left right -> Node (f v) (map f left) (map f right)
Empty -> Empty
t1 = fromList [1,2,3]
t2 = fromList [2,1,3]
main = flow down [ display "depth" depth t1
, display "depth" depth t2
, display "map ((+)1)" (map ((+)1)) t2
]
display name f v =
text . monospace . toText $
concat [ show (f v), " &lArr; ", name, " ", show v ]
{-----------------------------------------------------------------
Exercises:
(1) Sum all of the elements of a tree.
sum :: Tree Number -> Number
(2) Flatten a tree into a list.
flatten :: Tree a -> [a]
(3) Check to see if an element is in a given tree.
isElement :: a -> Tree a -> Bool
(4) Write a general fold function that acts on trees. The fold
function does not need to guarantee a particular order of
traversal.
fold :: (a -> b -> b) -> b -> Tree a -> b
(5) Use "fold" to do exercises 1-3 in one line each. The best
readable versions I have come up have the following length
in characters including spaces and function name:
sum: 16
flatten: 21
isElement: 46
See if you can match or beat me! Don't forget about currying
and partial application!
(6) Can "fold" be used to implement "map" or "depth"?
(7) Try experimenting with different ways to traverse a
tree: pre-order, in-order, post-order, depth-first, etc.
More info at: http://en.wikipedia.org/wiki/Tree_traversal
-----------------------------------------------------------------}


@@ -0,0 +1,473 @@
;; ess-julia.el --- ESS julia mode and inferior interaction
;;
;; Copyright (C) 2012 Vitalie Spinu.
;;
;; Filename: ess-julia.el
;; Author: Vitalie Spinu (based on julia-mode.el from julia-lang project)
;; Maintainer: Vitalie Spinu
;; Created: 02-04-2012 (ESS 12.03)
;; Keywords: ESS, julia
;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;
;; This file is *NOT* part of GNU Emacs.
;; This file is part of ESS
;;
;; This program is free software; you can redistribute it and/or
;; modify it under the terms of the GNU General Public License as
;; published by the Free Software Foundation; either version 3, any later version.
;;
;; This program is distributed in the hope that it will be useful, but WITHOUT
;; ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
;; FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
;; details.
;;
;; You should have received a copy of the GNU General Public License along with
;; this program; see the file COPYING. If not, write to the Free Software
;; Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
;; USA.
;;
;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;
;;; Commentary:
;; customise inferior-julia-program-name to point to your julia-release-basic
;; and start the inferior with M-x julia.
;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;
(require 'compile); for compilation-* below
;;; Code:
(defvar julia-mode-hook nil)
(add-to-list 'auto-mode-alist '("\\.jl\\'" . julia-mode))
(defvar julia-syntax-table
(let ((table (make-syntax-table)))
(modify-syntax-entry ?_ "_" table) ; underscores in words
(modify-syntax-entry ?@ "_" table)
(modify-syntax-entry ?. "_" table)
(modify-syntax-entry ?# "<" table) ; # single-line comment start
(modify-syntax-entry ?\n ">" table) ; \n single-line comment end
(modify-syntax-entry ?\{ "(} " table)
(modify-syntax-entry ?\} "){ " table)
(modify-syntax-entry ?\[ "(] " table)
(modify-syntax-entry ?\] ")[ " table)
(modify-syntax-entry ?\( "() " table)
(modify-syntax-entry ?\) ")( " table)
;(modify-syntax-entry ?\\ "." table) ; \ is an operator outside quotes
(modify-syntax-entry ?' "." table) ; character quote or transpose
(modify-syntax-entry ?\" "\"" table)
(modify-syntax-entry ?` "\"" table)
;; (modify-syntax-entry ?\" "." table)
(modify-syntax-entry ?? "." table)
(modify-syntax-entry ?$ "." table)
(modify-syntax-entry ?& "." table)
(modify-syntax-entry ?* "." table)
(modify-syntax-entry ?+ "." table)
(modify-syntax-entry ?- "." table)
(modify-syntax-entry ?< "." table)
(modify-syntax-entry ?> "." table)
(modify-syntax-entry ?= "." table)
(modify-syntax-entry ?% "." table)
table)
"Syntax table for julia-mode")
;; syntax table that holds within strings
(defvar julia-mode-string-syntax-table
(let ((table (make-syntax-table)))
table)
"Syntax table for julia-mode")
;; disable " inside char quote
(defvar julia-mode-char-syntax-table
(let ((table (make-syntax-table)))
(modify-syntax-entry ?\" "." table)
table)
"Syntax table for julia-mode")
;; not used
;; (defconst julia-string-regex
;; "\"[^\"]*?\\(\\(\\\\\\\\\\)*\\\\\"[^\"]*?\\)*\"")
(defconst julia-char-regex
"\\(\\s(\\|\\s-\\|-\\|[,%=<>\\+*/?&|$!\\^~\\\\;:]\\|^\\)\\('\\(\\([^']*?[^\\\\]\\)\\|\\(\\\\\\\\\\)\\)'\\)")
(defconst julia-unquote-regex
"\\(\\s(\\|\\s-\\|-\\|[,%=<>\\+*/?&|!\\^~\\\\;:]\\|^\\)\\($[a-zA-Z0-9_]+\\)")
(defconst julia-forloop-in-regex
"for +[^
]+ +.*\\(in\\)\\(\\s-\\|$\\)+")
(defconst ess-subset-regexp
"\\[[0-9:, ]*\\]" )
(defconst julia-font-lock-defaults
(list '("\\<\\(\\|Uint\\(8\\|16\\|32\\|64\\)\\|Int\\(8\\|16\\|32\\|64\\)\\|Integer\\|Float\\|Float32\\|Float64\\|Complex128\\|Complex64\\|ComplexNum\\|Bool\\|Char\\|Number\\|Scalar\\|Real\\|Int\\|Uint\\|Array\\|DArray\\|AbstractArray\\|AbstractVector\\|AbstractMatrix\\|SubArray\\|StridedArray\\|StridedVector\\|StridedMatrix\\|VecOrMat\\|StridedVecOrMat\\|Range\\|Range1\\|SparseMatrixCSC\\|Tuple\\|NTuple\\|Buffer\\|Size\\|Index\\|Symbol\\|Function\\|Vector\\|Matrix\\|Union\\|Type\\|Any\\|Complex\\|None\\|String\\|Ptr\\|Void\\|Exception\\|PtrInt\\|Long\\|Ulong\\)\\>" .
font-lock-type-face)
(cons
(concat "\\<\\("
(mapconcat
'identity
'("if" "else" "elseif" "while" "for" "begin" "end" "quote"
"try" "catch" "return" "local" "abstract" "function" "macro" "ccall"
"typealias" "break" "continue" "type" "global" "@\\w+"
"module" "import" "export" "const" "let" "bitstype" "using")
"\\|") "\\)\\>")
'font-lock-keyword-face)
'("\\<\\(true\\|false\\|C_NULL\\|Inf\\|NaN\\|Inf32\\|NaN32\\)\\>" . font-lock-constant-face)
(list julia-unquote-regex 2 'font-lock-constant-face)
(list julia-char-regex 2 'font-lock-string-face)
(list julia-forloop-in-regex 1 'font-lock-keyword-face)
;; (cons ess-subset-regexp 'font-lock-constant-face)
(cons "\\(\\sw+\\) ?(" '(1 font-lock-function-name-face keep))
;(list julia-string-regex 0 'font-lock-string-face)
))
(defconst julia-block-start-keywords
(list "if" "while" "for" "begin" "try" "function" "type" "let" "macro"
"quote"))
(defconst julia-block-other-keywords
(list "else" "elseif"))
(defconst julia-block-end-keywords
(list "end" "else" "elseif" "catch"))
(defun ess-inside-brackets-p (&optional pos)
(save-excursion
(let* ((pos (or pos (point)))
(beg (re-search-backward "\\[" (max (point-min) (- pos 1000)) t))
(end (re-search-forward "\\]" (min (point-max) (+ pos 1000)) t)))
(and beg end (> pos beg) (> end pos)))))
(defun julia-at-keyword (kw-list)
; not a keyword if used as a field name, X.word, or quoted, :word
(and (or (= (point) 1)
(and (not (equal (char-before (point)) ?.))
(not (equal (char-before (point)) ?:))))
(not (ess-inside-string-or-comment-p (point)))
(not (ess-inside-brackets-p (point)))
(member (current-word) kw-list)))
; get the position of the last open block
(defun julia-last-open-block-pos (min)
(let ((count 0))
(while (not (or (> count 0) (<= (point) min)))
(backward-word 1)
(setq count
(cond ((julia-at-keyword julia-block-start-keywords)
(+ count 1))
((and (equal (current-word) "end")
(not (ess-inside-comment-p)) (not (ess-inside-brackets-p)))
(- count 1))
(t count))))
(if (> count 0)
(point)
nil)))
; get indent for last open block
(defun julia-last-open-block (min)
(let ((pos (julia-last-open-block-pos min)))
(and pos
(progn
(goto-char pos)
(+ julia-basic-offset (current-indentation))))))
; return indent implied by a special form opening on the previous line, if any
(defun julia-form-indent ()
(forward-line -1)
(end-of-line)
(backward-sexp)
(if (julia-at-keyword julia-block-other-keywords)
(+ julia-basic-offset (current-indentation))
(if (char-equal (char-after (point)) ?\()
(progn
(backward-word 1)
(let ((cur (current-indentation)))
(if (julia-at-keyword julia-block-start-keywords)
(+ julia-basic-offset cur)
nil)))
nil)))
(defun julia-paren-indent ()
(let* ((p (parse-partial-sexp (save-excursion
;; only indent by paren if the last open
;; paren is closer than the last open
;; block
(or (julia-last-open-block-pos (point-min))
(point-min)))
(progn (beginning-of-line)
(point))))
(pos (cadr p)))
(if (or (= 0 (car p)) (null pos))
nil
(progn (goto-char pos) (+ 1 (current-column))))))
; (forward-line -1)
; (end-of-line)
; (let ((pos (condition-case nil
; (scan-lists (point) -1 1)
; (error nil))))
; (if pos
; (progn (goto-char pos) (+ 1 (current-column)))
; nil)))
(defun julia-indent-line ()
"Indent current line of julia code"
(interactive)
; (save-excursion
(end-of-line)
(indent-line-to
(or (and (ess-inside-string-p (point-at-bol)) 0)
(save-excursion (ignore-errors (julia-form-indent)))
(save-excursion (ignore-errors (julia-paren-indent)))
;; previous line ends in =
(save-excursion
(beginning-of-line)
(skip-chars-backward " \t\n")
(when (eql (char-before) ?=)
(+ julia-basic-offset (current-indentation))))
(save-excursion
(let ((endtok (progn
(beginning-of-line)
(forward-to-indentation 0)
(julia-at-keyword julia-block-end-keywords))))
(ignore-errors (+ (julia-last-open-block (point-min))
(if endtok (- julia-basic-offset) 0)))))
;; take same indentation as previous line
(save-excursion (forward-line -1)
(current-indentation))
0))
(when (julia-at-keyword julia-block-end-keywords)
(forward-word 1)))
(defvar julia-editing-alist
'((paragraph-start . (concat "\\s-*$\\|" page-delimiter))
(paragraph-separate . (concat "\\s-*$\\|" page-delimiter))
(paragraph-ignore-fill-prefix . t)
(require-final-newline . t)
(comment-start . "# ")
(comment-add . 1)
(comment-start-skip . "#+\\s-*")
(comment-column . 40)
;;(comment-indent-function . 'S-comment-indent)
;;(ess-comment-indent . 'S-comment-indent)
;; (ess-indent-line . 'S-indent-line)
;;(ess-calculate-indent . 'ess-calculate-indent)
(ess-indent-line-function . 'julia-indent-line)
(indent-line-function . 'julia-indent-line)
(parse-sexp-ignore-comments . t)
(ess-style . ess-default-style) ;; ignored
(ess-local-process-name . nil)
;;(ess-keep-dump-files . 'ask)
(ess-mode-syntax-table . julia-syntax-table)
;; For Changelog add, require ' ' before <- : "attr<-" is a function name :
;; (add-log-current-defun-header-regexp . "^\\(.+\\)\\s-+=[ \t\n]*function")
(add-log-current-defun-header-regexp . "^.*function[ \t]*\\([^ \t(]*\\)[ \t]*(")
(font-lock-defaults . '(julia-font-lock-defaults
nil nil ((?\_ . "w"))))
)
"General options for julia source files.")
(autoload 'inferior-ess "ess-inf" "Run an ESS process.")
(autoload 'ess-mode "ess-mode" "Edit an ESS process.")
(defun julia-send-string-function (process string visibly)
(let ((file (concat temporary-file-directory "julia_eval_region.jl")))
(with-temp-file file
(insert string))
(process-send-string process (format ess-load-command file))))
(defun julia-get-help-topics (&optional proc)
(ess-get-words-from-vector "ESS.all_help_topics()\n"))
;; (ess-command com)))
(defvar julia-help-command "help(\"%s\")\n")
(defvar ess-julia-error-regexp-alist '(julia-in julia-at)
"List of symbols which are looked up in `compilation-error-regexp-alist-alist'.")
(add-to-list 'compilation-error-regexp-alist-alist
'(julia-in "^\\s-*in [^ \t\n]* \\(at \\(.*\\):\\([0-9]+\\)\\)" 2 3 nil 2 1))
(add-to-list 'compilation-error-regexp-alist-alist
'(julia-at "^\\S-+\\s-+\\(at \\(.*\\):\\([0-9]+\\)\\)" 2 3 nil 2 1))
(defvar julia-customize-alist
'((comint-use-prompt-regexp . t)
(ess-eldoc-function . 'ess-julia-eldoc-function)
(inferior-ess-primary-prompt . "a> ") ;; from julia>
(inferior-ess-secondary-prompt . nil)
(inferior-ess-prompt . "\\w*> ")
(ess-local-customize-alist . 'julia-customize-alist)
(inferior-ess-program . inferior-julia-program-name)
(inferior-ess-font-lock-defaults . julia-font-lock-defaults)
(ess-get-help-topics-function . 'julia-get-help-topics)
(ess-help-web-search-command . "http://docs.julialang.org/en/latest/search/?q=%s")
(ess-load-command . "include(\"%s\")\n")
(ess-funargs-command . "ESS.fun_args(\"%s\")\n")
(ess-dump-error-re . "in \\w* at \\(.*\\):[0-9]+")
(ess-error-regexp . "\\(^\\s-*at\\s-*\\(?3:.*\\):\\(?2:[0-9]+\\)\\)")
(ess-error-regexp-alist . ess-julia-error-regexp-alist)
(ess-send-string-function . nil);'julia-send-string-function)
(ess-imenu-generic-expression . julia-imenu-generic-expression)
;; (inferior-ess-objects-command . inferior-R-objects-command)
;; (inferior-ess-search-list-command . "search()\n")
(inferior-ess-help-command . julia-help-command)
;; (inferior-ess-help-command . "help(\"%s\")\n")
(ess-language . "julia")
(ess-dialect . "julia")
(ess-suffix . "jl")
(ess-dump-filename-template . (ess-replace-regexp-in-string
"S$" ess-suffix ; in the one from custom:
ess-dump-filename-template-proto))
(ess-mode-syntax-table . julia-syntax-table)
(ess-mode-editing-alist . julia-editing-alist)
(ess-change-sp-regexp . nil );ess-R-change-sp-regexp)
(ess-help-sec-regex . ess-help-R-sec-regex)
(ess-help-sec-keys-alist . ess-help-R-sec-keys-alist)
(ess-loop-timeout . ess-S-loop-timeout);fixme: dialect spec.
(ess-cmd-delay . ess-R-cmd-delay)
(ess-function-pattern . ess-R-function-pattern)
(ess-object-name-db-file . "ess-r-namedb.el" )
(ess-smart-operators . ess-R-smart-operators)
(inferior-ess-help-filetype . nil)
(inferior-ess-exit-command . "exit()\n")
;;harmful for shell-mode's C-a: -- but "necessary" for ESS-help?
(inferior-ess-start-file . nil) ;; "~/.ess-R"
(inferior-ess-start-args . "")
(inferior-ess-language-start . nil)
(ess-STERM . "iESS")
(ess-editor . R-editor)
(ess-pager . R-pager)
)
"Variables to customize for Julia -- set up later than emacs initialization.")
(defvar ess-julia-versions '("julia")
"List of partial strings for versions of Julia to access within ESS.
Each string specifies the start of a filename. If a filename
beginning with one of these strings is found on `exec-path', a M-x
command for that version of Julia is made available. ")
(defcustom inferior-julia-args ""
"String of arguments (see 'julia --help') used when starting julia."
;; These arguments are currently not passed to other versions of julia that have
;; been created using the variable `ess-r-versions'."
:group 'ess-julia
:type 'string)
;;;###autoload
(defun julia-mode (&optional proc-name)
"Major mode for editing julia source. See `ess-mode' for more help."
(interactive "P")
;; (setq ess-customize-alist julia-customize-alist)
(ess-mode julia-customize-alist proc-name)
;; for emacs < 24
;; (add-hook 'comint-dynamic-complete-functions 'ess-complete-object-name nil 'local)
;; for emacs >= 24
;; (remove-hook 'completion-at-point-functions 'ess-filename-completion 'local) ;; should be first
;; (add-hook 'completion-at-point-functions 'ess-object-completion nil 'local)
;; (add-hook 'completion-at-point-functions 'ess-filename-completion nil 'local)
(if (fboundp 'ess-add-toolbar) (ess-add-toolbar))
(set (make-local-variable 'end-of-defun-function) 'ess-end-of-function)
;; (local-set-key "\t" 'julia-indent-line) ;; temp workaround
;; (set (make-local-variable 'indent-line-function) 'julia-indent-line)
(set (make-local-variable 'julia-basic-offset) 4)
(setq imenu-generic-expression julia-imenu-generic-expression)
(imenu-add-to-menubar "Imenu-jl")
(run-hooks 'julia-mode-hook))
(defvar ess-julia-post-run-hook nil
"Functions run in process buffer after the initialization of
julia process.")
;;;###autoload
(defun julia (&optional start-args)
"Call 'julia',
Optional prefix (C-u) allows to set command line arguments, such as
--load=<file>. This should be OS agnostic.
If you have certain command line arguments that should always be passed
to julia, put them in the variable `inferior-julia-args'."
(interactive "P")
;; get settings, notably inferior-julia-program-name :
(if (null inferior-julia-program-name)
(error "'inferior-julia-program-name' does not point to 'julia-release-basic' executable")
(setq ess-customize-alist julia-customize-alist)
(ess-write-to-dribble-buffer ;; for debugging only
(format
"\n(julia): ess-dialect=%s, buf=%s, start-arg=%s\n current-prefix-arg=%s\n"
ess-dialect (current-buffer) start-args current-prefix-arg))
(let* ((jl-start-args
(concat inferior-julia-args " " ; add space just in case
(if start-args
(read-string
(concat "Starting Args"
(if inferior-julia-args
(concat " [other than '" inferior-julia-args "']"))
" ? "))
nil))))
(inferior-ess jl-start-args) ;; -> .. (ess-multi ...) -> .. (inferior-ess-mode) ..
(ess--tb-start)
(set (make-local-variable 'julia-basic-offset) 4)
;; remove ` from julia's logo
(goto-char (point-min))
(while (re-search-forward "`" nil t)
(replace-match "'"))
(goto-char (point-max))
(ess--inject-code-from-file (format "%sess-julia.jl" ess-etc-directory))
(with-ess-process-buffer nil
(run-mode-hooks 'ess-julia-post-run-hook))
)))
;;; ELDOC
(defun ess-julia-eldoc-function ()
"Return the doc string, or nil.
If an ESS process is not associated with the buffer, do not try
to look up any doc strings."
(interactive)
(when (and (ess-process-live-p)
(not (ess-process-get 'busy)))
(let ((funname (or (and ess-eldoc-show-on-symbol ;; aggressive completion
(symbol-at-point))
(car (ess--funname.start)))))
(when funname
(let* ((args (copy-sequence (nth 2 (ess-function-arguments funname))))
(W (- (window-width (minibuffer-window)) (+ 4 (length funname))))
(doc (concat (propertize funname 'face font-lock-function-name-face) ": ")))
(when args
(setq args (sort args (lambda (s1 s2)
(< (length s1) (length s2)))))
(setq doc (concat doc (pop args)))
(while (and args (< (length doc) W))
(setq doc (concat doc " "
(pop args))))
(when (and args (< (length doc) W))
(setq doc (concat doc " {--}"))))
doc)))))
;;; IMENU
(defvar julia-imenu-generic-expression
;; don't use syntax classes, screws egrep
'(("Function (_)" "[ \t]*function[ \t]+\\(_[^ \t\n]*\\)" 1)
("Function" "[ \t]*function[ \t]+\\([^_][^\t\n]*\\)" 1)
("Const" "[ \t]*const \\([^ \t\n]*\\)" 1)
("Type" "^[ \t]*[a-zA-Z0-9_]*type[a-zA-Z0-9_]* \\([^ \t\n]*\\)" 1)
("Require" " *\\(\\brequire\\)(\\([^ \t\n)]*\\)" 2)
("Include" " *\\(\\binclude\\)(\\([^ \t\n)]*\\)" 2)
;; ("Classes" "^.*setClass(\\(.*\\)," 1)
;; ("Coercions" "^.*setAs(\\([^,]+,[^,]*\\)," 1) ; show from and to
;; ("Generics" "^.*setGeneric(\\([^,]*\\)," 1)
;; ("Methods" "^.*set\\(Group\\|Replace\\)?Method(\"\\(.+\\)\"," 2)
;; ;;[ ]*\\(signature=\\)?(\\(.*,?\\)*\\)," 1)
;; ;;
;; ;;("Other" "^\\(.+\\)\\s-*<-[ \t\n]*[^\\(function\\|read\\|.*data\.frame\\)]" 1)
;; ("Package" "^.*\\(library\\|require\\)(\\(.*\\)," 2)
;; ("Data" "^\\(.+\\)\\s-*<-[ \t\n]*\\(read\\|.*data\.frame\\).*(" 1)))
))


@@ -0,0 +1,21 @@
#!/usr/bin/env escript
%% -*- erlang -*-
%%! -smp enable -sname factorial -mnesia debug verbose
main([String]) ->
try
N = list_to_integer(String),
F = fac(N),
io:format("factorial ~w = ~w\n", [N,F])
catch
_:_ ->
usage()
end;
main(_) ->
usage().
usage() ->
io:format("usage: factorial integer\n"),
halt(1).
fac(0) -> 1;
fac(N) -> N * fac(N-1).

4
samples/Erlang/hello.escript Executable file

@@ -0,0 +1,4 @@
#!/usr/bin/env escript
-export([main/1]).
main([]) -> io:format("Hello, World!~n").


@@ -0,0 +1,136 @@
%% For each header file, it scans thru all records and create helper functions
%% Helper functions are:
%% setters, getters, fields, fields_atom, type
-module(record_helper).
-export([make/1, make/2]).
make(HeaderFiles) ->
make([ atom_to_list(X) || X <- HeaderFiles ], ".").
%% .hrl file, relative to current dir
make(HeaderFiles, OutDir) ->
ModuleName = "record_utils",
HeaderComment = "%% This is auto generated file. Please don't edit it\n\n",
ModuleDeclaration = "-module(" ++ ModuleName ++ ").\n"
++ "-author(\"trung@mdkt.org\").\n"
++ "-compile(export_all).\n"
++ [ "-include(\"" ++ X ++ "\").\n" || X <- HeaderFiles ]
++ "\n",
Src = format_src(lists:sort(lists:flatten([read(X) || X <- HeaderFiles] ++ [generate_type_default_function()]))),
file:write_file(OutDir++"/" ++ ModuleName ++ ".erl", list_to_binary([HeaderComment, ModuleDeclaration, Src])).
read(HeaderFile) ->
try epp:parse_file(HeaderFile,[],[]) of
{ok, Tree} ->
parse(Tree);
{error, Error} ->
{error, {"Error parsing header file", HeaderFile, Error}}
catch
_:Error ->
{catched_error, {"Error parsing header file", HeaderFile, Error}}
end.
format_src([{_, _, _, Src}|T]) when length(T) == 0 ->
Src ++ ".\n\n";
format_src([{Type, _, _, Src}|[{Type, A, B, NSrc}|T]]) ->
Src ++ ";\n\n" ++ format_src([{Type, A, B, NSrc}|T]);
format_src([{_Type, _, _, Src}|[{Type1, A, B, NSrc}|T]]) ->
Src ++ ".\n\n" ++ format_src([{Type1, A, B, NSrc}|T]);
format_src([{_, _, _, Src}|T]) when length(T) > 0 ->
Src ++ ";\n\n" ++ format_src(T).
parse(Tree) ->
[ parse_record(X) || X <- Tree ].
parse_record({attribute, _, record, RecordInfo}) ->
{RecordName, RecordFields} = RecordInfo,
if
length(RecordFields) == 1 ->
lists:flatten([ generate_setter_getter_function(RecordName, X) || X <- RecordFields ]
++ [generate_type_function(RecordName)]);
true ->
lists:flatten([generate_fields_function(RecordName, RecordFields)]
++ [generate_fields_atom_function(RecordName, RecordFields)]
++ [ generate_setter_getter_function(RecordName, X) || X <- RecordFields ]
++ [generate_type_function(RecordName)])
end;
parse_record(_) -> [].
parse_field_name({record_field, _, {atom, _, FieldName}}) ->
{field, "\"" ++ atom_to_list(FieldName) ++ "\""};
parse_field_name({record_field, _, {atom, _, _FieldName}, {record, _, ParentRecordName, _}}) ->
{parent_field, "fields(" ++ atom_to_list(ParentRecordName) ++ ")"};
parse_field_name({record_field, _, {atom, _, FieldName}, _}) ->
{field, "\"" ++ atom_to_list(FieldName) ++ "\""}.
parse_field_name_atom({record_field, _, {atom, _, FieldName}}) ->
atom_to_list(FieldName);
parse_field_name_atom({record_field, _, {atom, _, _FieldName}, {record, _, ParentRecordName, _}}) ->
"fields_atom(" ++ atom_to_list(ParentRecordName) ++ ")";
parse_field_name_atom({record_field, _, {atom, _, FieldName}, _}) ->
atom_to_list(FieldName).
concat([], _S) -> [];
concat([F|T], _S) when length(T) == 0 -> F;
concat([F|T], S) -> F ++ S ++ concat(T, S).
concat_ext([], _S) -> [];
concat_ext([F|T], S) -> F ++ S ++ concat_ext(T, S).
parse_field([], AccFields, AccParentFields) -> concat_ext(AccParentFields, " ++ ") ++ "[" ++ concat(AccFields, ", ") ++ "]";
%parse_field([F|T], AccFields, AccParentFields) when length(T) == 0 -> parse_field_name(F);
parse_field([F|T], AccFields, AccParentFields) ->
case parse_field_name(F) of
{field, Field} ->
parse_field(T, AccFields ++ [Field], AccParentFields);
{parent_field, PField} ->
parse_field(T, AccFields, AccParentFields ++ [PField])
end.
parse_field_atom([F|T]) when length(T) == 0 -> parse_field_name_atom(F);
parse_field_atom([F|T]) ->
parse_field_name_atom(F) ++ ", " ++ parse_field_atom(T).
generate_type_default_function() ->
{type, zzz, 99, "type(_) -> undefined"}.
generate_type_function(RecordName) ->
{type, RecordName, 0, "type(Obj) when is_record(Obj, " ++ atom_to_list(RecordName) ++ ") -> " ++ atom_to_list(RecordName)}.
generate_fields_function(RecordName, RecordFields) ->
Fields = parse_field(RecordFields, [], []),
{field, RecordName, 1, "fields(" ++ atom_to_list(RecordName) ++ ") -> \n\t" ++ Fields}.
generate_fields_atom_function(RecordName, RecordFields) ->
Fields = parse_field_atom(RecordFields),
{field_atom, RecordName, 1, "fields_atom(" ++ atom_to_list(RecordName) ++ ") -> \n\tlists:flatten([" ++ Fields ++ "])"}.
generate_setter_getter_function(RecordName, {record_field, _, {atom, _, FieldName}, {record, _, ParentRecordName, _}}) ->
to_setter_getter_function(atom_to_list(RecordName), atom_to_list(FieldName), atom_to_list(ParentRecordName));
generate_setter_getter_function(RecordName, {record_field, _, {atom, _, FieldName}, _}) ->
to_setter_getter_function(atom_to_list(RecordName), atom_to_list(FieldName));
generate_setter_getter_function(RecordName, {record_field, _, {atom, _, FieldName}}) ->
to_setter_getter_function(atom_to_list(RecordName), atom_to_list(FieldName)).
to_setter_getter_function(RecordName, FieldName) ->
[{setter, RecordName, 1, "set(Obj, " ++ FieldName ++ ", Value) when is_record(Obj, " ++ RecordName ++ ") -> \n"
++ "\tNewObj = Obj#" ++ RecordName ++ "{" ++ FieldName ++ " = Value},\n"
++ "\t{ok, NewObj, {" ++ FieldName ++ ", Value}}"},
{getter, RecordName, 1, "get(Obj, " ++ FieldName ++ ") when is_record(Obj, " ++ RecordName ++ ") -> \n"
++ "\t{ok, Obj#" ++ RecordName ++ "." ++ FieldName ++ "}"}
].
to_setter_getter_function(RecordName, FieldName, ParentRecordName) ->
[{setter, RecordName, 2, "set(Obj, " ++ FieldName ++ ", Value) when is_record(Obj, " ++ RecordName ++ ") and is_record(Value, " ++ ParentRecordName ++ ") -> \n"
++ "\tNewObj = Obj#" ++ RecordName ++ "{" ++ FieldName ++ " = Value},\n"
++ "\t{ok, NewObj, {" ++ FieldName ++ ", Value}};\n\n"
++ "set(Obj, ParentProperty, Value) when is_record(Obj, " ++ RecordName ++ ") and is_atom(ParentProperty) -> \n"
++ "\t{ok, NewParentObject, _} = set(Obj#" ++ RecordName ++ ".parent, ParentProperty, Value),\n"
++ "\tset(Obj, parent, NewParentObject)"},
{getter, RecordName, 2, "get(Obj, " ++ FieldName ++ ") when is_record(Obj, " ++ RecordName ++ ") -> \n"
++ "\t{ok, Obj#" ++ RecordName ++ "." ++ FieldName ++ "};\n\n"
++ "get(Obj, ParentProperty) when is_record(Obj, " ++ RecordName ++ ") and is_atom(ParentProperty) -> \n"
++ "\tget(Obj#" ++ RecordName ++ ".parent, ParentProperty)"}
].

View File

@@ -0,0 +1,100 @@
%% This is auto generated file. Please don't edit it
-module(record_utils).
-compile(export_all).
-include("messages.hrl").
fields(abstract_message) ->
["clientId", "destination", "messageId", "timestamp", "timeToLive", "headers", "body"];
fields(async_message) ->
fields(abstract_message) ++ ["correlationId", "correlationIdBytes"].
fields_atom(abstract_message) ->
lists:flatten([clientId, destination, messageId, timestamp, timeToLive, headers, body]);
fields_atom(async_message) ->
lists:flatten([fields_atom(abstract_message), correlationId, correlationIdBytes]).
get(Obj, body) when is_record(Obj, abstract_message) ->
{ok, Obj#abstract_message.body};
get(Obj, clientId) when is_record(Obj, abstract_message) ->
{ok, Obj#abstract_message.clientId};
get(Obj, destination) when is_record(Obj, abstract_message) ->
{ok, Obj#abstract_message.destination};
get(Obj, headers) when is_record(Obj, abstract_message) ->
{ok, Obj#abstract_message.headers};
get(Obj, messageId) when is_record(Obj, abstract_message) ->
{ok, Obj#abstract_message.messageId};
get(Obj, timeToLive) when is_record(Obj, abstract_message) ->
{ok, Obj#abstract_message.timeToLive};
get(Obj, timestamp) when is_record(Obj, abstract_message) ->
{ok, Obj#abstract_message.timestamp};
get(Obj, correlationId) when is_record(Obj, async_message) ->
{ok, Obj#async_message.correlationId};
get(Obj, correlationIdBytes) when is_record(Obj, async_message) ->
{ok, Obj#async_message.correlationIdBytes};
get(Obj, parent) when is_record(Obj, async_message) ->
{ok, Obj#async_message.parent};
get(Obj, ParentProperty) when is_record(Obj, async_message) and is_atom(ParentProperty) ->
get(Obj#async_message.parent, ParentProperty).
set(Obj, body, Value) when is_record(Obj, abstract_message) ->
NewObj = Obj#abstract_message{body = Value},
{ok, NewObj, {body, Value}};
set(Obj, clientId, Value) when is_record(Obj, abstract_message) ->
NewObj = Obj#abstract_message{clientId = Value},
{ok, NewObj, {clientId, Value}};
set(Obj, destination, Value) when is_record(Obj, abstract_message) ->
NewObj = Obj#abstract_message{destination = Value},
{ok, NewObj, {destination, Value}};
set(Obj, headers, Value) when is_record(Obj, abstract_message) ->
NewObj = Obj#abstract_message{headers = Value},
{ok, NewObj, {headers, Value}};
set(Obj, messageId, Value) when is_record(Obj, abstract_message) ->
NewObj = Obj#abstract_message{messageId = Value},
{ok, NewObj, {messageId, Value}};
set(Obj, timeToLive, Value) when is_record(Obj, abstract_message) ->
NewObj = Obj#abstract_message{timeToLive = Value},
{ok, NewObj, {timeToLive, Value}};
set(Obj, timestamp, Value) when is_record(Obj, abstract_message) ->
NewObj = Obj#abstract_message{timestamp = Value},
{ok, NewObj, {timestamp, Value}};
set(Obj, correlationId, Value) when is_record(Obj, async_message) ->
NewObj = Obj#async_message{correlationId = Value},
{ok, NewObj, {correlationId, Value}};
set(Obj, correlationIdBytes, Value) when is_record(Obj, async_message) ->
NewObj = Obj#async_message{correlationIdBytes = Value},
{ok, NewObj, {correlationIdBytes, Value}};
set(Obj, parent, Value) when is_record(Obj, async_message) and is_record(Value, abstract_message) ->
NewObj = Obj#async_message{parent = Value},
{ok, NewObj, {parent, Value}};
set(Obj, ParentProperty, Value) when is_record(Obj, async_message) and is_atom(ParentProperty) ->
{ok, NewParentObject, _} = set(Obj#async_message.parent, ParentProperty, Value),
set(Obj, parent, NewParentObject).
type(Obj) when is_record(Obj, abstract_message) -> abstract_message;
type(Obj) when is_record(Obj, async_message) -> async_message;
type(_) -> undefined.

View File

@@ -0,0 +1,122 @@
#!/usr/bin/env escript
%%!
%-*-Mode:erlang;coding:utf-8;tab-width:4;c-basic-offset:4;indent-tabs-mode:()-*-
% ex: set ft=erlang fenc=utf-8 sts=4 ts=4 sw=4 et:
%%%
%%%------------------------------------------------------------------------
%%% BSD LICENSE
%%%
%%% Copyright (c) 2013, Michael Truog <mjtruog at gmail dot com>
%%% All rights reserved.
%%%
%%% Redistribution and use in source and binary forms, with or without
%%% modification, are permitted provided that the following conditions are met:
%%%
%%% * Redistributions of source code must retain the above copyright
%%% notice, this list of conditions and the following disclaimer.
%%% * Redistributions in binary form must reproduce the above copyright
%%% notice, this list of conditions and the following disclaimer in
%%% the documentation and/or other materials provided with the
%%% distribution.
%%% * All advertising materials mentioning features or use of this
%%% software must display the following acknowledgment:
%%% This product includes software developed by Michael Truog
%%% * The name of the author may not be used to endorse or promote
%%% products derived from this software without specific prior
%%% written permission
%%%
%%% THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
%%% CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
%%% INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
%%% OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
%%% DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
%%% CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
%%% SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
%%% BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
%%% SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
%%% INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
%%% WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
%%% NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
%%% OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
%%% DAMAGE.
%%%------------------------------------------------------------------------
-author('mjtruog [at] gmail (dot) com').
-mode(compile).
main(_) ->
{ok,
[{sys, _} = RelToolConfig,
{target_dir, TargetDir},
{overlay, OverlayConfig}]} = file:consult("reltool.config"),
{ok, Spec} = reltool:get_target_spec([RelToolConfig]),
case file:make_dir(TargetDir) of
ok ->
ok;
{error, eexist} ->
io:format("release already exists? (~p)~n", [TargetDir]),
exit_code(1)
end,
ok = reltool:eval_target_spec(Spec, code:root_dir(), TargetDir),
ok = process_overlay(RelToolConfig, TargetDir, OverlayConfig),
exit_code(0).
shell(Command, Arguments) ->
CommandSuffix = " && echo 0 || echo 1",
case lists:reverse(os:cmd(lists:flatten(
io_lib:format(Command ++ CommandSuffix, Arguments)))) of
[_, $0 | _] ->
ok;
[_, $1 | _] ->
io:format("\"~s\" failed!~n", [io_lib:format(Command, Arguments)]),
error
end.
boot_rel_vsn({sys, Config} = _RelToolConfig) ->
{rel, _Name, Ver, _} = proplists:lookup(rel, Config),
Ver.
%% minimal parsing for handling mustache syntax
mustache(Body, Context) ->
mustache(Body, "", Context).
mustache([], Result, _Context) ->
lists:reverse(Result);
mustache([${, ${ | KeyStr], Result, Context) ->
mustache_key(KeyStr, "", Result, Context);
mustache([C | Rest], Result, Context) ->
mustache(Rest, [C | Result], Context).
mustache_key([$}, $} | Rest], KeyStr, Result, Context) ->
Key = erlang:list_to_existing_atom(lists:reverse(KeyStr)),
{ok, Value} = dict:find(Key, Context),
mustache(Rest, lists:reverse(Value) ++ Result, Context);
mustache_key([C | Rest], KeyStr, Result, Context) ->
mustache_key(Rest, [C | KeyStr], Result, Context).
%% support minimal overlay based on rebar overlays
process_overlay(RelToolConfig, TargetDir, OverlayConfig) ->
BootRelVsn = boot_rel_vsn(RelToolConfig),
OverlayVars =
dict:from_list([{erts_vsn, "erts-" ++ erlang:system_info(version)},
{rel_vsn, BootRelVsn},
{target_dir, TargetDir},
{hostname, net_adm:localhost()}]),
{ok, BaseDir} = file:get_cwd(),
execute_overlay(OverlayConfig, OverlayVars, BaseDir, TargetDir).
execute_overlay([], _Vars, _BaseDir, _TargetDir) ->
ok;
execute_overlay([{mkdir, Out} | Rest], Vars, BaseDir, TargetDir) ->
OutDir = mustache(filename:join(TargetDir, Out), Vars),
ok = shell("mkdir -p ~s", [OutDir]),
execute_overlay(Rest, Vars, BaseDir, TargetDir);
execute_overlay([{copy, In, Out} | Rest], Vars, BaseDir, TargetDir) ->
InFile = mustache(filename:join(BaseDir, In), Vars),
OutFile = mustache(filename:join(TargetDir, Out), Vars),
true = filelib:is_file(InFile),
ok = shell("cp -R ~s ~s", [InFile, OutFile]),
execute_overlay(Rest, Vars, BaseDir, TargetDir).
exit_code(ExitCode) ->
erlang:halt(ExitCode, [{flush, true}]).
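As an aside, the {{key}} substitution that mustache/2 and mustache_key/4 implement above can be sketched in a few lines of Python; this is only an illustration (names and values are made up, assuming keys are plain word characters), not part of the diff:

import re

def mustache(body, context):
    # Replace every {{key}} with its value from the context dict;
    # an unknown key raises, mirroring {ok, Value} = dict:find(Key, Context).
    return re.sub(r"\{\{(\w+)\}\}", lambda m: context[m.group(1)], body)

print(mustache("{{target_dir}}/releases/{{rel_vsn}}",
               {"target_dir": "rel", "rel_vsn": "1.0"}))   # -> rel/releases/1.0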

View File

@@ -0,0 +1,79 @@
\ KataDiversion in Forth
\ -- utils
\ empty the stack
: EMPTY
DEPTH 0 <> IF BEGIN
DROP DEPTH 0 =
UNTIL
THEN ;
\ power
: ** ( n1 n2 -- n1_pow_n2 ) 1 SWAP ?DUP IF 0 DO OVER * LOOP THEN NIP ;
\ compute the highest power of 2 below N.
\ e.g. : 31 -> 16, 4 -> 4
: MAXPOW2 ( n -- log2_n ) DUP 1 < IF 1 ABORT" Maxpow2 need a positive value."
ELSE DUP 1 = IF 1
ELSE
1 >R
BEGIN ( n |R: i=1)
DUP DUP I - 2 *
( n n 2*[n-i])
R> 2 * >R ( … |R: i*2)
> ( n n>2*[n-i] )
UNTIL
R> 2 /
THEN
THEN NIP ;
\ -- kata
\ test if the given N has two adjacent 1 bits
\ e.g. : 11 -> 1011 -> -1
\ 9 -> 1001 -> 0
: ?NOT-TWO-ADJACENT-1-BITS ( n -- bool )
\ the word uses the following algorithm :
\ (stack|return stack)
\ ( A N | X ) A: 0, X: N LOG2
\ loop: if N-X > 0 then A++ else A=0 ; X /= 2
\ return 0 if A=2
\ if X=1 end loop and return -1
0 SWAP DUP DUP 0 <> IF
MAXPOW2 >R
BEGIN
DUP I - 0 >= IF
SWAP DUP 1 = IF 1+ SWAP
ELSE DROP 1 SWAP I -
THEN
ELSE NIP 0 SWAP
THEN
OVER
2 =
I 1 = OR
R> 2 / >R
UNTIL
R> 2DROP
2 <>
ELSE 2DROP INVERT
THEN ;
\ return the maximum number which can be made with N (given number) bits
: MAX-NB ( n -- m ) DUP 1 < IF DROP 0 ( 0 )
ELSE
DUP IF DUP 2 SWAP ** NIP 1 - ( 2**n - 1 )
THEN
THEN ;
\ return the number of numbers which can be made with N (given number) bits
\ or less, and which do not have two adjacent 1 bits.
\ see http://www.codekata.com/2007/01/code_kata_fifte.html
: HOW-MANY-NB-NOT-TWO-ADJACENT-1-BITS ( n -- m )
DUP 1 < IF DUP 0
ELSE
0 SWAP
MAX-NB 1 + 0 DO I ?NOT-TWO-ADJACENT-1-BITS - LOOP
THEN ;
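For reference, the kata these words implement (Code Kata Fifteen: counting the numbers of up to N bits that have no two adjacent 1 bits) can be sketched in Python; this is a plain reference computation with illustrative names, not a transcription of the Forth above:

def has_adjacent_ones(x):
    # Two adjacent 1 bits exist exactly when x shares a set bit with x >> 1.
    return (x & (x >> 1)) != 0

def count_without_adjacent_ones(n_bits):
    max_nb = (1 << n_bits) - 1   # same role as the MAX-NB word: 2**n - 1
    return sum(1 for x in range(max_nb + 1) if not has_adjacent_ones(x))

print(count_without_adjacent_ones(4))   # 8 (0, 1, 2, 4, 5, 8, 9, 10)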

42
samples/Forth/block.fth Normal file
View File

@@ -0,0 +1,42 @@
( Block words. )
variable blk
variable current-block
: block ( n -- addr )
current-block ! 0 ;
: buffer ( n -- addr )
current-block ! 0 ;
\ evaluate (extended semantics)
\ flush ( -- )
: load ( ... n -- ... )
dup current-block !
blk !
save-input
0 >in !
blk @ block ''source ! 1024 ''#source !
( interpret )
restore-input ;
\ save-buffers ( -- )
\ update ( -- )
( Block extension words. )
\ empty-buffers ( -- )
variable scr
: list ( n -- )
dup scr !
dup current-block !
block 1024 bounds do i @ emit loop ;
\ refill (extended semantics)
: thru ( x y -- ) 1+ swap do i load loop ;
\ \ (extended semantics)

136
samples/Forth/core-ext.fth Normal file
View File

@@ -0,0 +1,136 @@
\ -*- forth -*- Copyright 2004, 2013 Lars Brinkhoff
\ Kernel: #tib
\ TODO: .r
: .( ( "<string><paren>" -- )
[char] ) parse type ; immediate
: 0<> ( n -- flag ) 0 <> ;
: 0> ( n -- flag ) 0 > ;
\ Kernel: 2>r
: 2r> ( -- x1 x2 ) ( R: x1 x2 -- ) r> r> r> rot >r swap ;
: 2r@ ( -- x1 x2 ) ( R: x1 x2 -- x1 x2 ) 2r> 2dup 2>r ;
: :noname align here 0 c, 15 allot lastxt dup @ , !
[ ' enter >code @ ] literal , 0 , ] lastxt @ ;
\ Kernel: <>
\ : ?do ( n1 n2 -- ) ( R: -- loop-sys ) ( C: -- do-sys )
\ here postpone 2>r unresolved branch here ;
: again ( -- ) ( C: dest -- )
postpone branch , ; immediate
: string+ ( caddr -- addr )
count + aligned ;
: (c") ( -- caddr ) ( R: ret1 -- ret2 )
r> dup string+ >r ;
: c" ( "<string><quote>" -- caddr )
postpone (c") [char] " parse dup c, string, ; immediate
: case ( -- ) ( C: -- case-sys )
0 ;
: compile, ( xt -- )
, ;
\ TODO: convert
: endcase ( x -- ) ( C: case-sys -- )
0 do postpone then loop
postpone drop ;
: endof ( -- ) ( C: case-sys1 of-sys -- case-sys2 )
postpone else swap 1+ ;
\ TODO: erase
\ TODO: expect
: false ( -- 0 )
0 ;
: hex ( -- )
16 base ! ;
\ TODO: marker
\ Kernel: nip
: of ( x x -- | x y -- x ) ( C: -- of-sys )
postpone over postpone = postpone if postpone drop ;
\ Kernel: pad
\ Kernel: parse
: pick ( xn ... x0 n -- xn ... x0 xn )
2 + cells 'SP @ + @ ;
: query ( -- )
tib ''source ! #tib ''#source ! 0 'source-id !
refill drop ;
\ Kernel: refill
\ Kernel: restore-input
\ TODO: roll ( xn xn-1 ... x0 n -- xn-1 ... x0 xn ) ;
\ Kernel: save-input
\ Kernel: source-id
\ TODO: span
\ Kernel: tib
: to ( x "word" -- )
' >body , ;
: true ( -- -1 )
-1 ;
: tuck ( x y -- y x y )
swap over ;
\ TODO: u.r
: u> ( x y -- flag )
2dup u< if 2drop false else <> then ;
\ TODO: unused
: value ( x "word" -- )
create ,
does> ( -- x )
@ ;
: within over - >r - r> u< ;
\ TODO: [compile]
\ Kernel: \
\ ----------------------------------------------------------------------
( Forth2012 core extension words. )
\ TODO: action-of
\ TODO: buffer:
: defer create ['] abort , does> @ execute ;
: defer! ( xt2 xt1 -- ) >body ! ;
: defer@ ( xt1 -- xt2 ) >body @ ;
\ TODO: holds
: is ( xt "word" -- ) ' defer! ;
\ TODO: parse-name
\ TODO: s\"

252
samples/Forth/core.fth Normal file
View File

@@ -0,0 +1,252 @@
: immediate lastxt @ dup c@ negate swap c! ;
: \ source nip >in ! ; immediate \ Copyright 2004, 2012 Lars Brinkhoff
: char \ ( "word" -- char )
bl-word here 1+ c@ ;
: ahead here 0 , ;
: resolve here swap ! ;
: ' bl-word here find 0branch [ ahead ] exit [ resolve ] 0 ;
: postpone-nonimmediate [ ' literal , ' compile, ] literal , ;
: create dovariable_code header, reveal ;
create postponers
' postpone-nonimmediate ,
' abort ,
' , ,
: word \ ( char "<chars>string<char>" -- caddr )
drop bl-word here ;
: postpone \ ( C: "word" -- )
bl word find 1+ cells postponers + @ execute ; immediate
: unresolved \ ( C: "word" -- orig )
postpone postpone postpone ahead ; immediate
: chars \ ( n1 -- n2 )
;
: else \ ( -- ) ( C: orig1 -- orig2 )
unresolved branch swap resolve ; immediate
: if \ ( flag -- ) ( C: -- orig )
unresolved 0branch ; immediate
: then \ ( -- ) ( C: orig -- )
resolve ; immediate
: [char] \ ( "word" -- )
char postpone literal ; immediate
: (does>) lastxt @ dodoes_code over >code ! r> swap >does ! ;
: does> postpone (does>) ; immediate
: begin \ ( -- ) ( C: -- dest )
here ; immediate
: while \ ( x -- ) ( C: dest -- orig dest )
unresolved 0branch swap ; immediate
: repeat \ ( -- ) ( C: orig dest -- )
postpone branch , resolve ; immediate
: until \ ( x -- ) ( C: dest -- )
postpone 0branch , ; immediate
: recurse lastxt @ compile, ; immediate
: pad \ ( -- addr )
here 1024 + ;
: parse \ ( char "string<char>" -- addr n )
pad >r begin
source? if <source 2dup <> else 0 0 then
while
r@ c! r> 1+ >r
repeat 2drop pad r> over - ;
: ( \ ( "string<paren>" -- )
[ char ) ] literal parse 2drop ; immediate
\ TODO: If necessary, refill and keep parsing.
: string, ( addr n -- )
here over allot align swap cmove ;
: (s") ( -- addr n ) ( R: ret1 -- ret2 )
r> dup @ swap cell+ 2dup + aligned >r swap ;
create squote 128 allot
: s" ( "string<quote>" -- addr n )
state @ if
postpone (s") [char] " parse dup , string,
else
[char] " parse >r squote r@ cmove squote r>
then ; immediate
: (abort") ( ... addr n -- ) ( R: ... -- )
cr type cr abort ;
: abort" ( ... x "string<quote>" -- ) ( R: ... -- )
postpone if postpone s" postpone (abort") postpone then ; immediate
\ ----------------------------------------------------------------------
( Core words. )
\ TODO: #
\ TODO: #>
\ TODO: #s
: and ( x y -- x&y ) nand invert ;
: * 1 2>r 0 swap begin r@ while
r> r> swap 2dup dup + 2>r and if swap over + swap then dup +
repeat r> r> 2drop drop ;
\ TODO: */mod
: +loop ( -- ) ( C: nest-sys -- )
postpone (+loop) postpone 0branch , postpone unloop ; immediate
: space bl emit ;
: ?.- dup 0 < if [char] - emit negate then ;
: digit [char] 0 + emit ;
: (.) base @ /mod ?dup if recurse then digit ;
: ." ( "string<quote>" -- ) postpone s" postpone type ; immediate
: . ( x -- ) ?.- (.) space ;
: postpone-number ( caddr -- )
0 0 rot count >number dup 0= if
2drop nip
postpone (literal) postpone (literal) postpone ,
postpone literal postpone ,
else
." Undefined: " type cr abort
then ;
' postpone-number postponers cell+ !
: / ( x y -- x/y ) /mod nip ;
: 0< ( n -- flag ) 0 < ;
: 1- ( n -- n-1 ) -1 + ;
: 2! ( x1 x2 addr -- ) swap over ! cell+ ! ;
: 2* ( n -- 2n ) dup + ;
\ Kernel: 2/
: 2@ ( addr -- x1 x2 ) dup cell+ @ swap @ ;
\ Kernel: 2drop
\ Kernel: 2dup
\ TODO: 2over ( x1 x2 x3 x4 -- x1 x2 x3 x4 x1 x2 )
\ 3 pick 3 pick ;
\ TODO: 2swap
\ TODO: <#
: abs ( n -- |n| )
dup 0< if negate then ;
\ TODO: accept
: c, ( n -- )
here c! 1 chars allot ;
: char+ ( n1 -- n2 )
1+ ;
: constant create , does> @ ;
: decimal ( -- )
10 base ! ;
: depth ( -- n )
data_stack 100 cells + 'SP @ - /cell / 2 - ;
: do ( n1 n2 -- ) ( R: -- loop-sys ) ( C: -- do-sys )
postpone 2>r here ; immediate
\ TODO: environment?
\ TODO: evaluate
\ TODO: fill
\ TODO: fm/mod )
\ TODO: hold
: j ( -- x1 ) ( R: x1 x2 x3 -- x1 x2 x3 )
'RP @ 3 cells + @ ;
\ TODO: leave
: loop ( -- ) ( C: nest-sys -- )
postpone 1 postpone (+loop)
postpone 0branch ,
postpone unloop ; immediate
: lshift begin ?dup while 1- swap dup + swap repeat ;
: rshift 1 begin over while dup + swap 1- swap repeat nip
2>r 0 1 begin r@ while
r> r> 2dup swap dup + 2>r and if swap over + swap then dup +
repeat r> r> 2drop drop ;
: max ( x y -- max[x,y] )
2dup > if drop else nip then ;
\ Kernel: min
\ TODO: mod
\ TODO: move
: (quit) ( R: ... -- )
return_stack 100 cells + 'RP !
0 'source-id ! tib ''source ! #tib ''#source !
postpone [
begin
refill
while
interpret state @ 0= if ." ok" cr then
repeat
bye ;
' (quit) ' quit >body cell+ !
\ TODO: s>d
\ TODO: sign
\ TODO: sm/rem
: spaces ( n -- )
0 do space loop ;
\ TODO: u.
: signbit ( -- n ) -1 1 rshift invert ;
: xor ( x y -- x^y ) 2dup nand >r r@ nand swap r> nand nand ;
: u< ( x y -- flag ) signbit xor swap signbit xor > ;
\ TODO: um/mod
: variable ( "word" -- )
create /cell allot ;
: ['] \ ( C: "word" -- )
' postpone literal ; immediate

View File

@@ -0,0 +1,5 @@
: HELLO ( -- )
." Hello Forth (forth)!" ;
HELLO

View File

@@ -0,0 +1,5 @@
: HELLO ( -- )
." Hello Forth (fth)!" ;
HELLO

133
samples/Forth/tools.fth Normal file
View File

@@ -0,0 +1,133 @@
\ -*- forth -*- Copyright 2004, 2013 Lars Brinkhoff
( Tools words. )
: .s ( -- )
[char] < emit depth (.) ." > "
'SP @ >r r@ depth 1- cells +
begin
dup r@ <>
while
dup @ .
/cell -
repeat r> 2drop ;
: ? @ . ;
: c? c@ . ;
: dump bounds do i ? /cell +loop cr ;
: cdump bounds do i c? loop cr ;
: again postpone branch , ; immediate
: see-find ( caddr -- end xt )
>r here lastxt @
begin
dup 0= abort" Undefined word"
dup r@ word= if r> drop exit then
nip dup >nextxt
again ;
: cabs ( char -- |char| ) dup 127 > if 256 swap - then ;
: xt. ( xt -- )
( >name ) count cabs type ;
: xt? ( xt -- flag )
>r lastxt @ begin
?dup
while
dup r@ = if r> 2drop -1 exit then
>nextxt
repeat r> drop 0 ;
: disassemble ( x -- )
dup xt? if
( >name ) count
dup 127 > if ." postpone " then
cabs type
else
.
then ;
: .addr dup . ;
: see-line ( addr -- )
cr ." ( " .addr ." ) " @ disassemble ;
: see-word ( end xt -- )
>r ." : " r@ xt.
r@ >body do i see-line /cell +loop
." ;" r> c@ 127 > if ." immediate" then ;
: see bl word see-find see-word cr ;
: #body bl word see-find >body - ;
: type-word ( end xt -- flag )
xt. space drop 0 ;
: traverse-dictionary ( in.. xt -- out.. )
\ xt execution: ( in.. end xt2 -- in.. 0 | in.. end xt2 -- out.. true )
>r here lastxt @ begin
?dup
while
r> 2dup >r >r execute
if r> r> 2drop exit then
r> dup >nextxt
repeat r> 2drop ;
: words ( -- )
['] type-word traverse-dictionary cr ;
\ ----------------------------------------------------------------------
( Tools extension words. )
\ ;code
\ assembler
\ in kernel: bye
\ code
\ cs-pick
\ cs-roll
\ editor
: forget ' dup >nextxt lastxt ! 'here ! reveal ;
\ Kernel: state
\ [else]
\ [if]
\ [then]
\ ----------------------------------------------------------------------
( Forth2012 tools extension words. )
\ TODO: n>r
\ TODO: nr>
\ TODO: synonym
: [undefined] bl-word find nip 0= ; immediate
: [defined] postpone [undefined] invert ; immediate
\ ----------------------------------------------------------------------
: @+ ( addr -- addr+/cell x ) dup cell+ swap @ ;
: !+ ( x addr -- addr+/cell ) tuck ! cell+ ;
: -rot swap >r swap r> ;

161
samples/GLSL/SyLens.glsl Normal file
View File

@@ -0,0 +1,161 @@
#version 120
/*
Original Lens Distortion Algorithm from SSontech (Syntheyes)
http://www.ssontech.com/content/lensalg.htm
r2 is radius squared.
r2 = image_aspect*image_aspect*u*u + v*v
f = 1 + r2*(k + kcube*sqrt(r2))
u' = f*u
v' = f*v
*/
// Controls
uniform float kCoeff, kCube, uShift, vShift;
uniform float chroma_red, chroma_green, chroma_blue;
uniform bool apply_disto;
// Uniform inputs
uniform sampler2D input1;
uniform float adsk_input1_w, adsk_input1_h, adsk_input1_aspect, adsk_input1_frameratio;
uniform float adsk_result_w, adsk_result_h;
float distortion_f(float r) {
float f = 1 + (r*r)*(kCoeff + kCube * r);
return f;
}
float inverse_f(float r)
{
// Build a lookup table on the radius, as a fixed-size table.
// We will use a vec3 since we will store the multiplied number in the Z coordinate.
// So to recap: x will be the radius, y will be the f(x) distortion, and Z will be x * y;
vec3[48] lut;
// Since our LUT is shader-global, check if it's been computed already
// Flame has no overflow bbox so we can safely max out at the image edge, plus some cushion
float max_r = sqrt((adsk_input1_frameratio * adsk_input1_frameratio) + 1) + 0.1;
float incr = max_r / 48;
float lut_r = 0;
float f;
for(int i=0; i < 48; i++) {
f = distortion_f(lut_r);
lut[i] = vec3(lut_r, f, lut_r * f);
lut_r += incr;
}
float t;
// Now find the neighbouring elements
// only iterate to 46 since we will need
// 47 as i+1
for(int i=0; i < 47; i++) {
if(lut[i].z < r && lut[i+1].z > r) {
// BAM! our value is between these two segments
// get the T interpolant and mix
t = (r - lut[i].z) / (lut[i+1].z - lut[i].z);
return mix(lut[i].y, lut[i+1].y, t );
}
}
}
float aberrate(float f, float chroma)
{
return f + (f * chroma);
}
vec3 chromaticize_and_invert(float f)
{
vec3 rgb_f = vec3(aberrate(f, chroma_red), aberrate(f, chroma_green), aberrate(f, chroma_blue));
// We need to DIVIDE by F when we redistort, and x / y == x * (1 / y)
if(apply_disto) {
rgb_f = 1 / rgb_f;
}
return rgb_f;
}
void main(void)
{
vec2 px, uv;
float f = 1;
float r = 1;
px = gl_FragCoord.xy;
// Make sure we are still centered
px.x -= (adsk_result_w - adsk_input1_w) / 2;
px.y -= (adsk_result_h - adsk_input1_h) / 2;
// Push the destination coordinates into the [0..1] range
uv.x = px.x / adsk_input1_w;
uv.y = px.y / adsk_input1_h;
// And to Syntheyes UV which are [1..-1] on both X and Y
uv.x = (uv.x *2 ) - 1;
uv.y = (uv.y *2 ) - 1;
// Add UV shifts
uv.x += uShift;
uv.y += vShift;
// Make the X value the aspect value, so that the X coordinates go to [-aspect..aspect]
uv.x = uv.x * adsk_input1_frameratio;
// Compute the radius
r = sqrt(uv.x*uv.x + uv.y*uv.y);
// If we are redistorting, account for the oversize plate in the input, assume that
// the input aspect is the same
if(apply_disto) {
r = r / (float(adsk_input1_w) / float(adsk_result_w));
}
// Apply or remove disto, per channel honoring chromatic aberration
if(apply_disto) {
f = inverse_f(r);
} else {
f = distortion_f(r);
}
vec2[3] rgb_uvs = vec2[](uv, uv, uv);
// Compute distortions per component
vec3 rgb_f = chromaticize_and_invert(f);
// Apply the disto coefficients, per component
rgb_uvs[0] = rgb_uvs[0] * rgb_f.rr;
rgb_uvs[1] = rgb_uvs[1] * rgb_f.gg;
rgb_uvs[2] = rgb_uvs[2] * rgb_f.bb;
// Convert all the UVs back to the texture space, per color component
for(int i=0; i < 3; i++) {
uv = rgb_uvs[i];
// Back from [-aspect..aspect] to [-1..1]
uv.x = uv.x / adsk_input1_frameratio;
// Remove UV shifts
uv.x -= uShift;
uv.y -= vShift;
// Back to OGL UV
uv.x = (uv.x + 1) / 2;
uv.y = (uv.y + 1) / 2;
rgb_uvs[i] = uv;
}
// Sample the input plate, per component
vec4 sampled;
sampled.r = texture2D(input1, rgb_uvs[0]).r;
sampled.g = texture2D(input1, rgb_uvs[1]).g;
sampled.b = texture2D(input1, rgb_uvs[2]).b;
// and assign to the output
gl_FragColor.rgba = vec4(sampled.rgb, 1.0 );
}
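The distortion model quoted in the header comment (f = 1 + r2*(k + kcube*sqrt(r2)), u' = f*u, v' = f*v) and the lookup-table inversion in inverse_f can be illustrated with a short Python sketch; the table size mirrors the shader, but the coefficient values and names below are made up for the example:

import math

K, KCUBE = 0.05, 0.01   # illustrative coefficients, not taken from the shader

def distortion_f(r):
    # f = 1 + r2*(k + kcube*sqrt(r2)), written in terms of r = sqrt(r2)
    return 1.0 + (r * r) * (K + KCUBE * r)

def inverse_f(r_distorted, max_r=2.0, n=48):
    # Sample (r, f(r), r*f(r)), then interpolate f between the two samples whose
    # distorted radius r*f(r) brackets the query, as the shader's LUT loop does.
    lut = [(r, distortion_f(r), r * distortion_f(r))
           for r in (max_r * i / n for i in range(n))]
    for (r0, f0, z0), (r1, f1, z1) in zip(lut, lut[1:]):
        if z0 <= r_distorted <= z1:
            t = (r_distorted - z0) / (z1 - z0)
            return f0 + t * (f1 - f0)
    return distortion_f(max_r)   # past the table edge

r = 1.3
print(distortion_f(r), inverse_f(r * distortion_f(r)))   # second value ≈ first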

View File

@@ -0,0 +1,630 @@
//// High quality (Some browsers may freeze or crash)
//#define HIGHQUALITY
//// Medium quality (Should be fine on all systems, works on Intel HD2000 on Win7 but quite slow)
//#define MEDIUMQUALITY
//// Defaults
//#define REFLECTIONS
#define SHADOWS
//#define GRASS
//#define SMALL_WAVES
#define RAGGED_LEAVES
//#define DETAILED_NOISE
//#define LIGHT_AA // 2 sample SSAA
//#define HEAVY_AA // 2x2 RG SSAA
//#define TONEMAP
//// Configurations
#ifdef MEDIUMQUALITY
#define SHADOWS
#define SMALL_WAVES
#define RAGGED_LEAVES
#define TONEMAP
#endif
#ifdef HIGHQUALITY
#define REFLECTIONS
#define SHADOWS
//#define GRASS
#define SMALL_WAVES
#define RAGGED_LEAVES
#define DETAILED_NOISE
#define LIGHT_AA
#define TONEMAP
#endif
// Constants
const float eps = 1e-5;
const float PI = 3.14159265359;
const vec3 sunDir = vec3(0.79057,-0.47434, 0.0);
const vec3 skyCol = vec3(0.3, 0.5, 0.8);
const vec3 sandCol = vec3(0.9, 0.8, 0.5);
const vec3 treeCol = vec3(0.8, 0.65, 0.3);
const vec3 grassCol = vec3(0.4, 0.5, 0.18);
const vec3 leavesCol = vec3(0.3, 0.6, 0.2);
const vec3 leavesPos = vec3(-5.1,13.4, 0.0);
#ifdef TONEMAP
const vec3 sunCol = vec3(1.8, 1.7, 1.6);
#else
const vec3 sunCol = vec3(0.9, 0.85, 0.8);
#endif
const float exposure = 1.1; // Only used when tonemapping
// Description : Array and textureless GLSL 2D/3D/4D simplex
// noise functions.
// Author : Ian McEwan, Ashima Arts.
// License : Copyright (C) 2011 Ashima Arts. All rights reserved.
// Distributed under the MIT License. See LICENSE file.
// https://github.com/ashima/webgl-noise
vec3 mod289(vec3 x) {
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
vec4 mod289(vec4 x) {
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
vec4 permute(vec4 x) {
return mod289(((x*34.0)+1.0)*x);
}
vec4 taylorInvSqrt(vec4 r) {
return 1.79284291400159 - 0.85373472095314 * r;
}
float snoise(vec3 v) {
const vec2 C = vec2(1.0/6.0, 1.0/3.0) ;
const vec4 D = vec4(0.0, 0.5, 1.0, 2.0);
// First corner
vec3 i = floor(v + dot(v, C.yyy) );
vec3 x0 = v - i + dot(i, C.xxx) ;
// Other corners
vec3 g = step(x0.yzx, x0.xyz);
vec3 l = 1.0 - g;
vec3 i1 = min( g.xyz, l.zxy );
vec3 i2 = max( g.xyz, l.zxy );
// x0 = x0 - 0.0 + 0.0 * C.xxx;
// x1 = x0 - i1 + 1.0 * C.xxx;
// x2 = x0 - i2 + 2.0 * C.xxx;
// x3 = x0 - 1.0 + 3.0 * C.xxx;
vec3 x1 = x0 - i1 + C.xxx;
vec3 x2 = x0 - i2 + C.yyy; // 2.0*C.x = 1/3 = C.y
vec3 x3 = x0 - D.yyy; // -1.0+3.0*C.x = -0.5 = -D.y
// Permutations
i = mod289(i);
vec4 p = permute( permute( permute(
i.z + vec4(0.0, i1.z, i2.z, 1.0 ))
+ i.y + vec4(0.0, i1.y, i2.y, 1.0 ))
+ i.x + vec4(0.0, i1.x, i2.x, 1.0 ));
// Gradients: 7x7 points over a square, mapped onto an octahedron.
// The ring size 17*17 = 289 is close to a multiple of 49 (49*6 = 294)
float n_ = 0.142857142857; // 1.0/7.0
vec3 ns = n_ * D.wyz - D.xzx;
vec4 j = p - 49.0 * floor(p * ns.z * ns.z); // mod(p,7*7)
vec4 x_ = floor(j * ns.z);
vec4 y_ = floor(j - 7.0 * x_ ); // mod(j,N)
vec4 x = x_ *ns.x + ns.yyyy;
vec4 y = y_ *ns.x + ns.yyyy;
vec4 h = 1.0 - abs(x) - abs(y);
vec4 b0 = vec4( x.xy, y.xy );
vec4 b1 = vec4( x.zw, y.zw );
//vec4 s0 = vec4(lessThan(b0,0.0))*2.0 - 1.0;
//vec4 s1 = vec4(lessThan(b1,0.0))*2.0 - 1.0;
vec4 s0 = floor(b0)*2.0 + 1.0;
vec4 s1 = floor(b1)*2.0 + 1.0;
vec4 sh = -step(h, vec4(0.0));
vec4 a0 = b0.xzyw + s0.xzyw*sh.xxyy ;
vec4 a1 = b1.xzyw + s1.xzyw*sh.zzww ;
vec3 p0 = vec3(a0.xy,h.x);
vec3 p1 = vec3(a0.zw,h.y);
vec3 p2 = vec3(a1.xy,h.z);
vec3 p3 = vec3(a1.zw,h.w);
//Normalise gradients
vec4 norm = taylorInvSqrt(vec4(dot(p0,p0), dot(p1,p1), dot(p2, p2), dot(p3,p3)));
p0 *= norm.x;
p1 *= norm.y;
p2 *= norm.z;
p3 *= norm.w;
// Mix final noise value
vec4 m = max(0.6 - vec4(dot(x0,x0), dot(x1,x1), dot(x2,x2), dot(x3,x3)), 0.0);
m = m * m;
return 42.0 * dot( m*m, vec4( dot(p0,x0), dot(p1,x1),
dot(p2,x2), dot(p3,x3) ) );
}
// Main
float fbm(vec3 p)
{
float final = snoise(p);
p *= 1.94; final += snoise(p) * 0.5;
#ifdef DETAILED_NOISE
p *= 3.75; final += snoise(p) * 0.25;
return final / 1.75;
#else
return final / 1.5;
#endif
}
float waterHeight(vec3 p)
{
float d = length(p.xz);
float h = sin(d * 1.5 + iGlobalTime * 3.0) * 12.0 / d; // Island waves
#ifdef SMALL_WAVES
h += fbm(p*0.5); // Other waves
#endif
return h;
}
vec3 bump(vec3 pos, vec3 rayDir)
{
float s = 2.0;
// Fade out waves to reduce aliasing
float dist = dot(pos, rayDir);
s *= dist < 2.0 ? 1.0 : 1.4142 / sqrt(dist);
// Calculate normal from heightmap
vec2 e = vec2(1e-2, 0.0);
vec3 p = vec3(pos.x, iGlobalTime*0.5, pos.z)*0.7;
float m = waterHeight(p)*s;
return normalize(vec3(
waterHeight(p+e.xyy)*s-m,
1.0,
waterHeight(p+e.yxy)*s-m
));
}
// Ray intersections
vec4 intersectSphere(vec3 rpos, vec3 rdir, vec3 pos, float rad)
{
vec3 op = pos - rpos;
float b = dot(op, rdir);
float det = b*b - dot(op, op) + rad*rad;
if (det > 0.0)
{
det = sqrt(det);
float t = b - det;
if (t > eps)
return vec4(-normalize(rpos+rdir*t-pos), t);
}
return vec4(0.0);
}
vec4 intersectCylinder(vec3 rpos, vec3 rdir, vec3 pos, float rad)
{
vec3 op = pos - rpos;
vec2 rdir2 = normalize(rdir.yz);
float b = dot(op.yz, rdir2);
float det = b*b - dot(op.yz, op.yz) + rad*rad;
if (det > 0.0)
{
det = sqrt(det);
float t = b - det;
if (t > eps)
return vec4(-normalize(rpos.yz+rdir2*t-pos.yz), 0.0, t);
t = b + det;
if (t > eps)
return vec4(-normalize(rpos.yz+rdir2*t-pos.yz), 0.0, t);
}
return vec4(0.0);
}
vec4 intersectPlane(vec3 rayPos, vec3 rayDir, vec3 n, float d)
{
float t = -(dot(rayPos, n) + d) / dot(rayDir, n);
return vec4(n * sign(dot(rayDir, n)), t);
}
// Helper functions
vec3 rotate(vec3 p, float theta)
{
float c = cos(theta), s = sin(theta);
return vec3(p.x * c + p.z * s, p.y,
p.z * c - p.x * s);
}
float impulse(float k, float x) // by iq
{
float h = k*x;
return h * exp(1.0 - h);
}
// Raymarched parts of scene
float grass(vec3 pos)
{
float h = length(pos - vec3(0.0, -7.0, 0.0)) - 8.0;
if (h > 2.0) return h; // Optimization (Avoid noise if too far away)
return h + snoise(pos * 3.0) * 0.1 + pos.y * 0.9;
}
float tree(vec3 pos)
{
pos.y -= 0.5;
float s = sin(pos.y*0.03);
float c = cos(pos.y*0.03);
mat2 m = mat2(c, -s, s, c);
vec3 p = vec3(m*pos.xy, pos.z);
float width = 1.0 - pos.y * 0.02 - clamp(sin(pos.y * 8.0) * 0.1, 0.05, 0.1);
return max(length(p.xz) - width, pos.y - 12.5);
}
vec2 scene(vec3 pos)
{
float vtree = tree(pos);
#ifdef GRASS
float vgrass = grass(pos);
float v = min(vtree, vgrass);
#else
float v = vtree;
#endif
return vec2(v, v == vtree ? 2.0 : 1.0);
}
vec3 normal(vec3 pos)
{
vec2 eps = vec2(1e-3, 0.0);
float h = scene(pos).x;
return normalize(vec3(
scene(pos-eps.xyy).x-h,
scene(pos-eps.yxy).x-h,
scene(pos-eps.yyx).x-h
));
}
float plantsShadow(vec3 rayPos, vec3 rayDir)
{
// Soft shadow taken from iq
float k = 6.0;
float t = 0.0;
float s = 1.0;
for (int i = 0; i < 30; i++)
{
vec3 pos = rayPos+rayDir*t;
vec2 res = scene(pos);
if (res.x < 0.001) return 0.0;
s = min(s, k*res.x/t);
t += max(res.x, 0.01);
}
return s*s*(3.0 - 2.0*s);
}
// Ray-traced parts of scene
vec4 intersectWater(vec3 rayPos, vec3 rayDir)
{
float h = sin(20.5 + iGlobalTime * 2.0) * 0.03;
float t = -(rayPos.y + 2.5 + h) / rayDir.y;
return vec4(0.0, 1.0, 0.0, t);
}
vec4 intersectSand(vec3 rayPos, vec3 rayDir)
{
return intersectSphere(rayPos, rayDir, vec3(0.0,-24.1,0.0), 24.1);
}
vec4 intersectTreasure(vec3 rayPos, vec3 rayDir)
{
return vec4(0.0);
}
vec4 intersectLeaf(vec3 rayPos, vec3 rayDir, float openAmount)
{
vec3 dir = normalize(vec3(0.0, 1.0, openAmount));
float offset = 0.0;
vec4 res = intersectPlane(rayPos, rayDir, dir, 0.0);
vec3 pos = rayPos+rayDir*res.w;
#ifdef RAGGED_LEAVES
offset = snoise(pos*0.8) * 0.3;
#endif
if (pos.y > 0.0 || length(pos * vec3(0.9, 2.0, 1.0)) > 4.0 - offset) res.w = 0.0;
vec4 res2 = intersectPlane(rayPos, rayDir, vec3(dir.xy, -dir.z), 0.0);
pos = rayPos+rayDir*res2.w;
#ifdef RAGGED_LEAVES
offset = snoise(pos*0.8) * 0.3;
#endif
if (pos.y > 0.0 || length(pos * vec3(0.9, 2.0, 1.0)) > 4.0 - offset) res2.w = 0.0;
if (res2.w > 0.0 && res2.w < res.w || res.w <= 0.0)
res = res2;
return res;
}
vec4 leaves(vec3 rayPos, vec3 rayDir)
{
float t = 1e20;
vec3 n = vec3(0.0);
rayPos -= leavesPos;
float sway = impulse(15.0, fract(iGlobalTime / PI * 0.125));
float upDownSway = sway * -sin(iGlobalTime) * 0.06;
float openAmount = sway * max(-cos(iGlobalTime) * 0.4, 0.0);
float angleOffset = -0.1;
for (float k = 0.0; k < 6.2; k += 0.75)
{
// Left-right
float alpha = k + (k - PI) * sway * 0.015;
vec3 p = rotate(rayPos, alpha);
vec3 d = rotate(rayDir, alpha);
// Up-down
angleOffset *= -1.0;
float theta = -0.4 +
angleOffset +
cos(k) * 0.35 +
upDownSway +
sin(iGlobalTime+k*10.0) * 0.03 * (sway + 0.2);
p = rotate(p.xzy, theta).xzy;
d = rotate(d.xzy, theta).xzy;
// Shift
p -= vec3(5.4, 0.0, 0.0);
// Intersect individual leaf
vec4 res = intersectLeaf(p, d, 1.0+openAmount);
if (res.w > 0.0 && res.w < t)
{
t = res.w;
n = res.xyz;
}
}
return vec4(n, t);
}
// Lighting
float shadow(vec3 rayPos, vec3 rayDir)
{
float s = 1.0;
// Intersect sand
//vec4 resSand = intersectSand(rayPos, rayDir);
//if (resSand.w > 0.0) return 0.0;
// Intersect plants
s = min(s, plantsShadow(rayPos, rayDir));
if (s < 0.0001) return 0.0;
// Intersect leaves
vec4 resLeaves = leaves(rayPos, rayDir);
if (resLeaves.w > 0.0 && resLeaves.w < 1e7) return 0.0;
return s;
}
vec3 light(vec3 p, vec3 n)
{
float s = 1.0;
#ifdef SHADOWS
s = shadow(p-sunDir*0.01, -sunDir);
#endif
vec3 col = sunCol * min(max(dot(n, sunDir), 0.0), s);
col += skyCol * (-n.y * 0.5 + 0.5) * 0.3;
return col;
}
vec3 lightLeaves(vec3 p, vec3 n)
{
float s = 1.0;
#ifdef SHADOWS
s = shadow(p-sunDir*0.01, -sunDir);
#endif
float ao = min(length(p - leavesPos) * 0.1, 1.0);
float ns = dot(n, sunDir);
float d = sqrt(max(ns, 0.0));
vec3 col = sunCol * min(d, s);
col += sunCol * max(-ns, 0.0) * vec3(0.3, 0.3, 0.1) * ao;
col += skyCol * (-n.y * 0.5 + 0.5) * 0.3 * ao;
return col;
}
vec3 sky(vec3 n)
{
return skyCol * (1.0 - n.y * 0.8);
}
// Ray-marching
vec4 plants(vec3 rayPos, vec3 rayDir)
{
float t = 0.0;
for (int i = 0; i < 40; i++)
{
vec3 pos = rayPos+rayDir*t;
vec2 res = scene(pos);
float h = res.x;
if (h < 0.001)
{
vec3 col = res.y == 2.0 ? treeCol : grassCol;
float uvFact = res.y == 2.0 ? 1.0 : 10.0;
vec3 n = normal(pos);
vec2 uv = vec2(n.x, pos.y * 0.5) * 0.2 * uvFact;
vec3 tex = texture2D(iChannel0, uv).rgb * 0.6 + 0.4;
float ao = min(length(pos - leavesPos) * 0.1, 1.0);
return vec4(col * light(pos, n) * ao * tex, t);
}
t += h;
}
return vec4(sky(rayDir), 1e8);
}
// Final combination
vec3 traceReflection(vec3 rayPos, vec3 rayDir)
{
vec3 col = vec3(0.0);
float t = 1e20;
// Intersect plants
vec4 resPlants = plants(rayPos, rayDir);
if (resPlants.w > 0.0 && resPlants.w < t)
{
t = resPlants.w;
col = resPlants.xyz;
}
// Intersect leaves
vec4 resLeaves = leaves(rayPos, rayDir);
if (resLeaves.w > 0.0 && resLeaves.w < t)
{
vec3 pos = rayPos + rayDir * resLeaves.w;
vec2 uv = (pos.xz - leavesPos.xz) * 0.3;
float tex = texture2D(iChannel0, uv).r * 0.6 + 0.5;
t = resLeaves.w;
col = leavesCol * lightLeaves(pos, resLeaves.xyz) * tex;
}
if (t > 1e7) return sky(rayDir);
return col;
}
vec3 trace(vec3 rayPos, vec3 rayDir)
{
vec3 col = vec3(0.0);
float t = 1e20;
// Intersect sand
vec4 resSand = intersectSand(rayPos, rayDir);
if (resSand.w > 0.0)
{
vec3 pos = rayPos + rayDir * resSand.w;
t = resSand.w;
col = sandCol * light(pos, resSand.xyz);
}
// Intersect treasure chest
vec4 resTreasure = intersectTreasure(rayPos, rayDir);
if (resTreasure.w > 0.0 && resTreasure.w < t)
{
vec3 pos = rayPos + rayDir * resTreasure.w;
t = resTreasure.w;
col = leavesCol * light(pos, resTreasure.xyz);
}
// Intersect leaves
vec4 resLeaves = leaves(rayPos, rayDir);
if (resLeaves.w > 0.0 && resLeaves.w < t)
{
vec3 pos = rayPos + rayDir * resLeaves.w;
vec2 uv = (pos.xz - leavesPos.xz) * 0.3;
float tex = texture2D(iChannel0, uv).r * 0.6 + 0.5;
t = resLeaves.w;
col = leavesCol * lightLeaves(pos, resLeaves.xyz) * tex;
}
// Intersect plants
vec4 resPlants = plants(rayPos, rayDir);
if (resPlants.w > 0.0 && resPlants.w < t)
{
t = resPlants.w;
col = resPlants.xyz;
}
// Intersect water
vec4 resWater = intersectWater(rayPos, rayDir);
if (resWater.w > 0.0 && resWater.w < t)
{
vec3 pos = rayPos + rayDir * resWater.w;
float dist = t - resWater.w;
vec3 n = bump(pos, rayDir);
float ct = -min(dot(n,rayDir), 0.0);
float fresnel = 0.9 - 0.9 * pow(1.0 - ct, 5.0);
vec3 trans = col * exp(-dist * vec3(1.0, 0.7, 0.4) * 3.0);
vec3 reflDir = normalize(reflect(rayDir, n));
vec3 refl = sky(reflDir);
#ifdef REFLECTIONS
if (dot(pos, rayDir) < -2.0)
refl = traceReflection(pos, reflDir).rgb;
#endif
t = resWater.w;
col = mix(refl, trans, fresnel);
}
if (t > 1e7) return sky(rayDir);
return col;
}
// Ray-generation
vec3 camera(vec2 px)
{
vec2 rd = (px / iResolution.yy - vec2(iResolution.x/iResolution.y*0.5-0.5, 0.0)) * 2.0 - 1.0;
float t = sin(iGlobalTime * 0.1) * 0.2;
vec3 rayDir = normalize(vec3(rd.x, rd.y, 1.0));
vec3 rayPos = vec3(0.0, 3.0, -18.0);
return trace(rayPos, rayDir);
}
void main(void)
{
#ifdef HEAVY_AA
vec3 col = camera(gl_FragCoord.xy+vec2(0.0,0.5))*0.25;
col += camera(gl_FragCoord.xy+vec2(0.25,0.0))*0.25;
col += camera(gl_FragCoord.xy+vec2(0.5,0.75))*0.25;
col += camera(gl_FragCoord.xy+vec2(0.75,0.25))*0.25;
#else
vec3 col = camera(gl_FragCoord.xy);
#ifdef LIGHT_AA
col = col * 0.5 + camera(gl_FragCoord.xy+vec2(0.5,0.5))*0.5;
#endif
#endif
#ifdef TONEMAP
// Optimized Haarm-Peter Duikers curve
vec3 x = max(vec3(0.0),col*exposure-0.004);
col = (x*(6.2*x+.5))/(x*(6.2*x+1.7)+0.06);
#else
col = pow(col, vec3(0.4545));
#endif
gl_FragColor = vec4(col, 1.0);
}
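As a side note, the analytic ray/sphere test used by intersectSphere earlier in this shader (solve |rpos + t*rdir - pos|^2 = rad^2 and keep the nearer positive root) can be written as a small Python sketch; the sphere in the example call is illustrative, not taken from the scene:

import math

def intersect_sphere(ray_pos, ray_dir, center, radius):
    op = tuple(c - p for c, p in zip(center, ray_pos))
    b = sum(o * d for o, d in zip(op, ray_dir))       # projection of op onto the ray
    det = b * b - sum(o * o for o in op) + radius * radius
    if det <= 0.0:
        return None                                   # ray misses the sphere
    t = b - math.sqrt(det)                            # nearer of the two roots
    return t if t > 1e-5 else None

print(intersect_sphere((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0))   # 4.0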

68
samples/GLSL/shader.fp Normal file
View File

@@ -0,0 +1,68 @@
/*
* Copyright (C) 2010 Josh A. Beam
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
* OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
* OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
* ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
const int NUM_LIGHTS = 3;
const vec3 AMBIENT = vec3(0.1, 0.1, 0.1);
const float MAX_DIST = 2.5;
const float MAX_DIST_SQUARED = MAX_DIST * MAX_DIST;
uniform vec3 lightColor[NUM_LIGHTS];
varying vec3 fragmentNormal;
varying vec3 cameraVector;
varying vec3 lightVector[NUM_LIGHTS];
void
main()
{
// initialize diffuse/specular lighting
vec3 diffuse = vec3(0.0, 0.0, 0.0);
vec3 specular = vec3(0.0, 0.0, 0.0);
// normalize the fragment normal and camera direction
vec3 normal = normalize(fragmentNormal);
vec3 cameraDir = normalize(cameraVector);
// loop through each light
for(int i = 0; i < NUM_LIGHTS; ++i) {
// calculate distance between 0.0 and 1.0
float dist = min(dot(lightVector[i], lightVector[i]), MAX_DIST_SQUARED) / MAX_DIST_SQUARED;
float distFactor = 1.0 - dist;
// diffuse
vec3 lightDir = normalize(lightVector[i]);
float diffuseDot = dot(normal, lightDir);
diffuse += lightColor[i] * clamp(diffuseDot, 0.0, 1.0) * distFactor;
// specular
vec3 halfAngle = normalize(cameraDir + lightDir);
vec3 specularColor = min(lightColor[i] + 0.5, 1.0);
float specularDot = dot(normal, halfAngle);
specular += specularColor * pow(clamp(specularDot, 0.0, 1.0), 16.0) * distFactor;
}
vec4 sample = vec4(1.0, 1.0, 1.0, 1.0);
gl_FragColor = vec4(clamp(sample.rgb * (diffuse + AMBIENT) + specular, 0.0, 1.0), sample.a);
}
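Read as pseudo-code, the loop above does per-light Blinn-Phong shading with a quadratic distance fade; a hedged Python sketch of the same arithmetic (vectors as plain tuples, function names and test values illustrative) looks like this:

import math

MAX_DIST = 2.5
AMBIENT = (0.1, 0.1, 0.1)

def normalize(v):
    l = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / l for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, camera_vector, lights):
    # lights: list of (light_vector, light_color) pairs, as in the shader
    n, cam = normalize(normal), normalize(camera_vector)
    diffuse, specular = [0.0] * 3, [0.0] * 3
    for lvec, lcol in lights:
        fade = 1.0 - min(dot(lvec, lvec), MAX_DIST ** 2) / MAX_DIST ** 2
        ldir = normalize(lvec)
        half = normalize(tuple(c + d for c, d in zip(cam, ldir)))
        d = max(dot(n, ldir), 0.0)                    # diffuse term
        s = max(dot(n, half), 0.0) ** 16              # specular term
        for i in range(3):
            diffuse[i] += lcol[i] * d * fade
            specular[i] += min(lcol[i] + 0.5, 1.0) * s * fade
    # white albedo, like the constant `sample` in the shader
    return tuple(min(AMBIENT[i] + diffuse[i] + specular[i], 1.0) for i in range(3))

print(shade((0, 0, 1), (0, 0, 1), [((0.5, 0.5, 1.0), (1.0, 0.9, 0.8))]))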

View File

@@ -0,0 +1,6 @@
<div class="entry">
<h1>{{title}}</h1>
<div class="body">
{{body}}
</div>
</div>

View File

@@ -0,0 +1,11 @@
<div class="post">
<h1>By {{fullName author}}</h1>
<div class="body">{{body}}</div>
<h1>Comments</h1>
{{#each comments}}
<h2>By {{fullName author}}</h2>
<div class="body">{{body}}</div>
{{/each}}
</div>

View File

@@ -0,0 +1,10 @@
; editorconfig.org
root = true
[*]
indent_style = space
indent_size = 4
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

42
samples/Idris/Chars.idr Normal file
View File

@@ -0,0 +1,42 @@
module Prelude.Char
import Builtins
isUpper : Char -> Bool
isUpper x = x >= 'A' && x <= 'Z'
isLower : Char -> Bool
isLower x = x >= 'a' && x <= 'z'
isAlpha : Char -> Bool
isAlpha x = isUpper x || isLower x
isDigit : Char -> Bool
isDigit x = (x >= '0' && x <= '9')
isAlphaNum : Char -> Bool
isAlphaNum x = isDigit x || isAlpha x
isSpace : Char -> Bool
isSpace x = x == ' ' || x == '\t' || x == '\r' ||
x == '\n' || x == '\f' || x == '\v' ||
x == '\xa0'
isNL : Char -> Bool
isNL x = x == '\r' || x == '\n'
toUpper : Char -> Char
toUpper x = if (isLower x)
then (prim__intToChar (prim__charToInt x - 32))
else x
toLower : Char -> Char
toLower x = if (isUpper x)
then (prim__intToChar (prim__charToInt x + 32))
else x
isHexDigit : Char -> Bool
isHexDigit x = elem (toUpper x) hexChars where
hexChars : List Char
hexChars = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
'A', 'B', 'C', 'D', 'E', 'F']

267
samples/JSON/composer.lock generated Normal file
View File

@@ -0,0 +1,267 @@
{
"_readme": [
"This file locks the dependencies of your project to a known state",
"Read more about it at http://getcomposer.org/doc/01-basic-usage.md#composer-lock-the-lock-file"
],
"hash": "d8ff8fcb71824f5199f3499bf71862f1",
"packages": [
{
"name": "arbit/system-process",
"version": "1.0",
"source": {
"type": "git",
"url": "https://github.com/Arbitracker/system-process.git",
"reference": "1.0"
},
"dist": {
"type": "zip",
"url": "https://api.github.com/repos/Arbitracker/system-process/zipball/1.0",
"reference": "1.0",
"shasum": ""
},
"type": "library",
"autoload": {
"psr-0": {
"SystemProcess": "src/main/php/"
}
},
"notification-url": "http://packagist.org/downloads/",
"description": "System process execution library",
"time": "2013-03-31 12:42:56"
},
{
"name": "pdepend/staticReflection",
"version": "0.1",
"source": {
"type": "git",
"url": "https://github.com/manuelpichler/staticReflection.git",
"reference": "origin/master"
},
"type": "library"
},
{
"name": "qafoo/rmf",
"version": "dev-master",
"source": {
"type": "git",
"url": "https://github.com/Qafoo/REST-Micro-Framework.git",
"reference": "5f43983f15a8aa12be42ad6068675d4008bfb9ed"
},
"dist": {
"type": "zip",
"url": "https://api.github.com/repos/Qafoo/REST-Micro-Framework/zipball/5f43983f15a8aa12be42ad6068675d4008bfb9ed",
"reference": "5f43983f15a8aa12be42ad6068675d4008bfb9ed",
"shasum": ""
},
"type": "library",
"autoload": {
"psr-0": {
"Qafoo\\RMF": "src/main/"
}
},
"description": "Very simple VC framework which makes it easy to build HTTP applications / REST webservices",
"support": {
"source": "https://github.com/Qafoo/REST-Micro-Framework/tree/master",
"issues": "https://github.com/Qafoo/REST-Micro-Framework/issues"
},
"time": "2012-12-07 13:33:01"
},
{
"name": "twig/twig",
"version": "1.6.0",
"source": {
"type": "git",
"url": "git://github.com/fabpot/Twig.git",
"reference": "v1.6.0"
},
"dist": {
"type": "zip",
"url": "https://github.com/fabpot/Twig/zipball/v1.6.0",
"reference": "v1.6.0",
"shasum": ""
},
"require": {
"php": ">=5.2.4"
},
"type": "library",
"autoload": {
"psr-0": {
"Twig_": "lib/"
}
},
"license": [
"BSD"
],
"authors": [
{
"name": "Fabien Potencier",
"email": "fabien@symfony.com"
},
{
"name": "Armin Ronacher",
"email": "armin.ronacher@active-4.com"
}
],
"description": "Twig, the flexible, fast, and secure template language for PHP",
"homepage": "http://twig.sensiolabs.org",
"keywords": [
"templating"
],
"time": "2012-02-03 23:34:52"
},
{
"name": "twitter/bootstrap",
"version": "0.1",
"source": {
"type": "git",
"url": "https://github.com/twitter/bootstrap/",
"reference": "origin/master"
},
"type": "library"
},
{
"name": "zetacomponents/base",
"version": "1.8",
"source": {
"type": "git",
"url": "https://github.com/zetacomponents/Base.git",
"reference": "1.8"
},
"dist": {
"type": "zip",
"url": "https://github.com/zetacomponents/Base/zipball/1.8",
"reference": "1.8",
"shasum": ""
},
"type": "library",
"autoload": {
"classmap": [
"src"
]
},
"license": [
"apache2"
],
"authors": [
{
"name": "Sergey Alexeev"
},
{
"name": "Sebastian Bergmann"
},
{
"name": "Jan Borsodi"
},
{
"name": "Raymond Bosman"
},
{
"name": "Frederik Holljen"
},
{
"name": "Kore Nordmann"
},
{
"name": "Derick Rethans"
},
{
"name": "Vadym Savchuk"
},
{
"name": "Tobias Schlitt"
},
{
"name": "Alexandru Stanoi"
}
],
"description": "The Base package provides the basic infrastructure that all packages rely on. Therefore every component relies on this package.",
"homepage": "https://github.com/zetacomponents",
"time": "2009-12-21 04:14:16"
},
{
"name": "zetacomponents/graph",
"version": "1.5",
"source": {
"type": "git",
"url": "https://github.com/zetacomponents/Graph.git",
"reference": "1.5"
},
"dist": {
"type": "zip",
"url": "https://github.com/zetacomponents/Graph/zipball/1.5",
"reference": "1.5",
"shasum": ""
},
"type": "library",
"autoload": {
"classmap": [
"src"
]
},
"license": [
"apache2"
],
"authors": [
{
"name": "Sergey Alexeev"
},
{
"name": "Sebastian Bergmann"
},
{
"name": "Jan Borsodi"
},
{
"name": "Raymond Bosman"
},
{
"name": "Frederik Holljen"
},
{
"name": "Kore Nordmann"
},
{
"name": "Derick Rethans"
},
{
"name": "Vadym Savchuk"
},
{
"name": "Tobias Schlitt"
},
{
"name": "Alexandru Stanoi"
},
{
"name": "Lars Jankowski"
},
{
"name": "Elger Thiele"
},
{
"name": "Michael Maclean"
}
],
"description": "A component for creating pie charts, line graphs and other kinds of diagrams.",
"homepage": "https://github.com/zetacomponents",
"time": "2009-12-21 04:26:17"
}
],
"packages-dev": [
],
"aliases": [
],
"minimum-stability": "stable",
"stability-flags": {
"qafoo/rmf": 20,
"arbit/system-process": 0
},
"platform": [
],
"platform-dev": [
]
}

Some files were not shown because too many files have changed in this diff.