Compare commits

...

318 Commits

Author SHA1 Message Date
Arfon Smith
39037d5bfb Merge pull request #1803 from github/cut-release-v4.2.0
Bumping to v4.2.0
2014-12-02 12:36:05 -06:00
Arfon Smith
31d882b07e Merge branch 'master' into cut-release-v4.2.0
Conflicts:
	grammars.yml
2014-12-02 10:57:53 -06:00
Arfon Smith
fd9275b213 Merge pull request #1702 from ellemenno/loomscript-support
add language declaration and samples for LoomScript
2014-12-02 10:54:36 -06:00
Arfon Smith
cfa63cff35 Merge pull request #900 from hoelzro/master
Update Perl 6 samples
2014-12-02 10:52:45 -06:00
Arfon Smith
5e6fd11cc2 Updating Oz tmbundle 2014-12-01 15:08:22 -06:00
Arfon Smith
62a8b52df4 Bumping to v4.2.0 2014-12-01 14:19:08 -06:00
Adam Roben
783670095c Merge pull request #1802 from wmertens/master
Add grammar for Nix
2014-12-01 13:05:24 -05:00
Wout Mertens
23cfa86f93 Add grammar for Nix 2014-12-01 18:50:56 +01:00
ellemenno
211cb9567a refactor heuristic tests to use new helper 2014-12-01 01:37:55 -05:00
ellemenno
1e68a45515 add test of ls disambiguation 2014-12-01 01:30:14 -05:00
ellemenno
72c00f869c add textmate scope for loomscript 2014-12-01 01:30:14 -05:00
ellemenno
c76137efc0 improve regex for loomscript 2014-12-01 01:30:13 -05:00
ellemenno
88f196e4d4 add a heuristic to disambiguate LiveScript from LoomScript
Keying off of `package {`, since LoomScript code must be enclosed in a
package definition, whereas that would be invalid LiveScript
2014-12-01 01:28:33 -05:00
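
The `package {` check described above is easy to picture in isolation. A minimal standalone sketch of the idea in Ruby — the regex is illustrative, not the exact rule merged into `lib/linguist/heuristics.rb`:

```ruby
# Sketch of the LiveScript/LoomScript disambiguation described above.
# LoomScript source must live inside a `package { ... }` declaration,
# which would be invalid LiveScript, so keying off that token is enough.
def loom_or_livescript(source)
  source =~ /^\s*package\s*[\w.\/*]*\s*\{/ ? "LoomScript" : "LiveScript"
end

puts loom_or_livescript("package demo { public class Main {} }") # => LoomScript
puts loom_or_livescript("square = (x) -> x * x")                 # => LiveScript
```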
ellemenno
4fe5980065 add language declaration and samples for LoomScript
LoomScript is the scripting language for the Loom SDK.
It has an ActionScript3-like syntax with added C#-esque capabilities.

Loom SDK: https://github.com/LoomSDK/LoomSDK
2014-12-01 01:03:03 -05:00
Rob Hoelz
7c7b1fb9c4 Reorder extensions for Perl 6 2014-11-30 22:35:52 -06:00
Rob Hoelz
ed3d38cf05 Create Perl6 heuristic 2014-11-30 22:35:52 -06:00
Rob Hoelz
837e9a6325 Add a bunch of Perl 6 sample files 2014-11-30 22:28:06 -06:00
Rob Hoelz
1364e9be51 Add .t as a valid Perl/Perl6 file extension 2014-11-30 22:28:06 -06:00
Arfon Smith
2fbfaf448d Merge pull request #1800 from github/isabelle-grammar
Isabelle grammar
2014-11-30 21:58:08 -06:00
Arfon Smith
bf82caccfc Merge branch 'master' into isabelle-grammar
Conflicts:
	grammars.yml
2014-11-30 21:53:02 -06:00
Arfon Smith
325dbc8e16 Merge pull request #1698 from sebgod/add-mercury-interpreter
languages.yml: added an interpreter entry to Mercury section
2014-11-30 21:28:04 -06:00
Arfon Smith
bd2fb0af51 Merge pull request #1790 from pchaigno/gradle
Support for gradle files
2014-11-30 21:24:30 -06:00
Arfon Smith
3c904dff61 Merge pull request #1798 from github/google-apps
Google apps
2014-11-30 21:16:50 -06:00
Arfon Smith
9b22b2973f Merge branch 'master' into google-apps
Conflicts:
	lib/linguist/heuristics.rb
	lib/linguist/samples.json
2014-11-30 21:11:59 -06:00
Arfon Smith
025bb35ac7 Merge pull request #1673 from blakeembrey/support-raml
Add support for RAML language
2014-11-30 21:05:10 -06:00
Arfon Smith
7fb5d0cadd Merge pull request #1782 from anpar/master
Add Oz to recognized languages
2014-11-30 20:57:55 -06:00
Arfon Smith
8157c6f56b Merge pull request #1796 from github/cool
Cool
2014-11-30 20:49:26 -06:00
Arfon Smith
0154c21c3d Adding Cool grammar 2014-11-30 20:45:24 -06:00
Arfon Smith
648596dbb2 Be explicit about tm_scope 2014-11-30 15:24:33 -06:00
Arfon Smith
212c74d8a3 Merge branch 'master' into cool
Conflicts:
	lib/linguist/heuristics.rb
	lib/linguist/languages.yml
	lib/linguist/samples.json
2014-11-30 15:23:09 -06:00
Antoine Paris
4495e15fa7 Misspelling correction 2014-11-30 22:07:55 +01:00
Antoine Paris
da96e11b37 Add grammar for Oz 2014-11-30 22:01:39 +01:00
Antoine Paris
b7a9843770 Corrections by @pchaigno 2014-11-30 21:18:23 +01:00
Arfon Smith
55432774c7 Merge pull request #1795 from github/mercury-vendor
Removing Mercury directory from vendor.yml
2014-11-30 08:00:36 -06:00
Arfon Smith
ca76802ee4 Removing Mercury directory from vendor.yml 2014-11-30 07:55:55 -06:00
Gerwin Klein
cec54837bc add language grammar for Isabelle theorem prover 2014-11-30 17:17:13 +11:00
Arfon Smith
e0c35b0665 Merge pull request #1706 from pchaigno/mm-xml
Add .mm as an XML extension with heuristic rule
2014-11-29 23:07:58 -06:00
Brandon Keepers
865980b8f7 Merge pull request #1791 from pchaigno/remove-old-heuristic
Remove old test forgotten in #1788
2014-11-29 12:32:28 -06:00
Paul Chaignon
9367a4797f Remove old test forgotten in #1788 2014-11-28 23:14:17 -05:00
Paul Chaignon
4ed58c743d Support for gradle files 2014-11-28 23:00:35 -05:00
Arfon Smith
cfd95360cb Merge pull request #1627 from github/1036-local
Disambiguate C, C++, Objective-C
2014-11-28 18:05:16 -06:00
Brandon Keepers
22144e79d3 Merge pull request #1787 from github/move-shebang
Move shebang (updated)
2014-11-28 18:02:04 -06:00
Brandon Keepers
3acbf06beb Merge pull request #1788 from github/refactor-heuristics
Refactor heuristics (updated)
2014-11-28 17:59:43 -06:00
Brandon Keepers
7b41346db8 Merge branch 'refactor-heuristics' into 1036-local
* refactor-heuristics: (43 commits)
  update docs
  Clean up heuristic logic
  Allow disambiguate to return an Array
  Rename .create to .disambiguate
  docs
  Remove inactive heuristics
  Refactor heuristics
  Not going back
  docs
  Move call method into existing Classifier class
  Try strategies until one language is returned
  Remove unneeded empty blob check
  Add F# and GLSL samples.  Add Forth and GLSL extension .fs. Add heuristic to disambiguate between F#, Forth, and GLSL.
  byebug requires ruby 2.0
  Remove test for removed extension
  Fix typo in test
  add rake interpreter
  add python3 interpreter
  Remove old wrong_shebang.rb sample
  Add byebug
  ...

Conflicts:
	lib/linguist/heuristics.rb
	test/test_heuristics.rb
2014-11-28 17:58:00 -06:00
Brandon Keepers
878b321b89 Merge remote-tracking branch 'origin/master' into move-shebang
* origin/master:
  Tweak docs
2014-11-28 17:41:10 -06:00
Brandon Keepers
a903123cb8 Merge pull request #1663 from github/strategies
Refactor detection into strategies
2014-11-28 17:40:12 -06:00
Brandon Keepers
577fb95384 Tweak docs 2014-11-28 17:36:14 -06:00
Brandon Keepers
770a1d4553 update docs 2014-11-28 17:07:15 -06:00
Brandon Keepers
c038b51941 Clean up heuristic logic 2014-11-28 17:03:01 -06:00
Brandon Keepers
4bebcef6ef Allow disambiguate to return an Array 2014-11-28 16:55:00 -06:00
Brandon Keepers
b8685103d0 Rename .create to .disambiguate 2014-11-28 14:41:52 -06:00
Brandon Keepers
26d789612b docs 2014-11-28 14:40:02 -06:00
Brandon Keepers
10de952ed6 Remove Linguist.interpreter_from_shebang 2014-11-28 14:14:40 -06:00
Brandon Keepers
2517650ecb Fix shebang without path 2014-11-28 14:14:10 -06:00
Brandon Keepers
47b739527a Treat lines as enumerator and not array 2014-11-28 13:55:55 -06:00
Brandon Keepers
88f08803ee require shebang when building samples 2014-11-28 12:34:41 -06:00
Brandon Keepers
c05717d15c docs 2014-11-28 12:27:48 -06:00
Brandon Keepers
bc66f558b9 Remove inactive heuristics
We can add these back when we’re ready to enable them.
2014-11-28 12:17:52 -06:00
Antoine Paris
71e1bd9af2 Misspelling correction 2014-11-28 17:42:54 +01:00
Antoine Paris
57b0739219 Add some examples for Oz 2014-11-28 17:40:49 +01:00
Antoine Paris
d60241cc86 Add grammar for Oz 2014-11-28 17:28:22 +01:00
Antoine Paris
d725e8e385 Add Oz to languages.yml 2014-11-28 17:16:32 +01:00
Brandon Keepers
034cb25099 Refactor heuristics 2014-11-28 09:43:59 -06:00
Brandon Keepers
fbc0947420 Not going back 2014-11-28 08:14:30 -06:00
Brandon Keepers
9020d7c044 Deprecate find_by_shebang
This class doesn’t need to know about shebangs.
2014-11-27 13:18:51 -05:00
Brandon Keepers
ffe2ccf1f6 Don't bother creating an instance 2014-11-27 13:17:28 -05:00
Brandon Keepers
434ab9f2c0 Add tests for shebangs 2014-11-27 13:09:05 -05:00
Brandon Keepers
cd3defda42 Simplify shebang detection 2014-11-27 12:44:55 -05:00
Brandon Keepers
fd85f7f112 consolidate shebang logic 2014-11-27 12:18:23 -05:00
Brandon Keepers
e42ccf0d82 docs 2014-11-27 11:40:48 -05:00
Brandon Keepers
bf4baff363 Move call method into existing Classifier class 2014-11-27 11:29:38 -05:00
Brandon Keepers
c1a9737313 Try strategies until one language is returned 2014-11-27 11:12:47 -05:00
Brandon Keepers
a4081498f8 Remove unneeded empty blob check 2014-11-27 10:55:03 -05:00
Brandon Keepers
9efd923382 Merge remote-tracking branch 'origin/master' into strategies
* origin/master: (165 commits)
  Add F# and GLSL samples.  Add Forth and GLSL extension .fs. Add heuristic to disambiguate between F#, Forth, and GLSL.
  byebug requires ruby 2.0
  Remove test for removed extension
  Fix typo in test
  add rake interpreter
  add python3 interpreter
  Remove old wrong_shebang.rb sample
  Add byebug
  Link to Lightshow in CONTRIBUTING.md
  Switch to a better F# grammar
  Bump Rugged again
  Checkout the master for testing
  Rugged 0.22.0b3
  Reordering
  Bump version to 4.0.3
  Add some docs for tm_scope
  Change NONE to none
  Checking other case for Chart.jS
  Test that all languages have grammars
  Fix RHTML's tm_scope
  ...

Conflicts:
	lib/linguist/language.rb
2014-11-27 10:52:44 -05:00
Arfon Smith
b16149d641 Merge pull request #1758 from larsbrinkhoff/fsharp-glsl
Disambiguate .fs between F#, Forth, and GLSL
2014-11-27 07:41:14 -06:00
Lars Brinkhoff
2d940e72c2 Add F# and GLSL samples. Add Forth and GLSL extension .fs.
Add heuristic to disambiguate between F#, Forth, and GLSL.
2014-11-27 06:56:26 +01:00
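
The `.fs` disambiguation added here has to separate three unrelated languages by content alone. A rough standalone sketch of that kind of check — the patterns below are illustrative stand-ins, not the rules that were merged:

```ruby
# Illustrative content sniffing for the .fs extension (Forth vs. F# vs. GLSL).
# The keyword lists are examples of the kind of signal such a heuristic uses.
def guess_fs_language(source)
  if source =~ /^(: |new-device)/                                         # Forth word definitions
    "Forth"
  elsif source =~ /^\s*(#light|import|let|module|namespace|open|type)\b/  # common F# top-level keywords
    "F#"
  elsif source =~ /^\s*(#version|precision|uniform|varying|attribute)\b/  # GLSL shader declarations
    "GLSL"
  end
end

puts guess_fs_language("let square x = x * x")           # => F#
puts guess_fs_language("#version 330 core")              # => GLSL
puts guess_fs_language(": squared ( n -- n*n ) dup * ;") # => Forth
```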
Brandon Keepers
9f103abfb5 Merge pull request #1750 from github/interpreters-in-samples
Fix for interpreters from samples
2014-11-26 16:51:08 -05:00
Brandon Keepers
689a209ed9 Merge remote-tracking branch 'origin/master' into interpreters-in-samples
* origin/master:
  byebug requires ruby 2.0
  Remove test for removed extension
  Merge branch 'master' into 1233-local
  Removing pry runtime dependency
  Moving to fixtures
  Language detection test for non-sample files
  Refactoring of Language.detect
  Try shebang detection if the extension is unknown
  Change unknown extension of PHP sample file
2014-11-26 16:25:15 -05:00
Brandon Keepers
d91a451fc7 Merge pull request #1776 from github/fix-failures
Fix failure from #1731
2014-11-26 16:24:25 -05:00
Brandon Keepers
1ae4672230 byebug requires ruby 2.0 2014-11-26 16:12:43 -05:00
Brandon Keepers
3edf5fd770 Remove test for removed extension
This existed when the test was written, but was removed in https://github.com/github/linguist/pull/1734 and the test got lost in a merge somewhere.
2014-11-26 15:59:16 -05:00
Arfon Smith
412af86cb8 Merge pull request #1538 from github/1233-local
Detection based on the shebang (updated)
2014-11-26 14:47:12 -06:00
Brandon Keepers
5b41ab4774 Fix typo in test 2014-11-26 15:40:51 -05:00
Brandon Keepers
06c0cb916b add rake interpreter 2014-11-26 15:40:40 -05:00
Brandon Keepers
b3a49ce627 add python3 interpreter 2014-11-26 15:40:33 -05:00
Brandon Keepers
0651568bfb Remove old wrong_shebang.rb sample
This was added in a69118bd17, but that test has since been removed.
2014-11-26 15:34:03 -05:00
Brandon Keepers
ce31e23006 Merge remote-tracking branch 'origin/master' into interpreters-in-samples
* origin/master: (30 commits)
  Add byebug
  Link to Lightshow in CONTRIBUTING.md
  Switch to a better F# grammar
  Bump Rugged again
  Checkout the master for testing
  Rugged 0.22.0b3
  Reordering
  Bump version to 4.0.3
  Add some docs for tm_scope
  Change NONE to none
  Checking other case for Chart.jS
  Test that all languages have grammars
  Fix RHTML's tm_scope
  Chart JS is vendored
  Switch to a better grammar for Bro
  reorder again…
  put cjsx at the top
  Use a SQF grammar for SQF files
  move cjsx before iced
  move cjsx before iced
  ...

Conflicts:
	lib/linguist/languages.yml
2014-11-26 15:17:08 -05:00
Brandon Keepers
7ccd8caf71 Merge pull request #1774 from github/byebug
Add byebug
2014-11-26 15:16:06 -05:00
Brandon Keepers
598a7028ea Add byebug 2014-11-26 15:12:55 -05:00
Brandon Keepers
4ed1efe9ce Merge pull request #1741 from github/test-helper
Add test helper to make test env consistent
2014-11-26 15:10:24 -05:00
Brandon Keepers
6a4bf3fa65 Merge pull request #1731 from github/multiple-ext-segments
Support for multiple file extension segments
2014-11-26 15:09:15 -05:00
Brandon Keepers
5b2b3a2b53 Merge remote-tracking branch 'origin/master' into test-helper
* origin/master: (31 commits)
  Link to Lightshow in CONTRIBUTING.md
  Switch to a better F# grammar
  Bump Rugged again
  Checkout the master for testing
  Rugged 0.22.0b3
  Reordering
  Bump version to 4.0.3
  Add some docs for tm_scope
  Change NONE to none
  Checking other case for Chart.jS
  Test that all languages have grammars
  Fix RHTML's tm_scope
  Chart JS is vendored
  Switch to a better grammar for Bro
  reorder again…
  put cjsx at the top
  Use a SQF grammar for SQF files
  move cjsx before iced
  move cjsx before iced
  change component name
  ...

Conflicts:
	test/test_language.rb
2014-11-26 15:07:27 -05:00
Adam Roben
596cd9368f Merge pull request #1773 from github/introduce-lightshow
Link to Lightshow in CONTRIBUTING.md
2014-11-26 11:44:05 -05:00
Adam Roben
f8d50faedb Link to Lightshow in CONTRIBUTING.md
This is a tool for testing grammars with GitHub's syntax highlighter.
2014-11-26 11:21:05 -05:00
Adam Roben
ccc9c197ae Merge pull request #1771 from github/better-fsharp-grammar
Switch to a better F# grammar
2014-11-26 09:35:55 -05:00
Adam Roben
ed2dcc35e8 Switch to a better F# grammar
This fixes many bugs with F# highlighting, and the grammar is being
actively developed and maintained by the fsharp organization on GitHub.
2014-11-26 08:56:02 -05:00
Arfon Smith
208a3ff480 Merge branch 'master' into 1233-local
Conflicts:
	lib/linguist/language.rb
2014-11-25 17:04:43 -06:00
Arfon Smith
8de2cd15ed Merge branch 'master' into 1036-local
Conflicts:
	lib/linguist/heuristics.rb
	lib/linguist/languages.yml
	test/test_heuristics.rb
2014-11-25 13:06:11 -06:00
Brandon Keepers
7cbc4bc144 Merge pull request #1751 from roodboi/master
add .cjsx extension for Facebook’s JSX in CoffeeScript
2014-11-24 11:30:04 -05:00
Vicent Marti
d239e71826 Merge pull request #1765 from github/vmg/rugged-22b3
Rugged 0.22.0b3
2014-11-24 13:37:09 +01:00
Vicent Marti
ecaa2a41c9 Bump Rugged again 2014-11-24 13:32:37 +01:00
Vicent Marti
fc71805489 Checkout the master for testing 2014-11-24 13:25:49 +01:00
Vicent Marti
74d94781cb Rugged 0.22.0b3 2014-11-24 13:05:42 +01:00
Arfon Smith
b556425037 Reordering 2014-11-21 13:10:45 -06:00
Arfon Smith
6131d17c02 Merge pull request #1748 from mrego/xht-extension
Add support for .xht extension which is used in some XHTML files
2014-11-21 12:48:58 -06:00
Adam Roben
875b3157bf Merge pull request #1757 from github/cut-release-v4.0.3
Bump version to 4.0.3
2014-11-21 12:06:38 -05:00
Adam Roben
4ce9048f8d Bump version to 4.0.3 2014-11-21 11:56:17 -05:00
Adam Roben
04f1b1df48 Merge pull request #1756 from github/test-for-grammars
Test that all languages have grammars
2014-11-21 11:54:46 -05:00
Adam Roben
f9c36345c3 Add some docs for tm_scope 2014-11-21 11:53:52 -05:00
Adam Roben
ec3967d080 Change NONE to none
NONE is a little shouty.
2014-11-21 11:52:29 -05:00
Arfon Smith
05a88b5b7e Merge pull request #1754 from github/chart-js
Chart js
2014-11-21 09:30:20 -06:00
Arfon Smith
b6b2cf04a7 Checking other case for Chart.jS 2014-11-21 09:29:28 -06:00
Adam Roben
49247e9ec2 Test that all languages have grammars
This will make CI fail if someone adds a new language but neglects to
add a new grammar for it. This should make it easier for people to
review PRs, as CI will help them to make sure a new grammar gets added.

However, we currently support some languages that have no grammars, and
we may support more in the future. So you can explicitly mark the
language as having no grammar by setting `tm_scope: NONE` in
languages.yml.
2014-11-21 09:48:52 -05:00
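
In outline, such a check just cross-references the two YAML files this repository already carries. A condensed sketch, assuming `languages.yml` entries carry a `tm_scope` and `grammars.yml` maps each grammar source to the scopes it defines (and using the lower-case `none` opt-out adopted in the "Change NONE to none" commit just above):

```ruby
# Sketch of the "every language has a grammar" CI check described above.
require "yaml"

languages    = YAML.load_file("lib/linguist/languages.yml")
known_scopes = YAML.load_file("grammars.yml").values.flatten

missing = languages.reject do |_name, attrs|
  scope = attrs["tm_scope"]
  scope == "none" || known_scopes.include?(scope)
end

abort "Languages without grammars: #{missing.keys.join(', ')}" unless missing.empty?
```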
Adam Roben
6629b75aa6 Merge pull request #1755 from github/fix-rhtml-scope
Fix RHTML's tm_scope
2014-11-21 09:31:07 -05:00
Adam Roben
e702b453ec Fix RHTML's tm_scope
I missed this back in 9595e2ba7e.
2014-11-21 09:29:06 -05:00
Arfon Smith
38190d92fc Chart JS is vendored 2014-11-21 08:24:33 -06:00
Adam Roben
109ca5735b Merge pull request #1753 from github/better-bro-grammar
Switch to a better grammar for Bro
2014-11-21 09:23:28 -05:00
Adam Roben
4dde499f51 Switch to a better grammar for Bro
This grammar seems to be replacing the other ones out there and is
maintained by the Bro organization.
2014-11-21 09:17:19 -05:00
Vicent Marti
5fd18a215e Merge pull request #1752 from github/sqf-grammar
Use a SQF grammar for SQF files
2014-11-21 11:31:50 +01:00
Dimitri Kennedy
b283548c0f reorder again… 2014-11-20 18:36:08 -05:00
Dimitri Kennedy
2352ce77c9 put cjsx at the top 2014-11-20 17:38:38 -05:00
Adam Roben
2054afc741 Use a SQF grammar for SQF files
This produces better highlighting than using the C++ grammar.

The grammar is licensed under the Apache 2.0 license.
2014-11-20 17:22:55 -05:00
Dimitri Kennedy
9d3b9964b5 move cjsx before iced 2014-11-20 17:08:21 -05:00
Dimitri Kennedy
79c1d21a0f move cjsx before iced 2014-11-20 17:08:10 -05:00
Dimitri Kennedy
1d69228872 change component name 2014-11-20 16:49:48 -05:00
Dimitri Kennedy
f5953a09da add example cjsx file 2014-11-20 16:48:22 -05:00
Dimitri Kennedy
a17f6c8ae1 add .cjsx extension for Facebook’s JSX in CoffeeScript 2014-11-20 14:56:09 -05:00
Brandon Keepers
9823af0cb4 Fix for shebang with relative bin
`#!/usr/bin/env bin/linguist` is a valid shebang
2014-11-20 12:50:35 -05:00
Brandon Keepers
45384bd498 More missing interpreters 2014-11-20 12:29:16 -05:00
Brandon Keepers
56bfde998b Only strip minor version off of interpreters
This used to turn `python2.4` into `python`, which causes trouble with
`perl6`, which is a different language definition.
2014-11-20 12:28:30 -05:00
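
These two shebang fixes read most clearly side by side. A standalone sketch of the behaviour the messages describe — the real logic lives in Linguist's shebang handling and is not organised exactly like this:

```ruby
# Sketch of the shebang handling described above:
# * "#!/usr/bin/env bin/linguist" (a relative bin after env) is accepted,
# * only the minor version is stripped, so "python2.4" -> "python2",
#   while "perl6" is left untouched because it names a different language.
def interpreter_from_shebang(first_line)
  return nil unless first_line =~ /\A#!\s*(\S+)(?:\s+(\S+))?/
  path, arg = $1, $2

  # With env, the real interpreter is the argument that follows it.
  name = File.basename(path) == "env" && arg ? arg : path
  name = File.basename(name)

  # Strip only a trailing ".<minor>" version component.
  name.sub(/(\d+)\.\d+\z/, '\1')
end

p interpreter_from_shebang("#!/usr/bin/python2.4")        # => "python2"
p interpreter_from_shebang("#!/usr/bin/env perl6")        # => "perl6"
p interpreter_from_shebang("#!/usr/bin/env bin/linguist") # => "linguist"
```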
Brandon Keepers
870feb8592 Add missing interpreters 2014-11-20 11:27:54 -05:00
Brandon Keepers
2670e2b035 Test that interpreters are defined in languages.yml 2014-11-20 11:21:52 -05:00
Brandon Keepers
eccea65641 Fix for interpreters not getting added to samples.json 2014-11-20 11:14:05 -05:00
Brandon Keepers
231ad86176 sync cached gems 2014-11-20 08:51:56 -05:00
Florian Kaiser
9658b02502 add Chart.js as vendor
http://www.chartjs.org
2014-11-20 10:02:45 +01:00
Manuel Rego Casasnovas
30c6b6e5a1 Add XHTML example file 2014-11-20 00:30:21 +01:00
Manuel Rego Casasnovas
b44e58dd7f Add support for .xht extension which is used in some XHTML files 2014-11-19 23:08:51 +01:00
Vicent Marti
bce31e8b51 Merge pull request #1747 from github/cut-release-v4.0.2
Cut release v4.0.2
2014-11-19 18:12:07 +01:00
Adam Roben
011c654c2a Bump version to v4.0.2 2014-11-19 12:08:49 -05:00
Adam Roben
2457b52658 Update grammars.yml 2014-11-19 12:08:43 -05:00
Arfon Smith
a3adaa6a7b Merge pull request #1745 from github/f-case
Fix failures on case-insensitive filesystem
2014-11-19 07:04:58 -06:00
Brandon Keepers
a6f168d1ac Rename file to avoid case-insensitive collision 2014-11-18 23:22:10 -05:00
Arfon Smith
f792029a20 Merge pull request #1743 from github/codemirror
Codemirror should be considered vendored
2014-11-18 20:02:12 -06:00
Arfon Smith
2a5dd5b224 Adding test for codemirror 2014-11-18 19:34:41 -06:00
Martín Gaitán
fb7dcfd62d Exclude codemirror
An example of a wrong detection due to codemirror is my project: https://github.com/mgaitan/waliki
2014-11-18 20:17:15 -03:00
Brandon Keepers
245a1a92cf Merge remote-tracking branch 'origin/master' into test-helper
* origin/master:
  Add Gemfile.lock sample
  Remove deprecated method
  #all_extensions already includes primary extension
  typo
  remove unused assertion
  Symlink ant.xml to build.xml
  Avoid shadowing variable name
  Update comment
  Make missing sample failure message similar
  Remove blank extensions property
  Fix sample tests
  Add Forth extensions .f and .for; add heuristics for Forth and FORTRAN.
  Add FORTRAN and Forth samples.
  Extensions aren't actually required
  Fix errors from pedantic test
  Make pedantic test actually pedantic
  Removing extensions when they should be filenames
  Adding sample pom.xml files
  Link to contributing docs
  require samples if filename matches multiple languages

Conflicts:
	test/test_pedantic.rb
2014-11-18 16:48:26 -05:00
Brandon Keepers
aa7ab2065b Add test helper to make test env consistent 2014-11-18 16:46:09 -05:00
Brandon Keepers
719f6e876b Merge pull request #1732 from github/filename-matches-multiple-langages
Require samples if filename matches multiple languages
2014-11-18 16:31:19 -05:00
Brandon Keepers
8724dc8ccc Merge pull request #889 from larsbrinkhoff/fortran
FIX: .f misidentified as Fortran
2014-11-18 16:05:23 -05:00
Brandon Keepers
63f9d0bdeb Add Gemfile.lock sample
Gemfile.lock should not actually get classified as Ruby, but we can fix that in another PR.
2014-11-18 15:36:42 -05:00
Brandon Keepers
d7fd12cb32 Remove deprecated method 2014-11-18 15:19:23 -05:00
Brandon Keepers
850ab6dedb #all_extensions already includes primary extension 2014-11-18 15:10:07 -05:00
Brandon Keepers
b20fa497b9 typo 2014-11-18 15:07:36 -05:00
Brandon Keepers
1abc7ee2ef remove unused assertion 2014-11-18 15:04:12 -05:00
Brandon Keepers
d7a032afcd Symlink ant.xml to build.xml
We require samples for explicitly defined filenames that match multiple languages. This is generally a good thing, but in this case they will be identical.
2014-11-18 15:02:59 -05:00
Brandon Keepers
587c764950 Avoid shadowing variable name 2014-11-18 14:57:39 -05:00
Brandon Keepers
1abbcb6435 Update comment 2014-11-18 14:57:32 -05:00
Brandon Keepers
17f3d7005a Make missing sample failure message similar 2014-11-18 14:55:15 -05:00
Brandon Keepers
ac59620728 Remove blank extensions property 2014-11-18 14:48:43 -05:00
Brandon Keepers
ba8b55391d Fix sample tests 2014-11-18 14:48:21 -05:00
Lars Brinkhoff
03c1e725ce Add Forth extensions .f and .for; add heuristics for Forth and FORTRAN. 2014-11-18 20:21:19 +01:00
Lars Brinkhoff
4cefaf2808 Add FORTRAN and Forth samples. 2014-11-18 20:12:39 +01:00
Brandon Keepers
757801e32f Merge remote-tracking branch 'origin/master' into filename-matches-multiple-langages
* origin/master:
  Allow mime-types 2.x to be used with Linguist
  Upgrade to rugged 0.22.0b1
  Mention that languages need to be quite popular
  fix vendor/cache
  Gemfile.lock is no longer considered generated
  Tests for BlobHelper#empty?
  remove reference to empty.js
  Remove more empty samples
  Bail earlier if the file is empty.
  Moving comments
  Use heuristics earlier to inform the rest of the classification process
  Removing inconsistency of `find_by_heuristics` (was sometimes returning nil and sometimes returning an empty array)
  Removing unused array of candidate languages.
  Reworking most heuristics to only return one match
2014-11-18 14:09:15 -05:00
Brandon Keepers
749ea2a580 Merge pull request #1734 from github/just-filenames
Removing extensions when they should be filenames
2014-11-18 14:01:57 -05:00
Adam Roben
dc373fb51f Merge pull request #1737 from github/relax-mime-types
Allow mime-types 2.x to be used with Linguist
2014-11-18 11:47:35 -05:00
Arfon Smith
0443c4db2d Merge pull request #1674 from github/rework-heuristics
Rework heuristics
2014-11-18 10:43:01 -06:00
Adam Roben
d699ba3a98 Allow mime-types 2.x to be used with Linguist
The API is compatible for our purposes, and this allows Linguist to be
used in apps that pull in newer versions of mime-types through other
gems.
2014-11-18 10:46:04 -05:00
Adam Roben
92d2782ceb Merge pull request #1738 from github/update-rugged
Upgrade to rugged 0.22.0b1
2014-11-18 10:45:38 -05:00
Adam Roben
e76ebb1a74 Upgrade to rugged 0.22.0b1
0.21.2 was just released but doesn't contain the Repository::Attributes
code we depend on. 0.22.0b1 has this code.
2014-11-18 10:40:37 -05:00
Arfon Smith
cacde403c0 Merge pull request #1736 from github/aroben-patch-1
Mention that languages need to be quite popular
2014-11-18 08:07:51 -06:00
Adam Roben
906b0ee30e Mention that languages need to be quite popular
The precedent seems to be "hundreds of repos".
2014-11-18 08:48:00 -05:00
Brandon Keepers
cd7549390e Extensions aren't actually required 2014-11-17 20:00:09 -05:00
Brandon Keepers
f30cab30f4 fix vendor/cache 2014-11-17 19:42:22 -05:00
Paul Chaignon
1356d4e579 Remove heuristic rules for .mm files 2014-11-17 19:20:45 -05:00
Brandon Keepers
63c83d014b Fix errors from pedantic test 2014-11-17 18:53:14 -05:00
Brandon Keepers
b8e426d3a3 Make pedantic test actually pedantic
What do you call someone that thinks they are pedantic but actually
aren’t? All the crazy custom parsing in this test was making it so it
wasn’t actually doing anything.
2014-11-17 18:52:53 -05:00
Arfon Smith
c5344da2ba Removing extensions when they should be filenames 2014-11-17 16:44:39 -06:00
Arfon Smith
7606a70bb8 Merge pull request #1733 from github/gemfile-lock-not-generated
Gemfile.lock is no longer considered generated
2014-11-17 16:35:07 -06:00
Arfon Smith
7d850d7c09 Gemfile.lock is no longer considered generated 2014-11-17 16:31:47 -06:00
Arfon Smith
c1b704075e Adding sample pom.xml files 2014-11-17 16:25:03 -06:00
Brandon Keepers
07a6411a75 Link to contributing docs 2014-11-17 16:30:39 -05:00
Brandon Keepers
b32bc5ef47 require samples if filename matches multiple languages 2014-11-17 16:18:56 -05:00
Brandon Keepers
6c106b88c0 Avoid using singular #extension 2014-11-17 15:47:21 -05:00
Adam Roben
f2c9581bac Merge pull request #1730 from github/more-docs
Add CONTRIBUTING.md
2014-11-17 15:28:32 -05:00
Brandon Keepers
c46667581d Use the first extension with languages defined 2014-11-17 15:15:39 -05:00
Adam Roben
59e5ba351c Mention that grammars should be licensed 2014-11-17 15:14:36 -05:00
Adam Roben
a8a710f863 Add a link to CONTRIBUTING.md from the README 2014-11-17 15:10:09 -05:00
Adam Roben
f603b731a9 Add CONTRIBUTING.md
This document tries to explain how to file various common kinds of bug
reports or enhancements.
2014-11-17 15:05:33 -05:00
Brandon Keepers
3ca872cea8 Support for multiple file extension segments 2014-11-17 14:54:22 -05:00
Adam Roben
970953ca12 Merge pull request #1727 from pchaigno/lexer-inform7
Lexer for Inform 7
2014-11-17 14:45:46 -05:00
Vicent Marti
7cf6372519 Version 4.0.1 2014-11-17 18:09:26 +01:00
Paul Chaignon
1d381233e0 Update tm_scope to match case used in Sublime-Inform 2014-11-17 11:19:23 -05:00
Paul Chaignon
6f0c24b90b Remove grammar for Inform 6 2014-11-17 10:56:38 -05:00
Brandon Keepers
f29c172267 Merge pull request #1726 from github/makefile-tests
Fix tests for Makefile change
2014-11-17 10:52:39 -05:00
Paul Chaignon
e9c5598254 Add lexer for Inform 7 using download-grammars script 2014-11-17 10:50:03 -05:00
Adam Roben
dd5728a441 Merge pull request #1728 from github/new-pike-url
Update the URL for the source.pike grammar
2014-11-17 10:45:48 -05:00
Adam Roben
ec1d77c32e Update the URL for the source.pike grammar
It's now hosted on GitHub and has a clearer license.
2014-11-17 10:43:36 -05:00
Paul Chaignon
40887930f9 Lexer for Inform 7 2014-11-17 09:41:35 -05:00
Brandon Keepers
6bf8243014 Fix tests for Makefile change 2014-11-17 08:15:17 -05:00
Brandon Keepers
419805ce9f Merge pull request #1724 from pchaigno/make-type
Programming type for Makefile
2014-11-16 23:17:30 -05:00
Paul Chaignon
81089416a2 Makefile set to programming type 2014-11-16 23:13:31 -05:00
Vicent Marti
efc7799960 Clojure grammar from Atom 2014-11-16 18:29:58 +01:00
Vicent Marti
fcbef97e39 Typo in README 2014-11-16 14:42:56 +01:00
Vicent Marti
8beef260da Merge pull request #1722 from github/vmg/grammar-fixes
Misc. grammar fixes
2014-11-16 14:41:40 +01:00
Vicent Marti
618a5b62ee Revert the changes in download-grammars 2014-11-16 14:40:48 +01:00
Vicent Marti
c579924485 DOCS 2014-11-16 14:25:11 +01:00
Vicent Marti
9b9fadfa19 Use a Racket grammar for Racket 2014-11-16 13:47:19 +01:00
Vicent Marti
daf64010f9 Merge pull request #1714 from github/vmg/new-languages
Some new TM powered languages
2014-11-14 20:24:21 +01:00
Vicent Marti
f0bd24f810 DOT was already a thing 2014-11-14 19:20:47 +01:00
Vicent Marti
5969a8b679 More samples 2014-11-14 19:18:43 +01:00
Vicent Marti
6b3ba29558 Reindent 2014-11-14 19:11:11 +01:00
Vicent Marti
f217047ac0 Rename 2014-11-14 19:06:41 +01:00
Vicent Marti
935c852364 Add Dockerfile sample 2014-11-14 19:05:42 +01:00
Vicent Marti
9e28965259 Rename Dockerfile 2014-11-14 19:04:11 +01:00
Vicent Marti
a829f3143a Add DOT sample 2014-11-14 19:04:06 +01:00
Vicent Marti
3fc01d09ce Hah Parrot was already a thing 2014-11-14 19:00:21 +01:00
Vicent Marti
a4ae90e2e9 Add Thrift 2014-11-14 18:58:30 +01:00
Vicent Marti
4928828874 Add Ninja 2014-11-14 18:56:34 +01:00
Vicent Marti
af90ac3758 add Maven buildfiles 2014-11-14 18:54:27 +01:00
Vicent Marti
d4e6798ba8 add Graphviz 2014-11-14 18:48:19 +01:00
Vicent Marti
03b250990d Add Cap'n Proto 2014-11-14 18:46:16 +01:00
Vicent Marti
5bc0ce0888 Add Bison 2014-11-14 18:44:12 +01:00
Vicent Marti
a0bbf7df6f Add Ant 2014-11-14 18:41:36 +01:00
Vicent Marti
6b90f22cef Add Parrot IR 2014-11-14 18:37:54 +01:00
Vicent Marti
d290576543 Add Docker Files as a language 2014-11-14 18:16:51 +01:00
Vicent Marti
75871e52ea Merge pull request #1707 from github/vmg/lol-pygments
Remove the Pygments dependency
2014-11-14 17:39:51 +01:00
Vicent Marti
b40459335b ...actually... This is 4.0.0 because of breaking changes 2014-11-14 17:38:39 +01:00
Vicent Marti
51b16ca965 oops 2014-11-14 17:37:12 +01:00
Vicent Marti
5dafa937de Remove lexers from languages.yml 2014-11-14 17:37:12 +01:00
Vicent Marti
2307c2e9fc Bump version to 3.6.0 2014-11-14 17:37:12 +01:00
Vicent Marti
d12aff9776 Unused test 2014-11-14 17:37:12 +01:00
Vicent Marti
fcd26da282 Remove outdated gems 2014-11-14 17:37:12 +01:00
Vicent Marti
4a10b27611 Remove Pygments 2014-11-14 17:37:12 +01:00
Vicent Marti
201fe54b0c Merge pull request #1710 from github/grammars
Add github-linguist-grammars gem
2014-11-14 16:12:22 +01:00
Adam Roben
1618a3b02a Use the original Kotlin package instead of a fork
The fork is identical to the original.
2014-11-13 14:26:06 -05:00
Adam Roben
3be97ccaa3 Update SCSS bundle location
The old URL redirects to this one.
2014-11-13 14:24:47 -05:00
Adam Roben
879e4977e4 Handle includes like source.c#block 2014-11-13 13:45:02 -05:00
Adam Roben
613b71719f Add back some accidentally pruned grammars
A bug in the prune-grammars script caused these to be removed.
2014-11-13 13:42:36 -05:00
Adam Roben
2870f6d038 Prune unused grammars
script/prune-grammars will remove any grammars that aren't needed from
grammars.yml.
2014-11-13 13:16:24 -05:00
Adam Roben
046fb18980 Add github-linguist-grammars gem
The purpose of this gem is to package up the language grammars that are
used for syntax highlighting on github.com. The grammars are TextMate,
Sublime Text, or Atom language grammars, converted to JSON and given the
filename SCOPE.json, where SCOPE is the language scope that the grammar
defines.

The github-linguist-grammars gem packages up all the grammars, and also
exports a Linguist::Grammars.path method to locate the directory
containing the grammars.

To build the gem, simply run `rake build_grammars_gem`. The grammars.yml
file lists all the repositories we download grammars from, as well as
which scopes are defined by each repository. The
script/download-grammars script takes that list and downloads and
processes the grammars into the format expected by the gem.
2014-11-13 11:03:53 -05:00
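
A small usage sketch of the gem this commit describes, leaning only on the `Linguist::Grammars.path` method and the `SCOPE.json` naming convention named in the message (the key layout inside each converted grammar is whatever the upstream bundle defines):

```ruby
# Load one converted grammar from the github-linguist-grammars gem by scope.
require "json"
require "linguist/grammars"

def load_grammar(scope)
  file = File.join(Linguist::Grammars.path, "#{scope}.json")
  File.exist?(file) ? JSON.parse(File.read(file)) : nil
end

grammar = load_grammar("source.ruby")
p grammar && grammar.keys # top-level keys of the converted TextMate grammar
```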
Brandon Keepers
d133d9eccb Merge pull request #1709 from github/emacs-lisp-assertion
Add assertion for Emacs Lisp
2014-11-13 10:47:52 -05:00
Brandon Keepers
296473507f Add assert for Emacs lisp
/cc https://github.com/github/linguist/pull/1499
2014-11-13 10:40:58 -05:00
Adam Roben
ff8821080a Merge pull request #1708 from github/fortran-modern
Use source.fortran.modern TM scope for FORTRAN
2014-11-13 09:53:33 -05:00
Adam Roben
9acf41b0fe Use source.fortran.modern TM scope for FORTRAN
This is technically only for FORTRAN 90 and newer, but seems to do just fine with older variants.
2014-11-13 09:52:08 -05:00
Paul Chaignon
9c64f72f35 Add .mm as an XML extension with heuristic rule 2014-11-12 19:38:54 -05:00
Adam Roben
9385e70d2d Merge pull request #1705 from github/cut-release-v3.5.2
Bump to version v3.5.2
2014-11-12 13:51:59 -05:00
Adam Roben
9469e188c8 Bump to version v3.5.2 2014-11-12 13:39:05 -05:00
Vicent Marti
6e57ca6fbc Update the TM scope for the Zephir language 2014-11-12 18:19:10 +01:00
Adam Roben
d5e3ebaef3 Merge pull request #1704 from github/gas-tmscope
Add a tm_scope for GAS
2014-11-12 12:08:14 -05:00
Adam Roben
a9eac8a832 Add a tm_scope for GAS
The source.asm.x86 grammar does a decent job of parsing this.
2014-11-12 12:07:23 -05:00
Adam Roben
1c7f5368cf Merge pull request #1703 from github/less-tmscope
Fix the tm_scope for Less
2014-11-12 11:45:01 -05:00
Adam Roben
960ff73c7f Fix the tm_scope for Less
The source.css.less grammar actually understands Less syntax.
2014-11-12 11:43:52 -05:00
Sebastian Godelet
95777055d1 languages.yml: added an interpreter entry to Mercury section 2014-11-11 23:28:07 +08:00
Brandon Keepers
e1ce30c3ce Merge pull request #1653 from baroquebobcat/patch-1
add pants BUILD file highlighting to languages.yml
2014-11-11 01:39:55 -05:00
Brandon Keepers
89b442c751 Merge pull request #1657 from techniq/patch-1
Add .NET config files as XML
2014-11-11 01:39:19 -05:00
Adam Roben
6b41059cdf Merge pull request #1696 from github/cut-release-v3.5.1
Bump to 3.5.1
2014-11-10 15:19:28 -05:00
Adam Roben
62cb42eee5 Bump to 3.5.1 2014-11-10 15:15:15 -05:00
Adam Roben
6bbb56db00 Merge pull request #1695 from github/nil-safety
Make it safe to pass nil to Language.find_by_name/alias again
2014-11-10 15:13:28 -05:00
Adam Roben
160598b9ef Make it safe to pass nil to Language.find_by_name/alias again
This restores compatibility with v3.4.x.
2014-11-10 15:12:29 -05:00
Brandon Keepers
33d75d9623 Tests for BlobHelper#empty? 2014-11-06 15:14:03 -06:00
Brandon Keepers
a0cc2c4c86 remove reference to empty.js 2014-11-06 14:59:34 -06:00
Brandon Keepers
754bc4ef6d Remove more empty samples 2014-11-06 14:56:19 -06:00
Brandon Keepers
df55043500 Bail earlier if the file is empty.
This will change behavior for empty files with unique extensions, returning nil instead of the language.
2014-11-06 14:49:24 -06:00
Arfon Smith
f22524a615 Moving comments 2014-11-06 14:27:49 -06:00
Arfon Smith
1831390429 Use heuristics earlier to inform the rest of the classification process 2014-11-06 14:09:19 -06:00
Arfon Smith
f4c7661cc6 Removing inconsistency of find_by_heuristics (was sometimes returning nil and sometimes returning an empty array) 2014-11-06 14:08:42 -06:00
Arfon Smith
0ab88919c9 Removing unused array of candidate languages. 2014-11-06 13:31:34 -06:00
Arfon Smith
9107d3c243 Reworking most heuristics to only return one match 2014-11-06 13:26:40 -06:00
Blake Embrey
42e9131b4f Add RAML support 2014-11-06 11:47:00 -06:00
Nick Howard
729a174eb6 add pants BUILD file highlighting to languages.yml
The Pants build tool uses Python files named BUILD. This adds highlighting for them.
2014-11-03 12:11:14 -07:00
Brandon Keepers
74fa4b9b75 docs 2014-11-03 08:54:11 -05:00
Sean Lynch
87df17309c Fix package.config to packages.config 2014-11-03 08:35:14 -05:00
Brandon Keepers
815337299a Extract empty blob strategy 2014-11-03 08:21:46 -05:00
Brandon Keepers
fd32938cd8 Extract strategies for detecting the language 2014-11-03 08:17:02 -05:00
Brandon Keepers
8d7b4f81b4 Extract filename strategy 2014-11-02 22:15:52 -05:00
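
The strategy extraction in these commits, together with "Try strategies until one language is returned" earlier in the log, boils down to trying a list of callables in order until one settles on a language. A generic sketch of that shape, with made-up strategy names rather than the actual classes:

```ruby
# Generic sketch of the strategy chain described in the commits above.
# Each strategy returns a language name, :no_language to stop early, or
# nil to pass the decision on to the next strategy.
EMPTY_BLOB = ->(name, data) { :no_language if data.strip.empty? }
FILENAME   = ->(name, data) { "Makefile" if File.basename(name) == "Makefile" }
SHEBANG    = ->(name, data) { "Ruby" if data.start_with?("#!/usr/bin/env ruby") }

STRATEGIES = [EMPTY_BLOB, FILENAME, SHEBANG]

def detect(name, data)
  STRATEGIES.each do |strategy|
    result = strategy.call(name, data)
    return (result == :no_language ? nil : result) if result
  end
  nil
end

p detect("Makefile", "all:\n\ttrue")        # => "Makefile"
p detect("script", "#!/usr/bin/env ruby\n") # => "Ruby"
p detect("empty.txt", "")                   # => nil
```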
Sean Lynch
b5cacbba9f Add .NET config files as XML 2014-11-02 10:13:52 -05:00
Arfon Smith
2bc546eadf Merge branch 'master' into 1233-local
Conflicts:
	lib/linguist/language.rb
2014-11-01 10:05:45 -05:00
Arfon Smith
9e50e188a8 Merge branch 'master' into 1233-local
Conflicts:
	lib/linguist/language.rb
	lib/linguist/languages.yml
	lib/linguist/samples.json
2014-11-01 10:04:22 -05:00
Arfon Smith
0c05a6c3ac Merge branch 'master' into 1036-local 2014-10-29 20:06:40 -05:00
Arfon Smith
d4d6ef314d Merge branch 'master' into 1036-local 2014-10-28 19:14:43 -05:00
Arfon Smith
322b21e0d0 Updating regexes 2014-10-28 19:14:32 -05:00
Arfon Smith
32de8a4d19 Only exact matches 2014-10-23 13:59:36 +01:00
Arfon Smith
cf9998f3e4 Merge branch 'master' into 1036-local 2014-10-23 12:16:51 +01:00
Arfon Smith
89320b1ca4 Merge branch 'master' into 1036-local
Conflicts:
	lib/linguist/heuristics.rb
	lib/linguist/samples.json
2014-10-23 12:05:18 +01:00
Arfon Smith
12b78c5357 Removing pry runtime dependency 2014-09-18 13:22:02 -05:00
Arfon Smith
e70cd33323 Moving to fixtures 2014-09-17 08:37:00 -05:00
Arfon Smith
302af86363 Merge branch 'master' into 1233-local
Conflicts:
	lib/linguist/language.rb
	lib/linguist/samples.json
2014-09-16 16:36:10 -05:00
Rachel Mant
44eebde394 Added @property and @end as an Obj-C heuristic for issue #1344 2014-09-03 18:33:24 +01:00
DX-MON
498c102414 Fixed the tests that broke, but this may have re-broken a couple of repositories - I can't yet tell 2014-09-03 18:14:15 +01:00
DX-MON
79cd77454b Merge remote-tracking branch 'source/master' 2014-09-03 18:09:52 +01:00
Michael Johnson
410aace222 Adding Google Apps Script (.gs) as a JavaScript extension. 2014-08-24 17:00:37 -04:00
dx-mon
dad4b974b7 Merge branch 'master' of https://github.com/github/linguist 2014-07-14 22:08:16 +01:00
DX-MON
f1cb16648f Added samples for .C and .H files to fix the pedantic tests for sample presence 2014-06-27 12:49:11 +01:00
DX-MON
1276f10b67 Fixed languages.yml so the pedantic test on extension ordering passes 2014-06-26 23:04:10 +01:00
DX-MON
c3da262bd0 Merge branch 'master' of https://github.com/github/linguist 2014-06-26 22:57:07 +01:00
Paul Chaignon
2143699aab Language detection test for non-sample files 2014-06-14 11:53:45 +02:00
Paul Chaignon
b1c2820299 Merge conflicts from master fixed 2014-06-13 18:50:32 +02:00
DX-MON
624fd74f83 Added C header samples from https://github.com/MiJyn/nightmare/ to fix the misclassifications as C++ that were occurring 2014-06-10 18:54:10 +01:00
DX-MON
cd878522d9 Added the GLKit GLKMatrix4 header as a C sample as this fixes 5 misclassifications - 4 as Obj-C and one as C++ 2014-06-10 18:51:29 +01:00
DX-MON
10fed43c27 Added .H and .C as C file extensions as well as them already being C++ ones. This fixes #1054 2014-06-10 11:59:11 +01:00
DX-MON
1d50adf87a Added a sample that fixes comment two on issue #1264. 2014-06-10 10:41:00 +01:00
Rachel Mant
614a61b0b0 Update heuristics.rb
Added the iostream headers and std:: to the C++ heuristics. This covers issue 1250.
2014-06-05 10:37:23 +01:00
Paul Chaignon
bd380f44cc Refactoring of Language.detect 2014-06-03 09:52:24 +02:00
Paul Chaignon
8a546d2a7a Try shebang detection if the extension is unknown 2014-06-01 20:00:49 +02:00
Paul Chaignon
1148a9746a Change unknown extension of PHP sample file 2014-06-01 19:59:57 +02:00
Trey Deitch
913cd6c309 Add support for Cool
This change includes a brief (nonsensical) sample program I wrote to
illustrate many of Cool's language constructs, as well as a simple rule
to distinguish Cool files from Common Lisp or OpenCL (it has a line that
starts with the word 'class'). Further, it includes a second example
program adapted from an example contained in the Cool distribution
(list.cl), which contains a few further language constructs and captures
the style of a Cool program.
2014-05-08 13:27:22 -07:00
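
The rule mentioned here is a plain line-anchored keyword check. A standalone sketch of it; the in-tree version in `lib/linguist/heuristics.rb` may be phrased differently:

```ruby
# Sketch of the Cool disambiguation described above: treat a .cl file as
# Cool (rather than Common Lisp or OpenCL) when some line starts with "class".
def cool_source?(source)
  source.lines.any? { |line| line =~ /^class\b/ }
end

p cool_source?("class Main inherits IO {\n  main() : Object { out_string(\"hi\\n\") };\n};") # => true
p cool_source?("(defun square (x) (* x x))")                                                 # => false
```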
DX-MON
492aa12cad Updated the samples database file as recommended 2014-04-03 11:18:36 +01:00
DX-MON
e79e45a74e Removed the matches variable from find_by_heuristics without re-breaking anything 2014-04-02 22:22:22 +01:00
dx-mon
e661470bbb Merge branch 'master' of github.com:DX-MON/linguist 2014-04-02 21:41:13 +01:00
DX-MON
af30a80702 Added two new heuristics tests for the new C/C++/Obj-C heuristics 2014-04-02 21:41:02 +01:00
Rachel Mant
bab7ee4fcb Found my new heuristic was still not being used because heuristics had been switched off 2014-04-02 20:17:33 +01:00
DX-MON
6524ac3588 Fixed the C++ class matching regex that was breaking the test for C/jni_layer.h 2014-04-02 20:08:47 +01:00
DX-MON
c432cd67fc Found out that nothing was ever getting returned from the heuristic function "find_by_heuristics", and that headers matching C, Obj-C and C++ were never getting checked heuristically 2014-04-02 19:55:24 +01:00
DX-MON
5c071a2e07 More regex goodness to improve the detection of C++ vs C 2014-04-02 19:48:44 +01:00
DX-MON
cb10c53dee Fixed the failing pattern for detecting C++-only headers 2014-04-02 17:57:58 +01:00
Rachel Mant
dfba2a31a5 Added the end statements for the two new if statements
Did not know ends were required on one-liner ifs. Fixed.
2014-04-02 13:44:17 +01:00
Rachel Mant
667f3de26b Improved the Obj-C heuristic with a Regex matching multiple unique keywords
Also improved the C++ heuristic by checking for class without an @ on the front.
2014-04-02 13:09:17 +01:00
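
Taken together with the `@property`/`@end` and `iostream`/`std::` commits a little earlier in this log, these header heuristics amount to keyword sniffing on ambiguous `.h` files. A compressed sketch using only the signals named in those messages; the header list and regexes are illustrative:

```ruby
# Compressed sketch of the C / C++ / Objective-C header disambiguation
# discussed in these commits: Objective-C is flagged by its unique keywords,
# C++ by C++-only includes, "std::", or a class keyword without a leading "@";
# anything else stays plain C.
def header_language(source)
  if source =~ /^\s*(@interface|@property|@end)\b/
    "Objective-C"
  elsif source =~ /^\s*#\s*include\s*<(iostream|istream|ostream)>/ ||
        source =~ /\bstd::/ ||
        source =~ /^\s*class\s+\w+/
    "C++"
  else
    "C"
  end
end

p header_language("@interface Foo : NSObject\n@property int bar;\n@end") # => "Objective-C"
p header_language("#include <iostream>\nint f();")                       # => "C++"
p header_language("#include <stdio.h>\nint f(void);")                    # => "C"
```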
Rachel Mant
fd585beb07 Improved the C++ heuristic for detecting based on included headers 2014-04-02 12:55:29 +01:00
115 changed files with 11818 additions and 836 deletions

.gitignore (vendored, 4 changes)

@@ -1,4 +1,6 @@
-Gemfile.lock
+/Gemfile.lock
 .bundle/
 benchmark/
 lib/linguist/samples.json
+/grammars
+/node_modules

.travis.yml

@@ -2,6 +2,7 @@ before_install:
 - git fetch origin master:master
 - git fetch origin v2.0.0:v2.0.0
 - git fetch origin test/attributes:test/attributes
+- git fetch origin test/master:test/master
 - sudo apt-get install libicu-dev -y
 rvm:
 - 1.9.3

CONTRIBUTING.md (new file, 31 additions)

@@ -0,0 +1,31 @@
## Contributing
The majority of contributions won't need to touch any Ruby code at all. The [master language list][languages] is just a YAML configuration file.
Almost all bug fixes or new language additions should come with some additional code samples. Just drop them under [`samples/`][samples] in the correct subdirectory and our test suite will automatically test them. In most cases you shouldn't need to add any new assertions.
### My code is detected as the wrong language
This can usually be solved either by adding a new filename or file name extension to the language's entry in [`languages.yml`][languages] or adding more [samples][samples] for your language to the repository to make Linguist's classifier smarter.
### Syntax highlighting looks wrong
Assuming your code is being detected as the right language (see above), in most cases this is due to a bug in the language grammar rather than a bug in Linguist. [`grammars.yml`][grammars] lists all the grammars we use for syntax highlighting on github.com. Find the one corresponding to your code's programming language and submit a bug report upstream.
You can also try to fix the bug yourself and submit a Pull Request. [This piece from TextMate's documentation](http://manual.macromates.com/en/language_grammars) offers a good introduction on how to work with TextMate-compatible grammars. You can test grammars using [Lightshow](https://lightshow.githubapp.com).
Once the bug has been fixed upstream, please let us know and we'll pick it up for GitHub.
### I want to add support for the `X` programming language
Great! You'll need to:
0. Add an entry for your language to [`languages.yml`][languages].
0. Add a grammar for your language to [`grammars.yml`][grammars] by running `script/download-grammars --add URL`. Please only add grammars that have a license that permits redistribution.
0. Add samples for your language to the [samples directory][samples].
We try only to add languages once they have some usage on GitHub, so please note in-the-wild usage examples in your pull request. In most cases we prefer that languages already be in use in hundreds of repositories before supporting them in Linguist.
[grammars]: /grammars.yml
[languages]: /lib/linguist/languages.yml
[samples]: /samples

Gemfile

@@ -1,3 +1,5 @@
 source 'https://rubygems.org'
-gemspec
+gemspec :name => "github-linguist"
+gemspec :name => "github-linguist-grammars"
 gem 'test-unit', require: false if RUBY_VERSION >= '2.2'
+gem 'byebug' if RUBY_VERSION >= '2.0'

README.md

@@ -1,12 +1,14 @@
 # Linguist
-We use this library at GitHub to detect blob languages, highlight code, ignore binary files, suppress generated files in diffs, and generate language breakdown graphs.
+We use this library at GitHub to detect blob languages, ignore binary files, suppress generated files in diffs, and generate language breakdown graphs.
+Tips for filing issues and creating pull requests can be found in [`CONTRIBUTING.md`](/CONTRIBUTING.md).
 ## Features
 ### Language detection
-Linguist defines a list of all languages known to GitHub in a [yaml file](https://github.com/github/linguist/blob/master/lib/linguist/languages.yml). In order for a file to be highlighted, a language and a lexer must be defined there.
+Linguist defines a list of all languages known to GitHub in a [yaml file](https://github.com/github/linguist/blob/master/lib/linguist/languages.yml).
 Most languages are detected by their file extension. For disambiguating between files with common extensions, we first apply some common-sense heuristics to pick out obvious languages. After that, we use a
 [statistical
@@ -24,7 +26,9 @@ See [lib/linguist/language.rb](https://github.com/github/linguist/blob/master/li
 ### Syntax Highlighting
-The actual syntax highlighting is handled by our Pygments wrapper, [pygments.rb](https://github.com/tmm1/pygments.rb). It also provides a [Lexer abstraction](https://github.com/tmm1/pygments.rb/blob/master/lib/pygments/lexer.rb) that determines which highlighter should be used on a file.
+Syntax highlighting in GitHub is performed using TextMate-compatible grammars. These are the same grammars that TextMate, Sublime Text and Atom use.
+Every language in `languages.yml` is mapped to its corresponding TM `scope`. This scope will be used when picking up a grammar for highlighting. **When adding a new language to Linguist, please add its corresponding scope too (assuming there's an existing TextMate bundle, Sublime Text package, or Atom package) so syntax highlighting works for it**.
 ### Stats
@@ -143,14 +147,6 @@ To run the tests:
 bundle exec rake test
-## Contributing
-The majority of contributions won't need to touch any Ruby code at all. The [master language list](https://github.com/github/linguist/blob/master/lib/linguist/languages.yml) is just a YAML configuration file.
-We try to only add languages once they have some usage on GitHub, so please note in-the-wild usage examples in your pull request.
-Almost all bug fixes or new language additions should come with some additional code samples. Just drop them under [`samples/`](https://github.com/github/linguist/tree/master/samples) in the correct subdirectory and our test suite will automatically test them. In most cases you shouldn't need to add any new assertions.
 ### A note on language extensions
 Linguist has a number of methods available to it for identifying the language of a particular file. The initial lookup is based upon the extension of the file, possible file extensions are defined in an array called `extensions`. Take a look at this example for example for `Perl`:

Rakefile

@@ -31,6 +31,12 @@ task :build_gem => :samples do
   File.delete("lib/linguist/languages.json")
 end
+task :build_grammars_gem do
+  rm_rf "grammars"
+  sh "script/download-grammars"
+  sh "gem", "build", "github-linguist-grammars.gemspec"
+end
 namespace :benchmark do
   benchmark_path = "benchmark/results"

github-linguist-grammars.gemspec (new file)

@@ -0,0 +1,14 @@
require File.expand_path('../lib/linguist/version', __FILE__)
Gem::Specification.new do |s|
s.name = 'github-linguist-grammars'
s.version = Linguist::VERSION
s.summary = "Language grammars for use with github-linguist"
s.authors = "GitHub"
s.homepage = "https://github.com/github/linguist"
s.files = ['lib/linguist/grammars.rb'] + Dir['grammars/*']
s.add_development_dependency 'plist', '~>3.1'
end

github-linguist.gemspec

@@ -10,14 +10,13 @@ Gem::Specification.new do |s|
   s.homepage = "https://github.com/github/linguist"
   s.license = "MIT"
-  s.files = Dir['lib/**/*']
+  s.files = Dir['lib/**/*'] - ['lib/linguist/grammars.rb']
   s.executables << 'linguist'
   s.add_dependency 'charlock_holmes', '~> 0.7.3'
   s.add_dependency 'escape_utils', '~> 1.0.1'
-  s.add_dependency 'mime-types', '~> 1.19'
-  s.add_dependency 'pygments.rb', '~> 0.6.0'
-  s.add_dependency 'rugged', '~> 0.21.1b2'
+  s.add_dependency 'mime-types', '>= 1.19'
+  s.add_dependency 'rugged', '~> 0.22.0b4'
   s.add_development_dependency 'mocha'
   s.add_development_dependency 'pry'

grammars.yml (new file, 422 additions)

@@ -0,0 +1,422 @@
---
http://svn.edgewall.org/repos/genshi/contrib/textmate/Genshi.tmbundle/Syntaxes/Markup%20Template%20%28XML%29.tmLanguage:
- text.xml.genshi
http://svn.textmate.org/trunk/Review/Bundles/BlitzMax.tmbundle:
- source.blitzmax
http://svn.textmate.org/trunk/Review/Bundles/Cython.tmbundle:
- source.cython
http://svn.textmate.org/trunk/Review/Bundles/Forth.tmbundle:
- source.forth
http://svn.textmate.org/trunk/Review/Bundles/Parrot.tmbundle:
- source.parrot.pir
http://svn.textmate.org/trunk/Review/Bundles/Ruby%20Sass.tmbundle:
- source.sass
http://svn.textmate.org/trunk/Review/Bundles/SecondLife%20LSL.tmbundle:
- source.lsl
http://svn.textmate.org/trunk/Review/Bundles/VHDL.tmbundle:
- source.vhdl
http://svn.textmate.org/trunk/Review/Bundles/XQuery.tmbundle:
- source.xquery
https://bitbucket.org/Clams/sublimesystemverilog/get/default.tar.gz:
- source.systemverilog
- source.ucfconstraints
https://bitbucket.org/bitlang/sublime_cobol/raw/b0e9c44ac5f7a2fb553421aa986b35854cbfda4a/COBOL.tmLanguage:
- source.cobol
https://fan.googlecode.com/hg-history/Build%201.0.55/adm/tools/textmate/Fan.tmbundle/Syntaxes/Fan.tmLanguage:
- source.fan
https://github.com/AlanQuatermain/go-tmbundle:
- source.go
https://github.com/Anomareh/PHP-Twig.tmbundle:
- text.html.twig
https://github.com/Cirru/sublime-cirru/raw/master/Cirru.tmLanguage:
- source.cirru
https://github.com/Cykey/Sublime-Logos:
- source.logos
https://github.com/Drako/SublimeBrainfuck/raw/master/Brainfuck.tmLanguage:
- source.bf
https://github.com/JohnNilsson/awk-sublime/raw/master/AWK.tmLanguage:
- source.awk
https://github.com/JonBons/Sublime-SQF-Language:
- source.sqf
https://github.com/MarioRicalde/SCSS.tmbundle:
- source.scss
https://github.com/Oldes/Sublime-REBOL:
- source.rebol
https://github.com/PogiNate/Sublime-Inform:
- source.Inform7
https://github.com/Red-Nova-Technologies/autoitv3-tmbundle:
- source.autoit.3
https://github.com/SalGnt/Sublime-VimL:
- source.viml
https://github.com/Shammah/boo-sublime/raw/master/Boo.tmLanguage:
- source.boo
https://github.com/SublimeText/ColdFusion:
- source.cfscript
- source.cfscript.cfc
- text.cfml.basic
- text.html.cfm
https://github.com/SublimeText/NSIS:
- source.nsis
https://github.com/Varriount/NimLime:
- source.nimrod
- source.nimrod_filter
- source.nimrodcfg
https://github.com/alkemist/gradle.tmbundle:
- source.groovy.gradle
https://github.com/ambethia/Sublime-Loom:
- source.loomscript
https://github.com/angryant0007/VBDotNetSyntax:
- source.vbnet
https://github.com/anunayk/cool-tmbundle:
- source.cool
https://github.com/aroben/ada.tmbundle/raw/c45eed4d5f98fe3bcbbffbb9e436601ab5bbde4b/Syntaxes/Ada.plist:
- source.ada
https://github.com/aroben/ruby.tmbundle@4636a3023153c3034eb6ffc613899ba9cf33b41f:
- source.ruby
- text.html.erb
https://github.com/asbjornenge/Docker.tmbundle:
- source.dockerfile
https://github.com/atom/language-clojure:
- source.clojure
https://github.com/atom/language-coffee-script:
- source.coffee
- source.litcoffee
https://github.com/atom/language-csharp:
- source.cs
- source.csx
- source.nant-build
https://github.com/atom/language-javascript:
- source.js
- source.js.regexp
https://github.com/atom/language-python:
- source.python
- source.regexp.python
- text.python.traceback
https://github.com/atom/language-shellscript:
- source.shell
- text.shell-session
https://github.com/austinwagner/sublime-sourcepawn:
- source.sp
https://github.com/bfad/Sublime-Lasso:
- file.lasso
https://github.com/bholt/chapel-tmbundle:
- source.chapel
https://github.com/brandonwamboldt/sublime-nginx:
- source.nginx
https://github.com/bro/bro-sublime:
- source.bro
https://github.com/carsonoid/sublime_man_page_support/raw/master/man-groff.tmLanguage:
- text.groff
https://github.com/ccreutzig/sublime-MuPAD:
- source.mupad
https://github.com/cdwilson/nesC.tmbundle:
- source.nesc
https://github.com/christophevg/racket-tmbundle:
- source.racket
https://github.com/clemos/haxe-sublime-bundle:
- source.erazor
- source.haxe.2
- source.hss.1
- source.hxml
- source.nmml
https://github.com/cucumber/cucumber-tmbundle:
- source.ruby.rspec.cucumber.steps
- text.gherkin.feature
https://github.com/daaain/Handlebars/raw/master/Handlebars.tmLanguage:
- text.html.handlebars
https://github.com/davidpeckham/powershell.tmbundle:
- source.powershell
https://github.com/davidrios/jade-tmbundle:
- source.jade
- source.pyjade
https://github.com/elixir-lang/elixir-tmbundle:
- source.elixir
- text.elixir
- text.html.elixir
https://github.com/ericzou/ebundles/raw/master/Bundles/MSDOS%20batch%20file.tmbundle/Syntaxes/MSDOS%20batch%20file.tmLanguage:
- source.dosbatch
https://github.com/euler0/sublime-glsl/raw/master/GLSL.tmLanguage:
- source.glsl
https://github.com/fancy-lang/fancy-tmbundle:
- source.fancy
https://github.com/fsharp/fsharpbinding:
- source.fsharp
https://github.com/gingerbeardman/monkey.tmbundle:
- source.monkey
https://github.com/guillermooo/dart-sublime-bundle/raw/master/Dart.tmLanguage:
- source.dart
https://github.com/harrism/sublimetext-cuda-cpp/raw/master/cuda-c%2B%2B.tmLanguage:
- source.cuda-c++
https://github.com/hww3/pike-textmate:
- source.pike
https://github.com/jeancharles-roger/ceylon-sublimetext/raw/master/Ceylon.tmLanguage:
- source.ceylon
https://github.com/jfairbank/Sublime-Text-2-OpenEdge-ABL:
- source.abl
https://github.com/jhasse/sublime-rust:
- source.rust
https://github.com/johanasplund/sublime-befunge/raw/master/Befunge-93.tmLanguage:
- source.befunge
https://github.com/joshaven/RDoc.tmbundle:
- text.rdoc
https://github.com/jpcamara/Textmate-Gosu-Bundle/raw/master/Gosu.tmbundle/Syntaxes/Gosu.tmLanguage:
- source.gosu.2
https://github.com/kswedberg/jquery-tmbundle:
- source.js.jquery
https://github.com/laughedelic/sublime-idris/raw/master/Idris.tmLanguage:
- source.idris
https://github.com/lavrton/sublime-better-typescript:
- source.ts
https://github.com/leafo/moonscript-tmbundle:
- source.moonscript
https://github.com/lsf37/Isabelle.tmbundle:
- source.isabelle.theory
https://github.com/lunixbochs/x86-assembly-textmate-bundle:
- source.asm.x86
https://github.com/macekond/Alloy.tmbundle:
- source.alloy
https://github.com/mads379/opa.tmbundle:
- source.opa
https://github.com/mads379/scala.tmbundle:
- source.sbt
- source.scala
https://github.com/marconi/mako-tmbundle:
- text.html.mako
https://github.com/mattfoster/gnuplot-tmbundle:
- source.gnuplot
https://github.com/mgalloy/idl.tmbundle:
- source.idl
- source.idl-dlm
- text.idl-idldoc
https://github.com/michaeledgar/protobuf-tmbundle:
- source.protobuf
https://github.com/mkolosick/Sublime-Coq/raw/master/Coq.tmLanguage:
- source.coq
https://github.com/mokus0/Agda.tmbundle:
- source.agda
https://github.com/nanoant/Julia.tmbundle:
- source.julia
https://github.com/nanoant/assembly.tmbundle/raw/master/Syntaxes/objdump%20C%2B%2B.tmLanguage:
- objdump.x86asm
https://github.com/nilium/ooc.tmbundle:
- source.ooc
https://github.com/paulmillr/LiveScript.tmbundle:
- source.livescript
https://github.com/pferruggiaro/sublime-tea:
- source.tea
https://github.com/puppet-textmate-bundle/puppet-textmate-bundle:
- source.puppet
https://github.com/pvl/abap.tmbundle:
- source.abap
https://github.com/scalate/Scalate.tmbundle:
- source.scaml
- text.html.ssp
https://github.com/shadanan/mathematica-tmbundle:
- source.mathematica
https://github.com/shellderp/sublime-robot-plugin:
- text.robot
https://github.com/simongregory/actionscript3-tmbundle:
- source.actionscript.3
- text.html.asdoc
- text.xml.flex-config
https://github.com/skozlovf/Sublime-QML:
- source.qml
https://github.com/slash-lang/Slash.tmbundle:
- text.html.slash
https://github.com/slavapestov/factor/raw/master/misc/Factor.tmbundle/Syntaxes/Factor.tmLanguage:
- source.factor
https://github.com/slim-template/ruby-slim.tmbundle:
- text.slim
https://github.com/staltz/SublimeXtend:
- source.xtend
https://github.com/statatmbundle/Stata.tmbundle:
- source.mata
- source.stata
https://github.com/technosophos/Vala-TMBundle:
- source.vala
https://github.com/textmate/ant.tmbundle:
- text.xml.ant
https://github.com/textmate/antlr.tmbundle:
- source.antlr
https://github.com/textmate/apache.tmbundle:
- source.apache-config
- source.apache-config.mod_perl
https://github.com/textmate/applescript.tmbundle:
- source.applescript
https://github.com/textmate/asp.tmbundle:
- source.asp
- text.html.asp
https://github.com/textmate/bison.tmbundle:
- source.bison
https://github.com/textmate/c.tmbundle:
- source.c
- source.c++
- source.c.platform
https://github.com/textmate/capnproto.tmbundle:
- source.capnp
https://github.com/textmate/cmake.tmbundle:
- source.cache.cmake
- source.cmake
https://github.com/textmate/cpp-qt.tmbundle:
- source.c++.qt
- source.qmake
https://github.com/textmate/css.tmbundle:
- source.css
https://github.com/textmate/d.tmbundle:
- source.d
https://github.com/textmate/diff.tmbundle:
- source.diff
https://github.com/textmate/dylan.tmbundle:
- source.dylan
- source.lid
- source.makegen
https://github.com/textmate/eiffel.tmbundle:
- source.eiffel
https://github.com/textmate/erlang.tmbundle:
- source.erlang
- text.html.erlang.yaws
https://github.com/textmate/fortran.tmbundle:
- source.fortran
- source.fortran.modern
https://github.com/textmate/gettext.tmbundle:
- source.po
https://github.com/textmate/graphviz.tmbundle:
- source.dot
https://github.com/textmate/groovy.tmbundle:
- source.groovy
https://github.com/textmate/haskell.tmbundle:
- source.haskell
- text.tex.latex.haskell
https://github.com/textmate/html.tmbundle:
- text.html.basic
https://github.com/textmate/ini.tmbundle:
- source.ini
https://github.com/textmate/io.tmbundle:
- source.io
https://github.com/textmate/java.tmbundle:
- source.java
- source.java-properties
- text.html.jsp
- text.junit-test-report
https://github.com/textmate/javadoc.tmbundle:
- text.html.javadoc
https://github.com/textmate/javascript-objective-j.tmbundle:
- source.js.objj
https://github.com/textmate/json.tmbundle:
- source.json
https://github.com/textmate/latex.tmbundle:
- text.bibtex
- text.log.latex
- text.tex
- text.tex.latex
- text.tex.latex.beamer
- text.tex.latex.memoir
https://github.com/textmate/less.tmbundle:
- source.css.less
https://github.com/textmate/lilypond.tmbundle:
- source.lilypond
https://github.com/textmate/lisp.tmbundle:
- source.lisp
https://github.com/textmate/logtalk.tmbundle:
- source.logtalk
https://github.com/textmate/lua.tmbundle:
- source.lua
https://github.com/textmate/make.tmbundle:
- source.makefile
https://github.com/textmate/markdown.tmbundle:
- text.html.markdown
https://github.com/textmate/matlab.tmbundle:
- source.matlab
- source.octave
https://github.com/textmate/maven.tmbundle:
- text.xml.pom
https://github.com/textmate/nemerle.tmbundle:
- source.nemerle
https://github.com/textmate/ninja.tmbundle:
- source.ninja
https://github.com/textmate/objective-c.tmbundle:
- source.objc
- source.objc++
- source.objc.platform
- source.strings
https://github.com/textmate/ocaml.tmbundle:
- source.camlp4.ocaml
- source.ocaml
- source.ocamllex
- source.ocamlyacc
https://github.com/textmate/pascal.tmbundle:
- source.pascal
https://github.com/textmate/perl.tmbundle:
- source.perl
https://github.com/textmate/php-smarty.tmbundle:
- source.smarty
https://github.com/textmate/php.tmbundle:
- text.html.php
https://github.com/textmate/postscript.tmbundle:
- source.postscript
https://github.com/textmate/processing.tmbundle:
- source.processing
https://github.com/textmate/prolog.tmbundle:
- source.prolog
https://github.com/textmate/python-django.tmbundle:
- source.python.django
- text.html.django
https://github.com/textmate/r.tmbundle:
- source.r
- text.tex.latex.rd
https://github.com/textmate/restructuredtext.tmbundle:
- text.restructuredtext
https://github.com/textmate/ruby-haml.tmbundle:
- text.haml
https://github.com/textmate/ruby-on-rails-tmbundle:
- source.js.erb.rails
- source.ruby.rails
- source.ruby.rails.rjs
- source.sql.ruby
- text.html.erb.rails
https://github.com/textmate/scheme.tmbundle:
- source.scheme
https://github.com/textmate/scilab.tmbundle:
- source.scilab
https://github.com/textmate/sql.tmbundle:
- source.sql
https://github.com/textmate/standard-ml.tmbundle:
- source.cm
- source.ml
https://github.com/textmate/swift.tmbundle:
- source.swift
https://github.com/textmate/tcl.tmbundle:
- source.tcl
- text.html.tcl
https://github.com/textmate/text.tmbundle:
- text.plain
https://github.com/textmate/textile.tmbundle:
- text.html.textile
https://github.com/textmate/textmate.tmbundle:
- source.regexp.oniguruma
- source.tm-properties
https://github.com/textmate/thrift.tmbundle:
- source.thrift
https://github.com/textmate/toml.tmbundle:
- source.toml
https://github.com/textmate/verilog.tmbundle:
- source.verilog
https://github.com/textmate/xml.tmbundle:
- text.xml
- text.xml.xsl
https://github.com/textmate/yaml.tmbundle:
- source.yaml
https://github.com/tomas-stefano/smalltalk-tmbundle:
- source.smalltalk
https://github.com/vic/ioke-outdated/raw/master/share/TextMate/Ioke.tmbundle/Syntaxes/Ioke.tmLanguage:
- source.ioke
https://github.com/vkostyukov/kotlin-sublime-package:
- source.Kotlin
https://github.com/vmg/zephir-sublime:
- source.php.zephir
https://github.com/whitequark/llvm.tmbundle:
- source.llvm
https://github.com/wmertens/sublime-nix:
- source.nix
https://raw.githubusercontent.com/eregon/oz-tmbundle/master/Syntaxes/Oz.tmLanguage:
- source.oz

View File

@@ -4,4 +4,5 @@ require 'linguist/heuristics'
 require 'linguist/language'
 require 'linguist/repository'
 require 'linguist/samples'
+require 'linguist/shebang'
 require 'linguist/version'

View File

@@ -2,7 +2,6 @@ require 'linguist/generated'
 require 'charlock_holmes'
 require 'escape_utils'
 require 'mime/types'
-require 'pygments'
 require 'yaml'

 module Linguist
@@ -147,6 +146,13 @@ module Linguist
       end
     end

+    # Public: Is the blob empty?
+    #
+    # Return true or false
+    def empty?
+      data.nil? || data == ""
+    end
+
     # Public: Is the blob text?
     #
     # Return true or false
@@ -193,10 +199,6 @@
     # Public: Is the blob safe to colorize?
     #
-    # We use Pygments for syntax highlighting blobs. Pygments
-    # can be too slow for very large blobs or for certain
-    # corner-case blobs.
-    #
     # Return true or false
     def safe_to_colorize?
       !large? && text? && !high_ratio_of_long_lines?
@@ -204,9 +206,6 @@
     # Internal: Does the blob have a ratio of long lines?
     #
-    # These types of files are usually going to make Pygments.rb
-    # angry if we try to colorize them.
-    #
     # Return true or false
     def high_ratio_of_long_lines?
       return false if loc == 0
@@ -314,28 +313,9 @@
       @language ||= Language.detect(self)
     end

-    # Internal: Get the lexer of the blob.
-    #
-    # Returns a Lexer.
-    def lexer
-      language ? language.lexer : Pygments::Lexer.find_by_name('Text only')
-    end
-
     # Internal: Get the TextMate compatible scope for the blob
     def tm_scope
       language && language.tm_scope
     end
-
-    # Public: Highlight syntax of blob
-    #
-    # options - A Hash of options (defaults to {})
-    #
-    # Returns html String
-    def colorize(options = {})
-      return unless safe_to_colorize?
-      options[:options] ||= {}
-      options[:options][:encoding] ||= encoding
-      lexer.highlight(data, options)
-    end
   end
 end
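The new BlobHelper#empty? check is what lets Language.detect (rewritten later in this diff) bail out before running any strategy. A minimal sketch of how it behaves, assuming the github-linguist gem is installed and a readable file exists at the (hypothetical) path:

require 'linguist'

blob = Linguist::FileBlob.new("README.md")   # hypothetical path; any readable file works
blob.empty?              # false for a file with content, true when data is nil or ""
blob.safe_to_colorize?   # still defined above, minus the removed Pygments-specific caveats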

View File

@@ -3,6 +3,25 @@ require 'linguist/tokenizer'
 module Linguist
   # Language bayesian classifier.
   class Classifier
+    # Public: Use the classifier to detect language of the blob.
+    #
+    # blob               - An object that quacks like a blob.
+    # possible_languages - Array of Language objects
+    #
+    # Examples
+    #
+    #   Classifier.call(FileBlob.new("path/to/file"), [
+    #     Language["Ruby"], Language["Python"]
+    #   ])
+    #
+    # Returns an Array of Language objects, most probable first.
+    def self.call(blob, possible_languages)
+      language_names = possible_languages.map(&:name)
+      classify(Samples.cache, blob.data, language_names).map do |name, _|
+        Language[name] # Return the actual Language objects
+      end
+    end
+
     # Public: Train classifier that data is a certain language.
     #
     # db - Hash classifier database object
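Classifier.call above is just a thin wrapper over the existing train!/classify API. A small sketch of that underlying API on a throwaway database, assuming the gem is loaded; the snippets and scores are illustrative, not exact:

require 'linguist'

db = {}
Linguist::Classifier.train!(db, "Ruby",   "def foo\n  puts 'hi'\nend\n")
Linguist::Classifier.train!(db, "Python", "def foo():\n    print('hi')\n")

Linguist::Classifier.classify(db, "puts 'hello world'", ["Ruby", "Python"])
# => [["Ruby", <score>], ["Python", <score>]]  (most probable first)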

View File

@@ -57,14 +57,20 @@ module Linguist
     #
     # Returns a String.
     def extension
-      # File.extname returns nil if the filename is an extension.
-      extension = File.extname(name)
-      basename = File.basename(name)
-
-      # Checks if the filename is an extension.
-      if extension.empty? && basename[0] == "."
-        basename
-      else
-        extension
-      end
+      extensions.last || ""
+    end
+
+    # Public: Return an array of the file extensions
+    #
+    #     >> Linguist::FileBlob.new("app/views/things/index.html.erb").extensions
+    #     => [".html.erb", ".erb"]
+    #
+    # Returns an Array
+    def extensions
+      basename, *segments = File.basename(name).split(".")
+
+      segments.map.with_index do |segment, index|
+        "." + segments[index..-1].join(".")
+      end
     end
   end
 end
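The #extensions method returns every trailing extension chain, longest first, which is what lets a multi-part extension like .html.erb win over plain .erb. A standalone re-implementation of the same computation, for illustration only:

def trailing_extensions(filename)
  basename, *segments = File.basename(filename).split(".")

  segments.map.with_index do |_segment, index|
    "." + segments[index..-1].join(".")
  end
end

trailing_extensions("app/views/things/index.html.erb")  # => [".html.erb", ".erb"]
trailing_extensions("Rakefile")                          # => []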

View File

@@ -51,26 +51,25 @@ module Linguist
     #
     # Return true or false
     def generated?
-      name == 'Gemfile.lock' ||
-        minified_files? ||
+      minified_files? ||
         compiled_coffeescript? ||
         xcode_file? ||
         generated_parser? ||
         generated_net_docfile? ||
         generated_net_designer_file? ||
         generated_postscript? ||
         generated_protocol_buffer? ||
         generated_jni_header? ||
         composer_lock? ||
         node_modules? ||
         godeps? ||
         vcr_cassette? ||
         generated_by_zephir?
     end

     # Internal: Is the blob an Xcode file?
     #
     # Generated if the file extension is an Xcode
     # file extension.
     #
     # Returns true of false.
@@ -265,4 +264,3 @@
       end
     end
   end
 end

lib/linguist/grammars.rb (new file, 13 lines)
View File

@@ -0,0 +1,13 @@
# Note: This file is included in the github-linguist-grammars gem, not the
# github-linguist gem.
module Linguist
  module Grammars
    # Get the path to the directory containing the language grammar JSON files.
    #
    # Returns a String.
    def self.path
      File.expand_path("../../../grammars", __FILE__)
    end
  end
end
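A small sketch of how the grammars gem might be consumed. The assumption that grammars ship as JSON files directly under Grammars.path follows from the comment above; the exact file names are not specified by this diff:

require 'linguist/grammars'

dir = Linguist::Grammars.path
Dir.glob(File.join(dir, "*.json")).first(3)  # peek at a few of the bundled grammar files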

View File

@@ -1,131 +1,160 @@
 module Linguist
   # A collection of simple heuristics that can be used to better analyze languages.
   class Heuristics
-    ACTIVE = true
-
-    # Public: Given an array of String language names,
-    # apply heuristics against the given data and return an array
-    # of matching languages, or nil.
-    #
-    # data      - Array of tokens or String data to analyze.
-    # languages - Array of language name Strings to restrict to.
-    #
-    # Returns an array of Languages or []
-    def self.find_by_heuristics(data, languages)
-      if active?
-        if languages.all? { |l| ["Perl", "Prolog"].include?(l) }
-          result = disambiguate_pl(data, languages)
-        end
-        if languages.all? { |l| ["ECL", "Prolog"].include?(l) }
-          result = disambiguate_ecl(data, languages)
-        end
-        if languages.all? { |l| ["IDL", "Prolog"].include?(l) }
-          result = disambiguate_pro(data, languages)
-        end
-        if languages.all? { |l| ["Common Lisp", "OpenCL"].include?(l) }
-          result = disambiguate_cl(data, languages)
-        end
-        if languages.all? { |l| ["Hack", "PHP"].include?(l) }
-          result = disambiguate_hack(data, languages)
-        end
-        if languages.all? { |l| ["Scala", "SuperCollider"].include?(l) }
-          result = disambiguate_sc(data, languages)
-        end
-        if languages.all? { |l| ["AsciiDoc", "AGS Script"].include?(l) }
-          result = disambiguate_asc(data, languages)
-        end
-        return result
-      end
-    end
-
-    # .h extensions are ambiguous between C, C++, and Objective-C.
-    # We want to shortcut look for Objective-C _and_ now C++ too!
-    #
-    # Returns an array of Languages or []
-    def self.disambiguate_c(data, languages)
-      matches = []
-      matches << Language["Objective-C"] if data.include?("@interface")
-      matches << Language["C++"] if data.include?("#include <cstdint>")
-      matches
-    end
-
-    def self.disambiguate_pl(data, languages)
-      matches = []
-      matches << Language["Prolog"] if data.include?(":-")
-      matches << Language["Perl"] if data.include?("use strict")
-      matches
-    end
-
-    def self.disambiguate_ecl(data, languages)
-      matches = []
-      matches << Language["Prolog"] if data.include?(":-")
-      matches << Language["ECL"] if data.include?(":=")
-      matches
-    end
-
-    def self.disambiguate_pro(data, languages)
-      matches = []
-      if (data.include?(":-"))
-        matches << Language["Prolog"]
-      else
-        matches << Language["IDL"]
-      end
-      matches
-    end
-
-    def self.disambiguate_ts(data, languages)
-      matches = []
-      if (data.include?("</translation>"))
-        matches << Language["XML"]
-      else
-        matches << Language["TypeScript"]
-      end
-      matches
-    end
-
-    def self.disambiguate_cl(data, languages)
-      matches = []
-      matches << Language["Common Lisp"] if data.include?("(defun ")
-      matches << Language["OpenCL"] if /\/\* |\/\/ |^\}/.match(data)
-      matches
-    end
-
-    def self.disambiguate_r(data, languages)
-      matches = []
-      matches << Language["Rebol"] if /\bRebol\b/i.match(data)
-      matches << Language["R"] if data.include?("<-")
-      matches
-    end
-
-    def self.disambiguate_hack(data, languages)
-      matches = []
-      if data.include?("<?hh")
-        matches << Language["Hack"]
-      elsif /<?[^h]/.match(data)
-        matches << Language["PHP"]
-      end
-      matches
-    end
-
-    def self.disambiguate_sc(data, languages)
-      matches = []
-      if (/\^(this|super)\./.match(data) || /^\s*(\+|\*)\s*\w+\s*{/.match(data) || /^\s*~\w+\s*=\./.match(data))
-        matches << Language["SuperCollider"]
-      end
-      if (/^\s*import (scala|java)\./.match(data) || /^\s*val\s+\w+\s*=/.match(data) || /^\s*class\b/.match(data))
-        matches << Language["Scala"]
-      end
-      matches
-    end
-
-    def self.disambiguate_asc(data, languages)
-      matches = []
-      matches << Language["AsciiDoc"] if /^=+(\s|\n)/.match(data)
-      matches
-    end
-
-    def self.active?
-      !!ACTIVE
-    end
+    # Public: Use heuristics to detect language of the blob.
+    #
+    # blob               - An object that quacks like a blob.
+    # possible_languages - Array of Language objects
+    #
+    # Examples
+    #
+    #   Heuristics.call(FileBlob.new("path/to/file"), [
+    #     Language["Ruby"], Language["Python"]
+    #   ])
+    #
+    # Returns an Array of languages, or empty if none matched or were inconclusive.
+    def self.call(blob, languages)
+      data = blob.data
+
+      @heuristics.each do |heuristic|
+        return Array(heuristic.call(data)) if heuristic.matches?(languages)
+      end
+
+      [] # No heuristics matched
+    end
+
+    # Internal: Define a new heuristic.
+    #
+    # languages - String names of languages to disambiguate.
+    # heuristic - Block which takes data as an argument and returns a Language or nil.
+    #
+    # Examples
+    #
+    #     disambiguate "Perl", "Prolog" do |data|
+    #       if data.include?("use strict")
+    #         Language["Perl"]
+    #       elsif data.include?(":-")
+    #         Language["Prolog"]
+    #       end
+    #     end
+    #
+    def self.disambiguate(*languages, &heuristic)
+      @heuristics << new(languages, &heuristic)
+    end
+
+    # Internal: Array of defined heuristics
+    @heuristics = []
+
+    # Internal
+    def initialize(languages, &heuristic)
+      @languages = languages
+      @heuristic = heuristic
+    end
+
+    # Internal: Check if this heuristic matches the candidate languages.
+    def matches?(candidates)
+      candidates.all? { |l| @languages.include?(l.name) }
+    end
+
+    # Internal: Perform the heuristic
+    def call(data)
+      @heuristic.call(data)
+    end
+
+    disambiguate "Objective-C", "C++", "C" do |data|
+      if (/@(interface|class|protocol|property|end|synchronised|selector|implementation)\b/.match(data))
+        Language["Objective-C"]
+      elsif (/^\s*#\s*include <(cstdint|string|vector|map|list|array|bitset|queue|stack|forward_list|unordered_map|unordered_set|(i|o|io)stream)>/.match(data) ||
+             /^\s*template\s*</.match(data) || /^[^@]class\s+\w+/.match(data) || /^[^@](private|public|protected):$/.match(data) || /std::.+$/.match(data))
+        Language["C++"]
+      end
+    end
+
+    disambiguate "Perl", "Perl6", "Prolog" do |data|
+      if data.include?("use v6")
+        Language["Perl6"]
+      elsif data.include?("use strict")
+        Language["Perl"]
+      elsif data.include?(":-")
+        Language["Prolog"]
+      end
+    end
+
+    disambiguate "ECL", "Prolog" do |data|
+      if data.include?(":-")
+        Language["Prolog"]
+      elsif data.include?(":=")
+        Language["ECL"]
+      end
+    end
+
+    disambiguate "IDL", "Prolog" do |data|
+      if data.include?(":-")
+        Language["Prolog"]
+      else
+        Language["IDL"]
+      end
+    end
+
+    disambiguate "Common Lisp", "OpenCL", "Cool" do |data|
+      if data.include?("(defun ")
+        Language["Common Lisp"]
+      elsif /^class/x.match(data)
+        Language["Cool"]
+      elsif /\/\* |\/\/ |^\}/.match(data)
+        Language["OpenCL"]
+      end
+    end
+
+    disambiguate "Hack", "PHP" do |data|
+      if data.include?("<?hh")
+        Language["Hack"]
+      elsif /<?[^h]/.match(data)
+        Language["PHP"]
+      end
+    end
+
+    disambiguate "Scala", "SuperCollider" do |data|
+      if /\^(this|super)\./.match(data) || /^\s*(\+|\*)\s*\w+\s*{/.match(data) || /^\s*~\w+\s*=\./.match(data)
+        Language["SuperCollider"]
+      elsif /^\s*import (scala|java)\./.match(data) || /^\s*val\s+\w+\s*=/.match(data) || /^\s*class\b/.match(data)
+        Language["Scala"]
+      end
+    end
+
+    disambiguate "AsciiDoc", "AGS Script" do |data|
+      Language["AsciiDoc"] if /^=+(\s|\n)/.match(data)
+    end
+
+    disambiguate "FORTRAN", "Forth" do |data|
+      if /^: /.match(data)
+        Language["Forth"]
+      elsif /^([c*][^a-z]| subroutine\s)/i.match(data)
+        Language["FORTRAN"]
+      end
+    end
+
+    disambiguate "F#", "Forth", "GLSL" do |data|
+      if /^(: |new-device)/.match(data)
+        Language["Forth"]
+      elsif /^(#light|import|let|module|namespace|open|type)/.match(data)
+        Language["F#"]
+      elsif /^(#include|#pragma|precision|uniform|varying|void)/.match(data)
+        Language["GLSL"]
+      end
+    end
+
+    disambiguate "Gosu", "JavaScript" do |data|
+      Language["Gosu"] if /^uses java\./.match(data)
+    end
+
+    disambiguate "LoomScript", "LiveScript" do |data|
+      if /^\s*package\s*[\w\.\/\*\s]*\s*{/.match(data)
+        Language["LoomScript"]
+      else
+        Language["LiveScript"]
+      end
+    end
   end
 end
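The new class is both a registry and a tiny DSL. A minimal sketch of exercising it, assuming the gem is loaded; since Heuristics.call only reads #data from the blob, a Struct stands in for a real FileBlob here:

require 'linguist'

SimpleBlob = Struct.new(:data)   # hypothetical stand-in; only #data is needed by Heuristics.call

candidates = [Linguist::Language["Perl"], Linguist::Language["Prolog"]]
Linguist::Heuristics.call(SimpleBlob.new("likes(alice, prolog) :- true.\n"), candidates)
# expected: [Language["Prolog"]], via the Perl/Perl6/Prolog heuristic registered above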

View File

@@ -1,5 +1,4 @@
 require 'escape_utils'
-require 'pygments'
 require 'yaml'
 begin
   require 'yajl'
@@ -11,6 +10,8 @@ require 'linguist/heuristics'
 require 'linguist/samples'
 require 'linguist/file_blob'
 require 'linguist/blob_helper'
+require 'linguist/strategy/filename'
+require 'linguist/shebang'

 module Linguist
   # Language names that are recognizable by GitHub. Defined languages
@@ -92,6 +93,13 @@
       language
     end

+    STRATEGIES = [
+      Linguist::Strategy::Filename,
+      Linguist::Shebang,
+      Linguist::Heuristics,
+      Linguist::Classifier
+    ]
+
     # Public: Detects the Language of the blob.
     #
     # blob - an object that includes the Linguist `BlobHelper` interface;
@@ -99,51 +107,22 @@
     #
     # Returns Language or nil.
     def self.detect(blob)
-      name = blob.name.to_s
-
-      # Check if the blob is possibly binary and bail early; this is a cheap
-      # test that uses the extension name to guess a binary binary mime type.
-      #
-      # We'll perform a more comprehensive test later which actually involves
-      # looking for binary characters in the blob
-      return nil if blob.likely_binary? || blob.binary?
-
-      # A bit of an elegant hack. If the file is executable but extensionless,
-      # append a "magic" extension so it can be classified with other
-      # languages that have shebang scripts.
-      extension = FileBlob.new(name).extension
-      if extension.empty? && blob.mode && (blob.mode.to_i(8) & 05) == 05
-        name += ".script!"
-      end
-
-      # First try to find languages that match based on filename.
-      possible_languages = find_by_filename(name)
-
-      # If there is more than one possible language with that extension (or no
-      # extension at all, in the case of extensionless scripts), we need to continue
-      # our detection work
-      if possible_languages.length > 1
-        data = blob.data
-        possible_language_names = possible_languages.map(&:name)
-
-        # Don't bother with binary contents or an empty file
-        if data.nil? || data == ""
-          nil
-        # Check if there's a shebang line and use that as authoritative
-        elsif (result = find_by_shebang(data)) && !result.empty?
-          result.first
-        # No shebang. Still more work to do. Try to find it with our heuristics.
-        elsif (determined = Heuristics.find_by_heuristics(data, possible_language_names)) && !determined.empty?
-          determined.first
-        # Lastly, fall back to the probabilistic classifier.
-        elsif classified = Classifier.classify(Samples.cache, data, possible_language_names).first
-          # Return the actual Language object based of the string language name (i.e., first element of `#classify`)
-          Language[classified[0]]
-        end
-      else
-        # Simplest and most common case, we can just return the one match based on extension
-        possible_languages.first
-      end
+      # Bail early if the blob is binary or empty.
+      return nil if blob.likely_binary? || blob.binary? || blob.empty?
+
+      # Call each strategy until one candidate is returned.
+      STRATEGIES.reduce([]) do |languages, strategy|
+        candidates = strategy.call(blob, languages)
+        if candidates.size == 1
+          return candidates.first
+        elsif candidates.size > 1
+          # More than one candidate was found, pass them to the next strategy.
+          candidates
+        else
+          # No candiates were found, pass on languages from the previous strategy.
+          languages
+        end
+      end.first
     end

     # Public: Get all Languages
@@ -164,7 +143,7 @@
     #
     # Returns the Language or nil if none was found.
     def self.find_by_name(name)
-      @name_index[name.downcase]
+      name && @name_index[name.downcase]
     end

     # Public: Look up Language by one of its aliases.
@@ -178,7 +157,7 @@
     #
     # Returns the Lexer or nil if none was found.
     def self.find_by_alias(name)
-      @alias_index[name.downcase]
+      name && @alias_index[name.downcase]
     end

     # Public: Look up Languages by filename.
@@ -193,8 +172,13 @@
     # Returns all matching Languages or [] if none were found.
     def self.find_by_filename(filename)
       basename = File.basename(filename)
-      extname = FileBlob.new(filename).extension
-      (@filename_index[basename] + find_by_extension(extname)).compact.uniq
+
+      # find the first extension with language definitions
+      extname = FileBlob.new(filename).extensions.detect do |e|
+        !@extension_index[e].empty?
+      end
+
+      (@filename_index[basename] + @extension_index[extname]).compact.uniq
     end

     # Public: Look up Languages by file extension.
@@ -215,20 +199,26 @@
       @extension_index[extname]
     end

-    # Public: Look up Languages by shebang line.
+    # DEPRECATED
+    def self.find_by_shebang(data)
+      @interpreter_index[Shebang.interpreter(data)]
+    end
+
+    # Public: Look up Languages by interpreter.
     #
-    # data - Array of tokens or String data to analyze.
+    # interpreter - String of interpreter name
     #
     # Examples
     #
-    #   Language.find_by_shebang("#!/bin/bash\ndate;")
+    #   Language.find_by_interpreter("bash")
     #   # => [#<Language name="Bash">]
     #
     # Returns the matching Language
-    def self.find_by_shebang(data)
-      @interpreter_index[Linguist.interpreter_from_shebang(data)]
+    def self.find_by_interpreter(interpreter)
+      @interpreter_index[interpreter]
     end

     # Public: Look up Language by its name or lexer.
     #
     # name - The String name of the Language
@@ -243,7 +233,7 @@
     #
     # Returns the Language or nil if none was found.
     def self.[](name)
-      @index[name.downcase]
+      name && @index[name.downcase]
     end

     # Public: A List of popular languages
@@ -302,10 +292,7 @@
       # Set aliases
       @aliases = [default_alias_name] + (attributes[:aliases] || [])

-      # Lookup Lexer object
-      @lexer = Pygments::Lexer.find_by_name(attributes[:lexer] || name) ||
-        raise(ArgumentError, "#{@name} is missing lexer")
-
+      # Load the TextMate scope name or try to guess one
       @tm_scope = attributes[:tm_scope] || begin
         context = case @type
         when :data, :markup, :prose
@@ -437,11 +424,6 @@
     # Returns the extensions Array
     attr_reader :filenames

-    # Public: Return all possible extensions for language
-    def all_extensions
-      (extensions + [primary_extension]).uniq
-    end
-
     # Deprecated: Get primary extension
     #
     # Defaults to the first extension but can be overridden
@@ -599,9 +581,9 @@
       :ace_mode     => options['ace_mode'],
       :wrap         => options['wrap'],
       :group_name   => options['group'],
-      :searchable   => options.key?('searchable') ? options['searchable'] : true,
+      :searchable   => options.fetch('searchable', true),
       :search_term  => options['search_term'],
-      :extensions   => [options['extensions'].first] + options['extensions'][1..-1].sort,
+      :extensions   => Array(options['extensions']),
       :interpreters => options['interpreters'].sort,
       :filenames    => options['filenames'],
       :popular      => popular.include?(name)
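The detect rewrite replaces the nested if/elsif chain with a reduce over STRATEGIES: each strategy either decides (one candidate), narrows (several candidates passed on), or abstains (previous candidates passed on). A toy, self-contained reproduction of just that control flow, not linguist code:

toy_strategies = [
  ->(_blob, _langs) { ["Ruby", "Python"] },              # filename-like: two candidates
  ->(_blob, langs)  { langs.select { |l| l == "Ruby" } } # heuristic-like: narrows to one
]

def toy_detect(blob, strategies)
  strategies.reduce([]) do |languages, strategy|
    candidates = strategy.call(blob, languages)
    if candidates.size == 1
      return candidates.first
    elsif candidates.size > 1
      candidates   # narrowed set, hand it to the next strategy
    else
      languages    # inconclusive, keep the previous set
    end
  end.first
end

toy_detect(nil, toy_strategies)  # => "Ruby"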

File diff suppressed because it is too large

View File

@@ -6,6 +6,7 @@ end
 require 'linguist/md5'
 require 'linguist/classifier'
+require 'linguist/shebang'

 module Linguist
   # Model for accessing classifier training data.
@@ -52,14 +53,16 @@
           })
         end
       else
+        path = File.join(dirname, filename)
+
         if File.extname(filename) == ""
-          raise "#{File.join(dirname, filename)} is missing an extension, maybe it belongs in filenames/ subdir"
+          raise "#{path} is missing an extension, maybe it belongs in filenames/ subdir"
         end

         yield({
-          :path        => File.join(dirname, filename),
+          :path        => path,
           :language    => category,
-          :interpreter => File.exist?(filename) ? Linguist.interpreter_from_shebang(File.read(filename)) : nil,
+          :interpreter => Shebang.interpreter(File.read(path)),
           :extname     => File.extname(filename)
         })
       end
@@ -112,40 +115,4 @@
       db
     end
   end

-  # Used to retrieve the interpreter from the shebang line of a file's
-  # data.
-  def self.interpreter_from_shebang(data)
-    lines = data.lines.to_a
-
-    if lines.any? && (match = lines[0].match(/(.+)\n?/)) && (bang = match[0]) =~ /^#!/
-      bang.sub!(/^#! /, '#!')
-      tokens = bang.split(' ')
-      pieces = tokens.first.split('/')
-
-      if pieces.size > 1
-        script = pieces.last
-      else
-        script = pieces.first.sub('#!', '')
-      end
-
-      script = script == 'env' ? tokens[1] : script
-
-      # "python2.6" -> "python"
-      if script =~ /((?:\d+\.?)+)/
-        script.sub! $1, ''
-      end
-
-      # Check for multiline shebang hacks that call `exec`
-      if script == 'sh' &&
-        lines[0...5].any? { |l| l.match(/exec (\w+).+\$0.+\$@/) }
-        script = $1
-      end
-
-      script
-    else
-      nil
-    end
-  end
 end
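Samples.each still yields one hash per sample file; the only change here is that :interpreter is now filled in by Shebang.interpreter. A small sketch, assuming a checkout of the linguist repository (the raw samples/ tree is not necessarily shipped with the gem):

require 'linguist'

with_shebang = 0
Linguist::Samples.each { |sample| with_shebang += 1 if sample[:interpreter] }
with_shebang   # number of sample files that declare an interpreter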

lib/linguist/shebang.rb (new file, 44 lines)
View File

@@ -0,0 +1,44 @@
module Linguist
  class Shebang
    # Public: Use shebang to detect language of the blob.
    #
    # blob - An object that quacks like a blob.
    #
    # Examples
    #
    #   Shebang.call(FileBlob.new("path/to/file"))
    #
    # Returns an Array with one Language if the blob has a shebang with a valid
    # interpreter, or empty if there is no shebang.
    def self.call(blob, _ = nil)
      Language.find_by_interpreter interpreter(blob.data)
    end

    # Public: Get the interpreter from the shebang
    #
    # Returns a String or nil
    def self.interpreter(data)
      lines = data.lines

      return unless match = /^#! ?(.*)$/.match(lines.first)

      tokens = match[1].split(' ')
      script = tokens.first.split('/').last
      script = tokens[1] if script == 'env'

      # If script has an invalid shebang, we might get here
      return unless script

      # "python2.6" -> "python2"
      script.sub! $1, '' if script =~ /(\.\d+)$/

      # Check for multiline shebang hacks that call `exec`
      if script == 'sh' &&
        lines.first(5).any? { |l| l.match(/exec (\w+).+\$0.+\$@/) }
        script = $1
      end

      File.basename(script)
    end
  end
end
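Shebang.interpreter is now the single place interpreter names are normalized (env resolution, version stripping, the sh-exec trick). A few expected results, based on reading the method above:

require 'linguist'

Linguist::Shebang.interpreter("#!/usr/bin/env python2.7\nprint 'hi'\n")  # => "python2"
Linguist::Shebang.interpreter("#!/bin/sh\necho hi\n")                    # => "sh"
Linguist::Shebang.interpreter("no shebang here\n")                       # => nil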

View File

@@ -0,0 +1,20 @@
module Linguist
  module Strategy
    # Detects language based on filename and/or extension
    class Filename
      def self.call(blob, _)
        name = blob.name.to_s

        # A bit of an elegant hack. If the file is executable but extensionless,
        # append a "magic" extension so it can be classified with other
        # languages that have shebang scripts.
        extensions = FileBlob.new(name).extensions
        if extensions.empty? && blob.mode && (blob.mode.to_i(8) & 05) == 05
          name += ".script!"
        end

        Language.find_by_filename(name)
      end
    end
  end
end
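For paths that do have an extension, the strategy only looks at the blob's name, so the file does not even need to exist on disk for a quick check. A minimal sketch, assuming the gem is loaded; the path is hypothetical:

require 'linguist'

blob = Linguist::FileBlob.new("app/models/user.rb")   # hypothetical path, not read from disk here
Linguist::Strategy::Filename.call(blob, [])
# expected: [Language["Ruby"]], via the .rb extension index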

View File

@@ -110,6 +110,12 @@
 # MathJax
 - (^|/)MathJax/

+# Chart.js
+- (^|/)Chart\.js$
+
+# Codemirror
+- (^|/)[Cc]ode[Mm]irror/(lib|mode|theme|addon|keymap)
+
 # SyntaxHighlighter - http://alexgorbatchev.com/
 - (^|/)shBrush([^.]*)\.js$
 - (^|/)shCore\.js$
@@ -226,9 +232,6 @@
 # .DS_Store's
 - .[Dd][Ss]_[Ss]tore$

-# Mercury --use-subdirs
-- Mercury/
-
 # R packages
 - ^vignettes/
 - ^inst/extdata/
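In linguist these patterns are compiled into one combined regexp used by BlobHelper#vendored?; the lambda below is only a simplified stand-in to show what the two new entries match:

patterns = [
  '(^|/)Chart\.js$',
  '(^|/)[Cc]ode[Mm]irror/(lib|mode|theme|addon|keymap)'
]
vendored = ->(path) { patterns.any? { |p| Regexp.new(p).match(path) } }

vendored.call("public/assets/Chart.js")          # => true
vendored.call("vendor/codemirror/mode/ruby.js")  # => true
vendored.call("app/charts.js")                   # => false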

View File

@@ -1,3 +1,3 @@
 module Linguist
-  VERSION = "3.5.0"
+  VERSION = "4.2.0"
 end

package.json (new file, 6 lines)
View File

@@ -0,0 +1,6 @@
{
"repository": "https://github.com/github/linguist",
"dependencies": {
"season": "~>3.0"
}
}

View File

@@ -0,0 +1,110 @@
<?xml version="1.0" encoding="iso-8859-1"?>
<project name="WebBuild">
<!-- generate timestamps -->
<tstamp />
<!-- Debugging Macro -->
<import file="echopath.xml" />
<!-- JS build files macro -->
<import file="rhinoscript.xml" />
<!-- Component Build Files -->
<import file="setup.xml" />
<import file="clean.xml" />
<import file="copy.xml" />
<import file="file.transform.xml" />
<import file="external.tools.xml" />
<import file="rename.xml" />
<import file="js.xml" />
<import file="css.xml" />
<import file="img.xml" />
<import file="png8.xml" />
<import file="yui.xml" />
<import file="cdn.xml" />
<import file="datauri.xml" />
<import file="devlive.xml" />
<!-- This dirname is the only complete path we know for sure, everything builds off of it -->
<dirname property="dir.build" file="${ant.file.WebBuild}" />
<!-- get name for newly built folder -->
<basename property="app.name" file="${basedir}" />
<!-- read global properties file -->
<property file="${dir.build}\build.properties" />
<!-- Build Directories -->
<property name="dir.build.js" location="${dir.build}/js" />
<!-- App Directories -->
<property name="dir.app" location="${dir.result}/${app.name}" />
<property name="dir.app.temp" location="${dir.temp}/${app.name}" />
<property name="dir.app.files" location="${dir.app.temp}/${dir.files}" />
<!-- Files -->
<property name="mapping.js" location="${dir.app.temp}/${mapping.file.js}" />
<property name="mapping.css" location="${dir.app.temp}/${mapping.file.css}" />
<property name="mapping.img" location="${dir.app.temp}/${mapping.file.img}" />
<property name="mapping.swf" location="${dir.app.temp}/${mapping.file.swf}" />
<property name="mapping.fonts" location="${dir.app.temp}/${mapping.file.fonts}" />
<!-- Tool Directories -->
<property name="dir.bin" location="${dir.build}/Bin" />
<property name="dir.jar" location="${dir.bin}/jar" />
<!-- Tool Files -->
<property name="tools.compressor" location="${dir.jar}/${tools.file.compressor}" />
<property name="tools.cssembed" location="${dir.jar}/${tools.file.cssembed}" />
<property name="tools.filetransform" location="${dir.jar}/${tools.file.filetransform}" />
<property name="tools.optipng" location="${dir.bin}/${tools.file.optipng}" />
<property name="tools.jpegtran" location="${dir.bin}/${tools.file.jpegtran}" />
<!-- BUILD TARGETS -->
<!-- low level utility build targets -->
<!-- Build the tools -->
<target name="-setup.build.tools"
depends="-define.filetransform, -define.cssembed, -define.yuicompressor, -define.jsclasspath"
/>
<!-- set up filesystem properties -->
<target
name="-setup"
depends="-setup.mode, -setup.conditions, -setup.js, -setup.css, -setup.swf, -setup.img, -setup.fonts, -setup.yui"
/>
<!-- utility-ish targets -->
<target name="copy" depends="clean, tools, -copy" />
<target name="tools" depends="-setup.build.tools" />
<target name="finalize" depends="copy, -finalize" />
<target name="-prepare" depends="copy, -setup" />
<!-- individual component build targets (empty descriptions are to make sure they show in "ant -p") -->
<target name="devlive" depends="-prepare, -devlive" description="" />
<target name="js" depends="-prepare, -js" description="" />
<target name="css" depends="-prepare, -css" description="" />
<target name="rename" depends="-prepare, -rename" description="" />
<target name="yui" depends="-prepare, rename, -yui" description="" />
<target name="cdn" depends="-prepare, -cdn" description="" />
<!-- high level build targets (Excluding of images is on purpose here, it's slow) -->
<target name="core"
depends="devlive, js, css, cdn, rename, yui, -js.inline"
description="Core build work"
/>
<target name="prod"
depends="core, finalize"
description="Full Production Build"
/>
<!-- debug target -->
<target name="debug" depends="-setup">
<echoproperties/>
</target>
</project>

View File

@@ -0,0 +1 @@
ant.xml

samples/C++/16F88.h (new file, 86 lines)
View File

@@ -0,0 +1,86 @@
/*
* This file is part of PIC
* Copyright © 2012 Rachel Mant (dx-mon@users.sourceforge.net)
*
* PIC is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* PIC is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
enum PIC16F88Instruction
{
ADDWF,
ANDWF,
CLRF,
CLRW,
COMF,
DECF,
DECFSZ,
INCF,
INCFSZ,
IORWF,
MOVF,
MOVWF,
NOP,
RLF,
RRF,
SUBWF,
SWAPF,
XORWF,
BCF,
BSF,
BTFSC,
BTFSS,
ADDLW,
ANDLW,
CALL,
CLRWDT,
GOTO,
IORLW,
MOVLW,
RETFIE,
RETLW,
RETURN,
SLEEP,
SUBLW,
XORLW
};
class PIC16F88
{
public:
PIC16F88(ROM *ProgramMemory);
void Step();
private:
uint8_t q;
bool nextIsNop, trapped;
Memory *memory;
ROM *program;
Stack<uint16_t, 8> *CallStack;
Register<uint16_t> *PC;
Register<> *WREG, *PCL, *STATUS, *PCLATCH;
PIC16F88Instruction inst;
uint16_t instrWord;
private:
void DecodeInstruction();
void ProcessInstruction();
uint8_t GetBank();
uint8_t GetMemoryContents(uint8_t partialAddress);
void SetMemoryContents(uint8_t partialAddress, uint8_t newVal);
void CheckZero(uint8_t value);
void StoreValue(uint8_t value, bool updateZero);
uint8_t SetCarry(bool val);
uint16_t GetPCHFinalBits();
};

samples/C++/Memory16F88.h (new file, 32 lines)
View File

@@ -0,0 +1,32 @@
/*
* This file is part of PIC
* Copyright © 2012 Rachel Mant (dx-mon@users.sourceforge.net)
*
* PIC is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* PIC is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "Memory.h"
class Memory16F88 : public Memory
{
private:
uint8_t memory[512];
std::map<uint32_t, MemoryLocation *> memoryMap;
public:
Memory16F88();
uint8_t Dereference(uint8_t bank, uint8_t partialAddress);
uint8_t *Reference(uint8_t bank, uint8_t partialAddress);
uint8_t *operator [](uint32_t ref);
};

View File

@@ -0,0 +1,76 @@
/*
* This file is part of IRCBot
* Copyright © 2014 Rachel Mant (dx-mon@users.sourceforge.net)
*
* IRCBot is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* IRCBot is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __THREADED_QUEUE_H__
#define __THREADED_QUEUE_H__
#include <pthread.h>
#include <queue>
template<class T>
class ThreadedQueue : public std::queue<T>
{
private:
pthread_mutex_t queueMutex;
pthread_cond_t queueCond;
public:
ThreadedQueue()
{
pthread_mutexattr_t mutexAttrs;
pthread_condattr_t condAttrs;
pthread_mutexattr_init(&mutexAttrs);
pthread_mutexattr_settype(&mutexAttrs, PTHREAD_MUTEX_ERRORCHECK);
pthread_mutex_init(&queueMutex, &mutexAttrs);
pthread_mutexattr_destroy(&mutexAttrs);
pthread_condattr_init(&condAttrs);
pthread_condattr_setpshared(&condAttrs, PTHREAD_PROCESS_PRIVATE);
pthread_cond_init(&queueCond, &condAttrs);
pthread_condattr_destroy(&condAttrs);
}
~ThreadedQueue()
{
pthread_cond_destroy(&queueCond);
pthread_mutex_destroy(&queueMutex);
}
void waitItems()
{
pthread_mutex_lock(&queueMutex);
pthread_cond_wait(&queueCond, &queueMutex);
pthread_mutex_unlock(&queueMutex);
}
void signalItems()
{
pthread_mutex_lock(&queueMutex);
pthread_cond_broadcast(&queueCond);
pthread_mutex_unlock(&queueMutex);
}
void push(T item)
{
std::queue<T>::push(item);
signalItems();
}
};
#endif /*__THREADED_QUEUE_H__*/

samples/C/2D.C (new file, 145 lines)
View File

@@ -0,0 +1,145 @@
#include "2D.h"
#include <math.h>
void set_vgabasemem(void)
{
ULONG vgabase;
SELECTOR tmp;
asm mov [tmp], ds
dpmi_get_sel_base(&vgabase, tmp);
vgabasemem = (char *)(-vgabase + 0xa0000);
}
void drw_chdis(int mode) // change the display!
{
regs.b.ah = 0x00; // seet theh display moode
regs.b.al = mode; // change it to the mode like innit
regs.h.flags = 0x72;// Set the dingoes kidneys out of FLAGS eh?
regs.h.ss = 0; // Like, totally set the stack segment
regs.h.sp = 0; // Set tha stack pointaaaaahhhhh!!!
dpmi_simulate_real_interrupt(0x10, &regs);
}
void drw_pix(int x, int y, enum COLORS col)
{
*VGAPIX(x, y) = col;
}
void drw_line(int x0, int y0, int x1, int y1, enum COLORS col)
{
// Going for the optimized version of bresenham's line algo.
int stp = (abs(y0 - y1) > abs(x0 - x1));
int tmp, dx, dy, err, yi, i, j; // yi = y excrement
if (stp) {
// swappity swap
tmp = y0;
y0 = x0;
x0 = tmp;
tmp = y1;
y1 = x1;
x1 = tmp;
}
// AAAAND NOW WE MUST DO ZEES AGAIN :(
// I'm sure there was a func somewhere that does this? :P
if (x0 > x1) {
tmp = x0;
x0 = x1;
x1 = tmp;
tmp = y0;
y0 = y1;
y1 = tmp;
}
dx = (x1 - x0);
dy = (abs(y1 - y0));
err = (dx / 2);
if (y0 < y1)
yi = 1;
else
yi = -1;
j = y0;
for (i = x0; i < x1; i++)
{
if (stp)
*VGAPIX(j, i) = col;
else
*VGAPIX(i, j) = col;
err -= dy;
if (err < 0) {
j += yi;
err += dx;
}
}
}
void drw_rectl(int x, int y, int w, int h, enum COLORS col)
{
drw_line(x, y, x+w, y, col);
drw_line(x+w, y, x+w, y+h, col);
drw_line(x, y, x, y+h, col);
drw_line(x, y+h, x+w+1, y+h, col);
}
void drw_rectf(int x, int y, int w, int h, enum COLORS col)
{
int i, j;
for (j = y; j < x+h; j++) {
for (i = x; i < y+w; i++) {
*VGAPIX(i, j) = col;
}
}
}
void drw_circl(int x, int y, int rad, enum COLORS col)
{
int mang, i; // max angle, haha
int px, py;
mang = 360; // Yeah yeah I'll switch to rad later
for (i = 0; i <= mang; i++)
{
px = cos(i)*rad + x; // + px; // causes some really cools effects! :D
py = sin(i)*rad + y; // + py;
*VGAPIX(px, py) = col;
}
}
void drw_tex(int x, int y, int w, int h, enum COLORS tex[])
{ // i*w+j
int i, j;
for (i = 0; i < w; i++)
{
for (j = 0; j < h; j++)
{
*VGAPIX(x+i, y+j) = tex[j*w+i];
}
}
}
void 2D_init(void)
{
set_vgabasemem();
drw_chdis(0x13);
}
void 2D_exit(void)
{
drw_chdis(3);
}
/*
int main()
{
set_vgabasemem();
drw_chdis(0x13);
while(!kbhit()) {
if ((getch()) == 0x1b) // escape
break;
}
drw_chdis(3);
return 0;
}
*/

samples/C/2D.H (new file, 29 lines)
View File

@@ -0,0 +1,29 @@
#ifndef __2DGFX
#define __2DGFX
// Includes
#include <stdio.h>
#include <math.h>
#include <conio.h>
#include <dpmi.h>
// Defines
#define VGAPIX(x,y) (vgabasemem + (x) + (y) * 320)
// Variables
char * vgabasemem;
DPMI_REGS regs;
// Drawing functions:
//void setvgabasemem(void);
void drw_chdis(int mode); // draw_func_change_display
void drw_pix(int x, int y, enum COLORS col);
void drw_line(int x0, int y0, int x1, int y1, enum COLORS col);
void drw_rectl(int x, int y, int w, int h, enum COLORS col);
void drw_rectf(int x, int y, int w, int h, enum COLORS col);
void drw_cirl(int x, int y, int rad, enum COLORS col);
void drw_tex(int x, int y, int w, int h, enum COLORS tex[]);
void 2D_init(void);
void 2D_exit(void);
#endif

samples/C/ArrowLeft.h (new file, 93 lines)
View File

@@ -0,0 +1,93 @@
/*
* This file is part of GTK++ (libGTK++)
* Copyright © 2012 Rachel Mant (dx-mon@users.sourceforge.net)
*
* GTK++ is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* GTK++ is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/* GdkPixbuf RGBA C-Source image dump */
#ifdef __SUNPRO_C
#pragma align 4 (ArrowLeft)
#endif
#ifdef __GNUC__
static const uint8_t ArrowLeft[] __attribute__ ((__aligned__ (4))) =
#else
static const uint8_t ArrowLeft[] =
#endif
{ ""
/* Pixbuf magic (0x47646b50) */
"GdkP"
/* length: header (24) + pixel_data (1600) */
"\0\0\6X"
/* pixdata_type (0x1010002) */
"\1\1\0\2"
/* rowstride (80) */
"\0\0\0P"
/* width (20) */
"\0\0\0\24"
/* height (20) */
"\0\0\0\24"
/* pixel_data: */
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\377\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\377\0\0\0\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\377\0\0\0\377\0\0\0\377\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\377\0\0\0\377\0\0\0\377"
"\0\0\0\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\377\0\0\0\377\0\0\0\377\0\0\0\377\0\0\0\377\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\377\0\0\0\377\0\0\0\377\0\0\0\377\0\0\0\377"
"\0\0\0\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\377\0\0\0\377\0\0\0\377\0\0\0\377\0\0\0\377\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\377\0\0\0\377\0\0\0\377\0"
"\0\0\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\377\0\0\0\377\0\0\0\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\377\0\0\0\377\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"\0\0"};

samples/C/GLKMatrix4.h (new file, 903 lines)
View File

@@ -0,0 +1,903 @@
//
// GLKMatrix4.h
// GLKit
//
// Copyright (c) 2011, Apple Inc. All rights reserved.
//
#ifndef __GLK_MATRIX_4_H
#define __GLK_MATRIX_4_H
#include <stddef.h>
#include <stdbool.h>
#include <math.h>
#if defined(__ARM_NEON__)
#include <arm_neon.h>
#endif
#include <GLKit/GLKMathTypes.h>
#include <GLKit/GLKVector3.h>
#include <GLKit/GLKVector4.h>
#include <GLKit/GLKQuaternion.h>
#ifdef __cplusplus
extern "C" {
#endif
#pragma mark -
#pragma mark Prototypes
#pragma mark -
extern const GLKMatrix4 GLKMatrix4Identity;
/*
m30, m31, and m32 correspond to the translation values tx, ty, tz, respectively.
*/
static __inline__ GLKMatrix4 GLKMatrix4Make(float m00, float m01, float m02, float m03,
float m10, float m11, float m12, float m13,
float m20, float m21, float m22, float m23,
float m30, float m31, float m32, float m33);
/*
m03, m13, and m23 correspond to the translation values tx, ty, tz, respectively.
*/
static __inline__ GLKMatrix4 GLKMatrix4MakeAndTranspose(float m00, float m01, float m02, float m03,
float m10, float m11, float m12, float m13,
float m20, float m21, float m22, float m23,
float m30, float m31, float m32, float m33);
/*
m[12], m[13], and m[14] correspond to the translation values tx, ty, and tz, respectively.
*/
static __inline__ GLKMatrix4 GLKMatrix4MakeWithArray(float values[16]);
/*
m[3], m[7], and m[11] correspond to the translation values tx, ty, and tz, respectively.
*/
static __inline__ GLKMatrix4 GLKMatrix4MakeWithArrayAndTranspose(float values[16]);
/*
row0, row1, and row2's last component should correspond to the translation values tx, ty, and tz, respectively.
*/
static __inline__ GLKMatrix4 GLKMatrix4MakeWithRows(GLKVector4 row0,
GLKVector4 row1,
GLKVector4 row2,
GLKVector4 row3);
/*
column3's first three components should correspond to the translation values tx, ty, and tz.
*/
static __inline__ GLKMatrix4 GLKMatrix4MakeWithColumns(GLKVector4 column0,
GLKVector4 column1,
GLKVector4 column2,
GLKVector4 column3);
/*
The quaternion will be normalized before conversion.
*/
static __inline__ GLKMatrix4 GLKMatrix4MakeWithQuaternion(GLKQuaternion quaternion);
static __inline__ GLKMatrix4 GLKMatrix4MakeTranslation(float tx, float ty, float tz);
static __inline__ GLKMatrix4 GLKMatrix4MakeScale(float sx, float sy, float sz);
static __inline__ GLKMatrix4 GLKMatrix4MakeRotation(float radians, float x, float y, float z);
static __inline__ GLKMatrix4 GLKMatrix4MakeXRotation(float radians);
static __inline__ GLKMatrix4 GLKMatrix4MakeYRotation(float radians);
static __inline__ GLKMatrix4 GLKMatrix4MakeZRotation(float radians);
/*
Equivalent to gluPerspective.
*/
static __inline__ GLKMatrix4 GLKMatrix4MakePerspective(float fovyRadians, float aspect, float nearZ, float farZ);
/*
Equivalent to glFrustum.
*/
static __inline__ GLKMatrix4 GLKMatrix4MakeFrustum(float left, float right,
float bottom, float top,
float nearZ, float farZ);
/*
Equivalent to glOrtho.
*/
static __inline__ GLKMatrix4 GLKMatrix4MakeOrtho(float left, float right,
float bottom, float top,
float nearZ, float farZ);
/*
Equivalent to gluLookAt.
*/
static __inline__ GLKMatrix4 GLKMatrix4MakeLookAt(float eyeX, float eyeY, float eyeZ,
float centerX, float centerY, float centerZ,
float upX, float upY, float upZ);
/*
Returns the upper left 3x3 portion of the 4x4 matrix.
*/
static __inline__ GLKMatrix3 GLKMatrix4GetMatrix3(GLKMatrix4 matrix);
/*
Returns the upper left 2x2 portion of the 4x4 matrix.
*/
static __inline__ GLKMatrix2 GLKMatrix4GetMatrix2(GLKMatrix4 matrix);
/*
GLKMatrix4GetRow returns vectors for rows 0, 1, and 2 whose last component will be the translation value tx, ty, and tz, respectively.
Valid row values range from 0 to 3, inclusive.
*/
static __inline__ GLKVector4 GLKMatrix4GetRow(GLKMatrix4 matrix, int row);
/*
GLKMatrix4GetColumn returns a vector for column 3 whose first three components will be the translation values tx, ty, and tz.
Valid column values range from 0 to 3, inclusive.
*/
static __inline__ GLKVector4 GLKMatrix4GetColumn(GLKMatrix4 matrix, int column);
/*
GLKMatrix4SetRow expects that the vector for row 0, 1, and 2 will have a translation value as its last component.
Valid row values range from 0 to 3, inclusive.
*/
static __inline__ GLKMatrix4 GLKMatrix4SetRow(GLKMatrix4 matrix, int row, GLKVector4 vector);
/*
GLKMatrix4SetColumn expects that the vector for column 3 will contain the translation values tx, ty, and tz as its first three components, respectively.
Valid column values range from 0 to 3, inclusive.
*/
static __inline__ GLKMatrix4 GLKMatrix4SetColumn(GLKMatrix4 matrix, int column, GLKVector4 vector);
static __inline__ GLKMatrix4 GLKMatrix4Transpose(GLKMatrix4 matrix);
GLKMatrix4 GLKMatrix4Invert(GLKMatrix4 matrix, bool *isInvertible);
GLKMatrix4 GLKMatrix4InvertAndTranspose(GLKMatrix4 matrix, bool *isInvertible);
static __inline__ GLKMatrix4 GLKMatrix4Multiply(GLKMatrix4 matrixLeft, GLKMatrix4 matrixRight);
static __inline__ GLKMatrix4 GLKMatrix4Add(GLKMatrix4 matrixLeft, GLKMatrix4 matrixRight);
static __inline__ GLKMatrix4 GLKMatrix4Subtract(GLKMatrix4 matrixLeft, GLKMatrix4 matrixRight);
static __inline__ GLKMatrix4 GLKMatrix4Translate(GLKMatrix4 matrix, float tx, float ty, float tz);
static __inline__ GLKMatrix4 GLKMatrix4TranslateWithVector3(GLKMatrix4 matrix, GLKVector3 translationVector);
/*
The last component of the GLKVector4, translationVector, is ignored.
*/
static __inline__ GLKMatrix4 GLKMatrix4TranslateWithVector4(GLKMatrix4 matrix, GLKVector4 translationVector);
static __inline__ GLKMatrix4 GLKMatrix4Scale(GLKMatrix4 matrix, float sx, float sy, float sz);
static __inline__ GLKMatrix4 GLKMatrix4ScaleWithVector3(GLKMatrix4 matrix, GLKVector3 scaleVector);
/*
The last component of the GLKVector4, scaleVector, is ignored.
*/
static __inline__ GLKMatrix4 GLKMatrix4ScaleWithVector4(GLKMatrix4 matrix, GLKVector4 scaleVector);
static __inline__ GLKMatrix4 GLKMatrix4Rotate(GLKMatrix4 matrix, float radians, float x, float y, float z);
static __inline__ GLKMatrix4 GLKMatrix4RotateWithVector3(GLKMatrix4 matrix, float radians, GLKVector3 axisVector);
/*
The last component of the GLKVector4, axisVector, is ignored.
*/
static __inline__ GLKMatrix4 GLKMatrix4RotateWithVector4(GLKMatrix4 matrix, float radians, GLKVector4 axisVector);
static __inline__ GLKMatrix4 GLKMatrix4RotateX(GLKMatrix4 matrix, float radians);
static __inline__ GLKMatrix4 GLKMatrix4RotateY(GLKMatrix4 matrix, float radians);
static __inline__ GLKMatrix4 GLKMatrix4RotateZ(GLKMatrix4 matrix, float radians);
/*
Assumes 0 in the w component.
*/
static __inline__ GLKVector3 GLKMatrix4MultiplyVector3(GLKMatrix4 matrixLeft, GLKVector3 vectorRight);
/*
Assumes 1 in the w component.
*/
static __inline__ GLKVector3 GLKMatrix4MultiplyVector3WithTranslation(GLKMatrix4 matrixLeft, GLKVector3 vectorRight);
/*
Assumes 1 in the w component and divides the resulting vector by w before returning.
*/
static __inline__ GLKVector3 GLKMatrix4MultiplyAndProjectVector3(GLKMatrix4 matrixLeft, GLKVector3 vectorRight);
/*
Assumes 0 in the w component.
*/
static __inline__ void GLKMatrix4MultiplyVector3Array(GLKMatrix4 matrix, GLKVector3 *vectors, size_t vectorCount);
/*
Assumes 1 in the w component.
*/
static __inline__ void GLKMatrix4MultiplyVector3ArrayWithTranslation(GLKMatrix4 matrix, GLKVector3 *vectors, size_t vectorCount);
/*
Assumes 1 in the w component and divides the resulting vector by w before returning.
*/
static __inline__ void GLKMatrix4MultiplyAndProjectVector3Array(GLKMatrix4 matrix, GLKVector3 *vectors, size_t vectorCount);
static __inline__ GLKVector4 GLKMatrix4MultiplyVector4(GLKMatrix4 matrixLeft, GLKVector4 vectorRight);
static __inline__ void GLKMatrix4MultiplyVector4Array(GLKMatrix4 matrix, GLKVector4 *vectors, size_t vectorCount);
#pragma mark -
#pragma mark Implementations
#pragma mark -
static __inline__ GLKMatrix4 GLKMatrix4Make(float m00, float m01, float m02, float m03,
float m10, float m11, float m12, float m13,
float m20, float m21, float m22, float m23,
float m30, float m31, float m32, float m33)
{
GLKMatrix4 m = { m00, m01, m02, m03,
m10, m11, m12, m13,
m20, m21, m22, m23,
m30, m31, m32, m33 };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakeAndTranspose(float m00, float m01, float m02, float m03,
float m10, float m11, float m12, float m13,
float m20, float m21, float m22, float m23,
float m30, float m31, float m32, float m33)
{
GLKMatrix4 m = { m00, m10, m20, m30,
m01, m11, m21, m31,
m02, m12, m22, m32,
m03, m13, m23, m33 };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakeWithArray(float values[16])
{
GLKMatrix4 m = { values[0], values[1], values[2], values[3],
values[4], values[5], values[6], values[7],
values[8], values[9], values[10], values[11],
values[12], values[13], values[14], values[15] };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakeWithArrayAndTranspose(float values[16])
{
#if defined(__ARM_NEON__)
float32x4x4_t m = vld4q_f32(values);
return *(GLKMatrix4 *)&m;
#else
GLKMatrix4 m = { values[0], values[4], values[8], values[12],
values[1], values[5], values[9], values[13],
values[2], values[6], values[10], values[14],
values[3], values[7], values[11], values[15] };
return m;
#endif
}
static __inline__ GLKMatrix4 GLKMatrix4MakeWithRows(GLKVector4 row0,
GLKVector4 row1,
GLKVector4 row2,
GLKVector4 row3)
{
GLKMatrix4 m = { row0.v[0], row1.v[0], row2.v[0], row3.v[0],
row0.v[1], row1.v[1], row2.v[1], row3.v[1],
row0.v[2], row1.v[2], row2.v[2], row3.v[2],
row0.v[3], row1.v[3], row2.v[3], row3.v[3] };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakeWithColumns(GLKVector4 column0,
GLKVector4 column1,
GLKVector4 column2,
GLKVector4 column3)
{
#if defined(__ARM_NEON__)
float32x4x4_t m;
m.val[0] = vld1q_f32(column0.v);
m.val[1] = vld1q_f32(column1.v);
m.val[2] = vld1q_f32(column2.v);
m.val[3] = vld1q_f32(column3.v);
return *(GLKMatrix4 *)&m;
#else
GLKMatrix4 m = { column0.v[0], column0.v[1], column0.v[2], column0.v[3],
column1.v[0], column1.v[1], column1.v[2], column1.v[3],
column2.v[0], column2.v[1], column2.v[2], column2.v[3],
column3.v[0], column3.v[1], column3.v[2], column3.v[3] };
return m;
#endif
}
static __inline__ GLKMatrix4 GLKMatrix4MakeWithQuaternion(GLKQuaternion quaternion)
{
quaternion = GLKQuaternionNormalize(quaternion);
float x = quaternion.q[0];
float y = quaternion.q[1];
float z = quaternion.q[2];
float w = quaternion.q[3];
float _2x = x + x;
float _2y = y + y;
float _2z = z + z;
float _2w = w + w;
GLKMatrix4 m = { 1.0f - _2y * y - _2z * z,
_2x * y + _2w * z,
_2x * z - _2w * y,
0.0f,
_2x * y - _2w * z,
1.0f - _2x * x - _2z * z,
_2y * z + _2w * x,
0.0f,
_2x * z + _2w * y,
_2y * z - _2w * x,
1.0f - _2x * x - _2y * y,
0.0f,
0.0f,
0.0f,
0.0f,
1.0f };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakeTranslation(float tx, float ty, float tz)
{
GLKMatrix4 m = GLKMatrix4Identity;
m.m[12] = tx;
m.m[13] = ty;
m.m[14] = tz;
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakeScale(float sx, float sy, float sz)
{
GLKMatrix4 m = GLKMatrix4Identity;
m.m[0] = sx;
m.m[5] = sy;
m.m[10] = sz;
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakeRotation(float radians, float x, float y, float z)
{
GLKVector3 v = GLKVector3Normalize(GLKVector3Make(x, y, z));
float cos = cosf(radians);
float cosp = 1.0f - cos;
float sin = sinf(radians);
GLKMatrix4 m = { cos + cosp * v.v[0] * v.v[0],
cosp * v.v[0] * v.v[1] + v.v[2] * sin,
cosp * v.v[0] * v.v[2] - v.v[1] * sin,
0.0f,
cosp * v.v[0] * v.v[1] - v.v[2] * sin,
cos + cosp * v.v[1] * v.v[1],
cosp * v.v[1] * v.v[2] + v.v[0] * sin,
0.0f,
cosp * v.v[0] * v.v[2] + v.v[1] * sin,
cosp * v.v[1] * v.v[2] - v.v[0] * sin,
cos + cosp * v.v[2] * v.v[2],
0.0f,
0.0f,
0.0f,
0.0f,
1.0f };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakeXRotation(float radians)
{
float cos = cosf(radians);
float sin = sinf(radians);
GLKMatrix4 m = { 1.0f, 0.0f, 0.0f, 0.0f,
0.0f, cos, sin, 0.0f,
0.0f, -sin, cos, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakeYRotation(float radians)
{
float cos = cosf(radians);
float sin = sinf(radians);
GLKMatrix4 m = { cos, 0.0f, -sin, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
sin, 0.0f, cos, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakeZRotation(float radians)
{
float cos = cosf(radians);
float sin = sinf(radians);
GLKMatrix4 m = { cos, sin, 0.0f, 0.0f,
-sin, cos, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakePerspective(float fovyRadians, float aspect, float nearZ, float farZ)
{
float cotan = 1.0f / tanf(fovyRadians / 2.0f);
GLKMatrix4 m = { cotan / aspect, 0.0f, 0.0f, 0.0f,
0.0f, cotan, 0.0f, 0.0f,
0.0f, 0.0f, (farZ + nearZ) / (nearZ - farZ), -1.0f,
0.0f, 0.0f, (2.0f * farZ * nearZ) / (nearZ - farZ), 0.0f };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakeFrustum(float left, float right,
float bottom, float top,
float nearZ, float farZ)
{
float ral = right + left;
float rsl = right - left;
float tsb = top - bottom;
float tab = top + bottom;
float fan = farZ + nearZ;
float fsn = farZ - nearZ;
GLKMatrix4 m = { 2.0f * nearZ / rsl, 0.0f, 0.0f, 0.0f,
0.0f, 2.0f * nearZ / tsb, 0.0f, 0.0f,
ral / rsl, tab / tsb, -fan / fsn, -1.0f,
0.0f, 0.0f, (-2.0f * farZ * nearZ) / fsn, 0.0f };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakeOrtho(float left, float right,
float bottom, float top,
float nearZ, float farZ)
{
float ral = right + left;
float rsl = right - left;
float tab = top + bottom;
float tsb = top - bottom;
float fan = farZ + nearZ;
float fsn = farZ - nearZ;
GLKMatrix4 m = { 2.0f / rsl, 0.0f, 0.0f, 0.0f,
0.0f, 2.0f / tsb, 0.0f, 0.0f,
0.0f, 0.0f, -2.0f / fsn, 0.0f,
-ral / rsl, -tab / tsb, -fan / fsn, 1.0f };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4MakeLookAt(float eyeX, float eyeY, float eyeZ,
float centerX, float centerY, float centerZ,
float upX, float upY, float upZ)
{
GLKVector3 ev = { eyeX, eyeY, eyeZ };
GLKVector3 cv = { centerX, centerY, centerZ };
GLKVector3 uv = { upX, upY, upZ };
GLKVector3 n = GLKVector3Normalize(GLKVector3Add(ev, GLKVector3Negate(cv)));
GLKVector3 u = GLKVector3Normalize(GLKVector3CrossProduct(uv, n));
GLKVector3 v = GLKVector3CrossProduct(n, u);
GLKMatrix4 m = { u.v[0], v.v[0], n.v[0], 0.0f,
u.v[1], v.v[1], n.v[1], 0.0f,
u.v[2], v.v[2], n.v[2], 0.0f,
GLKVector3DotProduct(GLKVector3Negate(u), ev),
GLKVector3DotProduct(GLKVector3Negate(v), ev),
GLKVector3DotProduct(GLKVector3Negate(n), ev),
1.0f };
return m;
}
static __inline__ GLKMatrix3 GLKMatrix4GetMatrix3(GLKMatrix4 matrix)
{
GLKMatrix3 m = { matrix.m[0], matrix.m[1], matrix.m[2],
matrix.m[4], matrix.m[5], matrix.m[6],
matrix.m[8], matrix.m[9], matrix.m[10] };
return m;
}
static __inline__ GLKMatrix2 GLKMatrix4GetMatrix2(GLKMatrix4 matrix)
{
GLKMatrix2 m = { matrix.m[0], matrix.m[1],
matrix.m[4], matrix.m[5] };
return m;
}
static __inline__ GLKVector4 GLKMatrix4GetRow(GLKMatrix4 matrix, int row)
{
GLKVector4 v = { matrix.m[row], matrix.m[4 + row], matrix.m[8 + row], matrix.m[12 + row] };
return v;
}
static __inline__ GLKVector4 GLKMatrix4GetColumn(GLKMatrix4 matrix, int column)
{
#if defined(__ARM_NEON__)
float32x4_t v = vld1q_f32(&(matrix.m[column * 4]));
return *(GLKVector4 *)&v;
#else
GLKVector4 v = { matrix.m[column * 4 + 0], matrix.m[column * 4 + 1], matrix.m[column * 4 + 2], matrix.m[column * 4 + 3] };
return v;
#endif
}
static __inline__ GLKMatrix4 GLKMatrix4SetRow(GLKMatrix4 matrix, int row, GLKVector4 vector)
{
matrix.m[row] = vector.v[0];
matrix.m[row + 4] = vector.v[1];
matrix.m[row + 8] = vector.v[2];
matrix.m[row + 12] = vector.v[3];
return matrix;
}
static __inline__ GLKMatrix4 GLKMatrix4SetColumn(GLKMatrix4 matrix, int column, GLKVector4 vector)
{
#if defined(__ARM_NEON__)
float *dst = &(matrix.m[column * 4]);
vst1q_f32(dst, vld1q_f32(vector.v));
return matrix;
#else
matrix.m[column * 4 + 0] = vector.v[0];
matrix.m[column * 4 + 1] = vector.v[1];
matrix.m[column * 4 + 2] = vector.v[2];
matrix.m[column * 4 + 3] = vector.v[3];
return matrix;
#endif
}
static __inline__ GLKMatrix4 GLKMatrix4Transpose(GLKMatrix4 matrix)
{
#if defined(__ARM_NEON__)
float32x4x4_t m = vld4q_f32(matrix.m);
return *(GLKMatrix4 *)&m;
#else
GLKMatrix4 m = { matrix.m[0], matrix.m[4], matrix.m[8], matrix.m[12],
matrix.m[1], matrix.m[5], matrix.m[9], matrix.m[13],
matrix.m[2], matrix.m[6], matrix.m[10], matrix.m[14],
matrix.m[3], matrix.m[7], matrix.m[11], matrix.m[15] };
return m;
#endif
}
static __inline__ GLKMatrix4 GLKMatrix4Multiply(GLKMatrix4 matrixLeft, GLKMatrix4 matrixRight)
{
#if defined(__ARM_NEON__)
float32x4x4_t iMatrixLeft = *(float32x4x4_t *)&matrixLeft;
float32x4x4_t iMatrixRight = *(float32x4x4_t *)&matrixRight;
float32x4x4_t m;
m.val[0] = vmulq_n_f32(iMatrixLeft.val[0], vgetq_lane_f32(iMatrixRight.val[0], 0));
m.val[1] = vmulq_n_f32(iMatrixLeft.val[0], vgetq_lane_f32(iMatrixRight.val[1], 0));
m.val[2] = vmulq_n_f32(iMatrixLeft.val[0], vgetq_lane_f32(iMatrixRight.val[2], 0));
m.val[3] = vmulq_n_f32(iMatrixLeft.val[0], vgetq_lane_f32(iMatrixRight.val[3], 0));
m.val[0] = vmlaq_n_f32(m.val[0], iMatrixLeft.val[1], vgetq_lane_f32(iMatrixRight.val[0], 1));
m.val[1] = vmlaq_n_f32(m.val[1], iMatrixLeft.val[1], vgetq_lane_f32(iMatrixRight.val[1], 1));
m.val[2] = vmlaq_n_f32(m.val[2], iMatrixLeft.val[1], vgetq_lane_f32(iMatrixRight.val[2], 1));
m.val[3] = vmlaq_n_f32(m.val[3], iMatrixLeft.val[1], vgetq_lane_f32(iMatrixRight.val[3], 1));
m.val[0] = vmlaq_n_f32(m.val[0], iMatrixLeft.val[2], vgetq_lane_f32(iMatrixRight.val[0], 2));
m.val[1] = vmlaq_n_f32(m.val[1], iMatrixLeft.val[2], vgetq_lane_f32(iMatrixRight.val[1], 2));
m.val[2] = vmlaq_n_f32(m.val[2], iMatrixLeft.val[2], vgetq_lane_f32(iMatrixRight.val[2], 2));
m.val[3] = vmlaq_n_f32(m.val[3], iMatrixLeft.val[2], vgetq_lane_f32(iMatrixRight.val[3], 2));
m.val[0] = vmlaq_n_f32(m.val[0], iMatrixLeft.val[3], vgetq_lane_f32(iMatrixRight.val[0], 3));
m.val[1] = vmlaq_n_f32(m.val[1], iMatrixLeft.val[3], vgetq_lane_f32(iMatrixRight.val[1], 3));
m.val[2] = vmlaq_n_f32(m.val[2], iMatrixLeft.val[3], vgetq_lane_f32(iMatrixRight.val[2], 3));
m.val[3] = vmlaq_n_f32(m.val[3], iMatrixLeft.val[3], vgetq_lane_f32(iMatrixRight.val[3], 3));
return *(GLKMatrix4 *)&m;
#else
GLKMatrix4 m;
m.m[0] = matrixLeft.m[0] * matrixRight.m[0] + matrixLeft.m[4] * matrixRight.m[1] + matrixLeft.m[8] * matrixRight.m[2] + matrixLeft.m[12] * matrixRight.m[3];
m.m[4] = matrixLeft.m[0] * matrixRight.m[4] + matrixLeft.m[4] * matrixRight.m[5] + matrixLeft.m[8] * matrixRight.m[6] + matrixLeft.m[12] * matrixRight.m[7];
m.m[8] = matrixLeft.m[0] * matrixRight.m[8] + matrixLeft.m[4] * matrixRight.m[9] + matrixLeft.m[8] * matrixRight.m[10] + matrixLeft.m[12] * matrixRight.m[11];
m.m[12] = matrixLeft.m[0] * matrixRight.m[12] + matrixLeft.m[4] * matrixRight.m[13] + matrixLeft.m[8] * matrixRight.m[14] + matrixLeft.m[12] * matrixRight.m[15];
m.m[1] = matrixLeft.m[1] * matrixRight.m[0] + matrixLeft.m[5] * matrixRight.m[1] + matrixLeft.m[9] * matrixRight.m[2] + matrixLeft.m[13] * matrixRight.m[3];
m.m[5] = matrixLeft.m[1] * matrixRight.m[4] + matrixLeft.m[5] * matrixRight.m[5] + matrixLeft.m[9] * matrixRight.m[6] + matrixLeft.m[13] * matrixRight.m[7];
m.m[9] = matrixLeft.m[1] * matrixRight.m[8] + matrixLeft.m[5] * matrixRight.m[9] + matrixLeft.m[9] * matrixRight.m[10] + matrixLeft.m[13] * matrixRight.m[11];
m.m[13] = matrixLeft.m[1] * matrixRight.m[12] + matrixLeft.m[5] * matrixRight.m[13] + matrixLeft.m[9] * matrixRight.m[14] + matrixLeft.m[13] * matrixRight.m[15];
m.m[2] = matrixLeft.m[2] * matrixRight.m[0] + matrixLeft.m[6] * matrixRight.m[1] + matrixLeft.m[10] * matrixRight.m[2] + matrixLeft.m[14] * matrixRight.m[3];
m.m[6] = matrixLeft.m[2] * matrixRight.m[4] + matrixLeft.m[6] * matrixRight.m[5] + matrixLeft.m[10] * matrixRight.m[6] + matrixLeft.m[14] * matrixRight.m[7];
m.m[10] = matrixLeft.m[2] * matrixRight.m[8] + matrixLeft.m[6] * matrixRight.m[9] + matrixLeft.m[10] * matrixRight.m[10] + matrixLeft.m[14] * matrixRight.m[11];
m.m[14] = matrixLeft.m[2] * matrixRight.m[12] + matrixLeft.m[6] * matrixRight.m[13] + matrixLeft.m[10] * matrixRight.m[14] + matrixLeft.m[14] * matrixRight.m[15];
m.m[3] = matrixLeft.m[3] * matrixRight.m[0] + matrixLeft.m[7] * matrixRight.m[1] + matrixLeft.m[11] * matrixRight.m[2] + matrixLeft.m[15] * matrixRight.m[3];
m.m[7] = matrixLeft.m[3] * matrixRight.m[4] + matrixLeft.m[7] * matrixRight.m[5] + matrixLeft.m[11] * matrixRight.m[6] + matrixLeft.m[15] * matrixRight.m[7];
m.m[11] = matrixLeft.m[3] * matrixRight.m[8] + matrixLeft.m[7] * matrixRight.m[9] + matrixLeft.m[11] * matrixRight.m[10] + matrixLeft.m[15] * matrixRight.m[11];
m.m[15] = matrixLeft.m[3] * matrixRight.m[12] + matrixLeft.m[7] * matrixRight.m[13] + matrixLeft.m[11] * matrixRight.m[14] + matrixLeft.m[15] * matrixRight.m[15];
return m;
#endif
}
static __inline__ GLKMatrix4 GLKMatrix4Add(GLKMatrix4 matrixLeft, GLKMatrix4 matrixRight)
{
#if defined(__ARM_NEON__)
float32x4x4_t iMatrixLeft = *(float32x4x4_t *)&matrixLeft;
float32x4x4_t iMatrixRight = *(float32x4x4_t *)&matrixRight;
float32x4x4_t m;
m.val[0] = vaddq_f32(iMatrixLeft.val[0], iMatrixRight.val[0]);
m.val[1] = vaddq_f32(iMatrixLeft.val[1], iMatrixRight.val[1]);
m.val[2] = vaddq_f32(iMatrixLeft.val[2], iMatrixRight.val[2]);
m.val[3] = vaddq_f32(iMatrixLeft.val[3], iMatrixRight.val[3]);
return *(GLKMatrix4 *)&m;
#else
GLKMatrix4 m;
m.m[0] = matrixLeft.m[0] + matrixRight.m[0];
m.m[1] = matrixLeft.m[1] + matrixRight.m[1];
m.m[2] = matrixLeft.m[2] + matrixRight.m[2];
m.m[3] = matrixLeft.m[3] + matrixRight.m[3];
m.m[4] = matrixLeft.m[4] + matrixRight.m[4];
m.m[5] = matrixLeft.m[5] + matrixRight.m[5];
m.m[6] = matrixLeft.m[6] + matrixRight.m[6];
m.m[7] = matrixLeft.m[7] + matrixRight.m[7];
m.m[8] = matrixLeft.m[8] + matrixRight.m[8];
m.m[9] = matrixLeft.m[9] + matrixRight.m[9];
m.m[10] = matrixLeft.m[10] + matrixRight.m[10];
m.m[11] = matrixLeft.m[11] + matrixRight.m[11];
m.m[12] = matrixLeft.m[12] + matrixRight.m[12];
m.m[13] = matrixLeft.m[13] + matrixRight.m[13];
m.m[14] = matrixLeft.m[14] + matrixRight.m[14];
m.m[15] = matrixLeft.m[15] + matrixRight.m[15];
return m;
#endif
}
static __inline__ GLKMatrix4 GLKMatrix4Subtract(GLKMatrix4 matrixLeft, GLKMatrix4 matrixRight)
{
#if defined(__ARM_NEON__)
float32x4x4_t iMatrixLeft = *(float32x4x4_t *)&matrixLeft;
float32x4x4_t iMatrixRight = *(float32x4x4_t *)&matrixRight;
float32x4x4_t m;
m.val[0] = vsubq_f32(iMatrixLeft.val[0], iMatrixRight.val[0]);
m.val[1] = vsubq_f32(iMatrixLeft.val[1], iMatrixRight.val[1]);
m.val[2] = vsubq_f32(iMatrixLeft.val[2], iMatrixRight.val[2]);
m.val[3] = vsubq_f32(iMatrixLeft.val[3], iMatrixRight.val[3]);
return *(GLKMatrix4 *)&m;
#else
GLKMatrix4 m;
m.m[0] = matrixLeft.m[0] - matrixRight.m[0];
m.m[1] = matrixLeft.m[1] - matrixRight.m[1];
m.m[2] = matrixLeft.m[2] - matrixRight.m[2];
m.m[3] = matrixLeft.m[3] - matrixRight.m[3];
m.m[4] = matrixLeft.m[4] - matrixRight.m[4];
m.m[5] = matrixLeft.m[5] - matrixRight.m[5];
m.m[6] = matrixLeft.m[6] - matrixRight.m[6];
m.m[7] = matrixLeft.m[7] - matrixRight.m[7];
m.m[8] = matrixLeft.m[8] - matrixRight.m[8];
m.m[9] = matrixLeft.m[9] - matrixRight.m[9];
m.m[10] = matrixLeft.m[10] - matrixRight.m[10];
m.m[11] = matrixLeft.m[11] - matrixRight.m[11];
m.m[12] = matrixLeft.m[12] - matrixRight.m[12];
m.m[13] = matrixLeft.m[13] - matrixRight.m[13];
m.m[14] = matrixLeft.m[14] - matrixRight.m[14];
m.m[15] = matrixLeft.m[15] - matrixRight.m[15];
return m;
#endif
}
static __inline__ GLKMatrix4 GLKMatrix4Translate(GLKMatrix4 matrix, float tx, float ty, float tz)
{
GLKMatrix4 m = { matrix.m[0], matrix.m[1], matrix.m[2], matrix.m[3],
matrix.m[4], matrix.m[5], matrix.m[6], matrix.m[7],
matrix.m[8], matrix.m[9], matrix.m[10], matrix.m[11],
matrix.m[0] * tx + matrix.m[4] * ty + matrix.m[8] * tz + matrix.m[12],
matrix.m[1] * tx + matrix.m[5] * ty + matrix.m[9] * tz + matrix.m[13],
matrix.m[2] * tx + matrix.m[6] * ty + matrix.m[10] * tz + matrix.m[14],
matrix.m[15] };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4TranslateWithVector3(GLKMatrix4 matrix, GLKVector3 translationVector)
{
GLKMatrix4 m = { matrix.m[0], matrix.m[1], matrix.m[2], matrix.m[3],
matrix.m[4], matrix.m[5], matrix.m[6], matrix.m[7],
matrix.m[8], matrix.m[9], matrix.m[10], matrix.m[11],
matrix.m[0] * translationVector.v[0] + matrix.m[4] * translationVector.v[1] + matrix.m[8] * translationVector.v[2] + matrix.m[12],
matrix.m[1] * translationVector.v[0] + matrix.m[5] * translationVector.v[1] + matrix.m[9] * translationVector.v[2] + matrix.m[13],
matrix.m[2] * translationVector.v[0] + matrix.m[6] * translationVector.v[1] + matrix.m[10] * translationVector.v[2] + matrix.m[14],
matrix.m[15] };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4TranslateWithVector4(GLKMatrix4 matrix, GLKVector4 translationVector)
{
GLKMatrix4 m = { matrix.m[0], matrix.m[1], matrix.m[2], matrix.m[3],
matrix.m[4], matrix.m[5], matrix.m[6], matrix.m[7],
matrix.m[8], matrix.m[9], matrix.m[10], matrix.m[11],
matrix.m[0] * translationVector.v[0] + matrix.m[4] * translationVector.v[1] + matrix.m[8] * translationVector.v[2] + matrix.m[12],
matrix.m[1] * translationVector.v[0] + matrix.m[5] * translationVector.v[1] + matrix.m[9] * translationVector.v[2] + matrix.m[13],
matrix.m[2] * translationVector.v[0] + matrix.m[6] * translationVector.v[1] + matrix.m[10] * translationVector.v[2] + matrix.m[14],
matrix.m[15] };
return m;
}
static __inline__ GLKMatrix4 GLKMatrix4Scale(GLKMatrix4 matrix, float sx, float sy, float sz)
{
#if defined(__ARM_NEON__)
float32x4x4_t iMatrix = *(float32x4x4_t *)&matrix;
float32x4x4_t m;
m.val[0] = vmulq_n_f32(iMatrix.val[0], (float32_t)sx);
m.val[1] = vmulq_n_f32(iMatrix.val[1], (float32_t)sy);
m.val[2] = vmulq_n_f32(iMatrix.val[2], (float32_t)sz);
m.val[3] = iMatrix.val[3];
return *(GLKMatrix4 *)&m;
#else
GLKMatrix4 m = { matrix.m[0] * sx, matrix.m[1] * sx, matrix.m[2] * sx, matrix.m[3] * sx,
matrix.m[4] * sy, matrix.m[5] * sy, matrix.m[6] * sy, matrix.m[7] * sy,
matrix.m[8] * sz, matrix.m[9] * sz, matrix.m[10] * sz, matrix.m[11] * sz,
matrix.m[12], matrix.m[13], matrix.m[14], matrix.m[15] };
return m;
#endif
}
static __inline__ GLKMatrix4 GLKMatrix4ScaleWithVector3(GLKMatrix4 matrix, GLKVector3 scaleVector)
{
#if defined(__ARM_NEON__)
float32x4x4_t iMatrix = *(float32x4x4_t *)&matrix;
float32x4x4_t m;
m.val[0] = vmulq_n_f32(iMatrix.val[0], (float32_t)scaleVector.v[0]);
m.val[1] = vmulq_n_f32(iMatrix.val[1], (float32_t)scaleVector.v[1]);
m.val[2] = vmulq_n_f32(iMatrix.val[2], (float32_t)scaleVector.v[2]);
m.val[3] = iMatrix.val[3];
return *(GLKMatrix4 *)&m;
#else
GLKMatrix4 m = { matrix.m[0] * scaleVector.v[0], matrix.m[1] * scaleVector.v[0], matrix.m[2] * scaleVector.v[0], matrix.m[3] * scaleVector.v[0],
matrix.m[4] * scaleVector.v[1], matrix.m[5] * scaleVector.v[1], matrix.m[6] * scaleVector.v[1], matrix.m[7] * scaleVector.v[1],
matrix.m[8] * scaleVector.v[2], matrix.m[9] * scaleVector.v[2], matrix.m[10] * scaleVector.v[2], matrix.m[11] * scaleVector.v[2],
matrix.m[12], matrix.m[13], matrix.m[14], matrix.m[15] };
return m;
#endif
}
static __inline__ GLKMatrix4 GLKMatrix4ScaleWithVector4(GLKMatrix4 matrix, GLKVector4 scaleVector)
{
#if defined(__ARM_NEON__)
float32x4x4_t iMatrix = *(float32x4x4_t *)&matrix;
float32x4x4_t m;
m.val[0] = vmulq_n_f32(iMatrix.val[0], (float32_t)scaleVector.v[0]);
m.val[1] = vmulq_n_f32(iMatrix.val[1], (float32_t)scaleVector.v[1]);
m.val[2] = vmulq_n_f32(iMatrix.val[2], (float32_t)scaleVector.v[2]);
m.val[3] = iMatrix.val[3];
return *(GLKMatrix4 *)&m;
#else
GLKMatrix4 m = { matrix.m[0] * scaleVector.v[0], matrix.m[1] * scaleVector.v[0], matrix.m[2] * scaleVector.v[0], matrix.m[3] * scaleVector.v[0],
matrix.m[4] * scaleVector.v[1], matrix.m[5] * scaleVector.v[1], matrix.m[6] * scaleVector.v[1], matrix.m[7] * scaleVector.v[1],
matrix.m[8] * scaleVector.v[2], matrix.m[9] * scaleVector.v[2], matrix.m[10] * scaleVector.v[2], matrix.m[11] * scaleVector.v[2],
matrix.m[12], matrix.m[13], matrix.m[14], matrix.m[15] };
return m;
#endif
}
static __inline__ GLKMatrix4 GLKMatrix4Rotate(GLKMatrix4 matrix, float radians, float x, float y, float z)
{
GLKMatrix4 rm = GLKMatrix4MakeRotation(radians, x, y, z);
return GLKMatrix4Multiply(matrix, rm);
}
static __inline__ GLKMatrix4 GLKMatrix4RotateWithVector3(GLKMatrix4 matrix, float radians, GLKVector3 axisVector)
{
GLKMatrix4 rm = GLKMatrix4MakeRotation(radians, axisVector.v[0], axisVector.v[1], axisVector.v[2]);
return GLKMatrix4Multiply(matrix, rm);
}
static __inline__ GLKMatrix4 GLKMatrix4RotateWithVector4(GLKMatrix4 matrix, float radians, GLKVector4 axisVector)
{
GLKMatrix4 rm = GLKMatrix4MakeRotation(radians, axisVector.v[0], axisVector.v[1], axisVector.v[2]);
return GLKMatrix4Multiply(matrix, rm);
}
static __inline__ GLKMatrix4 GLKMatrix4RotateX(GLKMatrix4 matrix, float radians)
{
GLKMatrix4 rm = GLKMatrix4MakeXRotation(radians);
return GLKMatrix4Multiply(matrix, rm);
}
static __inline__ GLKMatrix4 GLKMatrix4RotateY(GLKMatrix4 matrix, float radians)
{
GLKMatrix4 rm = GLKMatrix4MakeYRotation(radians);
return GLKMatrix4Multiply(matrix, rm);
}
static __inline__ GLKMatrix4 GLKMatrix4RotateZ(GLKMatrix4 matrix, float radians)
{
GLKMatrix4 rm = GLKMatrix4MakeZRotation(radians);
return GLKMatrix4Multiply(matrix, rm);
}
static __inline__ GLKVector3 GLKMatrix4MultiplyVector3(GLKMatrix4 matrixLeft, GLKVector3 vectorRight)
{
GLKVector4 v4 = GLKMatrix4MultiplyVector4(matrixLeft, GLKVector4Make(vectorRight.v[0], vectorRight.v[1], vectorRight.v[2], 0.0f));
return GLKVector3Make(v4.v[0], v4.v[1], v4.v[2]);
}
static __inline__ GLKVector3 GLKMatrix4MultiplyVector3WithTranslation(GLKMatrix4 matrixLeft, GLKVector3 vectorRight)
{
GLKVector4 v4 = GLKMatrix4MultiplyVector4(matrixLeft, GLKVector4Make(vectorRight.v[0], vectorRight.v[1], vectorRight.v[2], 1.0f));
return GLKVector3Make(v4.v[0], v4.v[1], v4.v[2]);
}
static __inline__ GLKVector3 GLKMatrix4MultiplyAndProjectVector3(GLKMatrix4 matrixLeft, GLKVector3 vectorRight)
{
GLKVector4 v4 = GLKMatrix4MultiplyVector4(matrixLeft, GLKVector4Make(vectorRight.v[0], vectorRight.v[1], vectorRight.v[2], 1.0f));
return GLKVector3MultiplyScalar(GLKVector3Make(v4.v[0], v4.v[1], v4.v[2]), 1.0f / v4.v[3]);
}
static __inline__ void GLKMatrix4MultiplyVector3Array(GLKMatrix4 matrix, GLKVector3 *vectors, size_t vectorCount)
{
size_t i;
for (i=0; i < vectorCount; i++)
vectors[i] = GLKMatrix4MultiplyVector3(matrix, vectors[i]);
}
static __inline__ void GLKMatrix4MultiplyVector3ArrayWithTranslation(GLKMatrix4 matrix, GLKVector3 *vectors, size_t vectorCount)
{
size_t i;
for (i=0; i < vectorCount; i++)
vectors[i] = GLKMatrix4MultiplyVector3WithTranslation(matrix, vectors[i]);
}
static __inline__ void GLKMatrix4MultiplyAndProjectVector3Array(GLKMatrix4 matrix, GLKVector3 *vectors, size_t vectorCount)
{
size_t i;
for (i=0; i < vectorCount; i++)
vectors[i] = GLKMatrix4MultiplyAndProjectVector3(matrix, vectors[i]);
}
static __inline__ GLKVector4 GLKMatrix4MultiplyVector4(GLKMatrix4 matrixLeft, GLKVector4 vectorRight)
{
#if defined(__ARM_NEON__)
float32x4x4_t iMatrix = *(float32x4x4_t *)&matrixLeft;
float32x4_t v;
iMatrix.val[0] = vmulq_n_f32(iMatrix.val[0], (float32_t)vectorRight.v[0]);
iMatrix.val[1] = vmulq_n_f32(iMatrix.val[1], (float32_t)vectorRight.v[1]);
iMatrix.val[2] = vmulq_n_f32(iMatrix.val[2], (float32_t)vectorRight.v[2]);
iMatrix.val[3] = vmulq_n_f32(iMatrix.val[3], (float32_t)vectorRight.v[3]);
iMatrix.val[0] = vaddq_f32(iMatrix.val[0], iMatrix.val[1]);
iMatrix.val[2] = vaddq_f32(iMatrix.val[2], iMatrix.val[3]);
v = vaddq_f32(iMatrix.val[0], iMatrix.val[2]);
return *(GLKVector4 *)&v;
#else
GLKVector4 v = { matrixLeft.m[0] * vectorRight.v[0] + matrixLeft.m[4] * vectorRight.v[1] + matrixLeft.m[8] * vectorRight.v[2] + matrixLeft.m[12] * vectorRight.v[3],
matrixLeft.m[1] * vectorRight.v[0] + matrixLeft.m[5] * vectorRight.v[1] + matrixLeft.m[9] * vectorRight.v[2] + matrixLeft.m[13] * vectorRight.v[3],
matrixLeft.m[2] * vectorRight.v[0] + matrixLeft.m[6] * vectorRight.v[1] + matrixLeft.m[10] * vectorRight.v[2] + matrixLeft.m[14] * vectorRight.v[3],
matrixLeft.m[3] * vectorRight.v[0] + matrixLeft.m[7] * vectorRight.v[1] + matrixLeft.m[11] * vectorRight.v[2] + matrixLeft.m[15] * vectorRight.v[3] };
return v;
#endif
}
static __inline__ void GLKMatrix4MultiplyVector4Array(GLKMatrix4 matrix, GLKVector4 *vectors, size_t vectorCount)
{
size_t i;
for (i=0; i < vectorCount; i++)
vectors[i] = GLKMatrix4MultiplyVector4(matrix, vectors[i]);
}
#ifdef __cplusplus
}
#endif
#endif /* __GLK_MATRIX_4_H */
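
The "Assumes 0/1 in the w component" notes above are the only behavioral difference between the GLKMatrix4MultiplyVector3 variants; the short sketch below shows what those conventions mean in practice, using only functions declared in this header. The umbrella include and the literal values are illustrative assumptions, not part of the sample file.

#include <GLKit/GLKMath.h>

static void w_component_example(void)
{
    /* A pure translation affects points (w = 1) but leaves directions (w = 0) unchanged. */
    GLKMatrix4 t = GLKMatrix4MakeTranslation(10.0f, 0.0f, 0.0f);
    GLKVector3 v = GLKVector3Make(1.0f, 2.0f, 3.0f);

    GLKVector3 dir = GLKMatrix4MultiplyVector3(t, v);                /* w = 0: still (1, 2, 3) */
    GLKVector3 pt  = GLKMatrix4MultiplyVector3WithTranslation(t, v); /* w = 1: becomes (11, 2, 3) */
    (void)dir; (void)pt;
}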

99
samples/C/NWMan.h Normal file

@@ -0,0 +1,99 @@
#ifndef _NME_WMAN_H
#define _NME_WMAN_H
// Internal window manager API
#include "NCompat.h"
START_HEAD
#include "NPos.h"
#include "NUtil.h"
#include "NTypes.h"
NTS(NWMan_event);
NSTRUCT(NWMan, {
// Init stuff
bool (*init)();
bool (*destroy)();
// Window stuff
bool (*create_window)();
bool (*destroy_window)();
void (*swap_buffers)();
// Event stuff
bool (*next_event)(NWMan_event* event);
// Time stuff
uint (*get_millis)();
void (*sleep)(uint millis);
// Info
int rshift_key;
int lshift_key;
int left_key;
int right_key;
});
NENUM(NWMan_event_type, {
N_WMAN_MOUSE_MOVE = 0,
N_WMAN_MOUSE_BUTTON = 1,
N_WMAN_MOUSE_WHEEL = 2,
N_WMAN_KEYBOARD = 10,
N_WMAN_QUIT = 20,
N_WMAN_RESIZE = 21,
N_WMAN_FOCUS = 22
});
#define N_WMAN_MOUSE_LEFT 0
#define N_WMAN_MOUSE_RIGHT 1
#define N_WMAN_MOUSE_MIDDLE 2
NSTRUCT(NWMan_event, {
NWMan_event_type type;
union {
// Mouse
NPos2i mouse_pos;
struct {
short id;
bool state;
} mouse_button;
signed char mouse_wheel; // 1 if up, -1 if down
// Keyboard
struct {
int key;
bool state;
} keyboard;
// Window
bool window_quit; // Will always be true if WM_QUIT
NPos2i window_size;
bool window_focus;
};
});
NWMan_event NWMan_event_new(NWMan_event_type type);
bool NWMan_init();
bool NWMan_destroy();
extern NWMan N_WMan;
END_HEAD
#endif
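
The header above only declares function pointers, so how a caller drives the window manager is not spelled out; this is a hypothetical consumer loop, assuming NWMan_init() fills in N_WMan and that the NSTRUCT/NENUM macros expand to ordinary C typedefs. The frame timing value is an illustrative assumption.

#include "NWMan.h"

/* Hypothetical main loop built only from the declared NWMan API. */
static void run_loop(void)
{
    if (!NWMan_init() || !N_WMan.create_window())
        return;

    bool running = true;
    NWMan_event ev;
    while (running) {
        while (N_WMan.next_event(&ev)) {
            if (ev.type == N_WMAN_QUIT)
                running = false;
            else if (ev.type == N_WMAN_KEYBOARD && ev.keyboard.state)
                ; /* key ev.keyboard.key was pressed */
        }
        N_WMan.swap_buffers();
        N_WMan.sleep(16); /* roughly 60 Hz, an assumed value */
    }

    N_WMan.destroy_window();
    NWMan_destroy();
}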

27
samples/C/Nightmare.h Normal file

@@ -0,0 +1,27 @@
#ifndef _NMEX_NIGHTMARE_H
#define _NMEX_NIGHTMARE_H
//#define NMEX
#include "../src/NCompat.h"
START_HEAD
#include "../src/NTypes.h"
#include "../src/NUtil.h"
#include "../src/NPorting.h"
#include "../src/NGlobals.h"
#include "../src/NLog.h"
#include "../src/NWMan.h"
#include "../src/NRsc.h"
#include "../src/NShader.h"
#include "../src/NSquare.h"
#include "../src/NImage.h"
#include "../src/NSprite.h"
#include "../src/NSpritesheet.h"
#include "../src/NEntity.h"
#include "../src/Game.h"
END_HEAD
#endif

89
samples/C/ntru_encrypt.h Normal file

@@ -0,0 +1,89 @@
/*
* Copyright (C) 2014 FH Bielefeld
*
* This file is part of a FH Bielefeld project.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
* MA 02110-1301 USA
*/
/**
* @file ntru_encrypt.h
* Header for the internal API of ntru_encrypt.c.
* @brief header for encrypt.c
*/
#ifndef PQC_ENCRYPT_H
#define PQC_ENCRYPT_H
#include "ntru_params.h"
#include "ntru_poly.h"
#include "ntru_string.h"
#include <fmpz_poly.h>
#include <fmpz.h>
/**
* encrypt the msg, using the math:
* e = (h r) + m (mod q)
*
* e = the encrypted poly
*
* h = the public key
*
* r = the random poly
*
* m = the message poly
*
* q = large mod
*
* @param msg_tern the message to encrypt, in ternary format
* @param pub_key the public key
* @param rnd the random poly (should have relatively small
* coefficients, but not restricted to {-1, 0, 1})
* @param out the output poly which is in the range {0, q-1}
* (not ternary!) [out]
* @param params ntru_params the ntru context
*/
void
ntru_encrypt_poly(
const fmpz_poly_t msg_tern,
const fmpz_poly_t pub_key,
const fmpz_poly_t rnd,
fmpz_poly_t out,
const ntru_params *params);
/**
* Encrypt a message in the form of a null-terminated char array and
* return a string.
*
* @param msg the message
* @param pub_key the public key
* @param rnd the random poly (should have relatively small
* coefficients, but not restricted to {-1, 0, 1})
* @param params ntru_params the ntru context
* @return the newly allocated encrypted string
*/
string *
ntru_encrypt_string(
const string *msg,
const fmpz_poly_t pub_key,
const fmpz_poly_t rnd,
const ntru_params *params);
#endif /* PQC_ENCRYPT_H */
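
The doc comment above gives the relation e = (h r) + m (mod q) and the two public entry points; below is a minimal, hypothetical calling sketch against those prototypes. It assumes FLINT's standard fmpz_poly_init/fmpz_poly_clear routines and leaves the ntru_params setup aside, since that struct's fields are not part of this header.

#include "ntru_encrypt.h"

/* Hypothetical usage: the caller is assumed to already hold a populated
 * ntru_params, the public key h, a random poly r and the ternary message m. */
void encrypt_example(const ntru_params *params,
                     const fmpz_poly_t pub_key,
                     const fmpz_poly_t rnd,
                     const fmpz_poly_t msg_tern)
{
    fmpz_poly_t e;
    fmpz_poly_init(e);                                     /* FLINT: allocate the output poly */
    ntru_encrypt_poly(msg_tern, pub_key, rnd, e, params);  /* e = h*r + m (mod q), coeffs in [0, q-1] */
    /* ... transmit or further process e ... */
    fmpz_poly_clear(e);
}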


@@ -0,0 +1,40 @@
###* @cjsx React.DOM ###
define 'myProject.ReactExampleComponent', [
'React'
'myProject.ExampleStore'
'myProject.ExampleActions'
'myProject.ReactExampleTable'
], (React, ExampleStore, ExampleActions, ReactExampleTable ) ->
ReactExampleComponent = React.createClass
mixins: [ListenMixin]
getInitialState: ->
rows: ExampleStore.getRows()
meta: ExampleStore.getMeta()
componentWillMount: ->
@listenTo ExampleStore
componentDidMount: ->
ExampleActions.getExampleData()
onStoreChange: ->
if this.isMounted()
@setState
rows: ExampleStore.getRows()
meta: ExampleStore.getMeta()
componentWillUnmount: ->
@stopListening ExampleStore
render: ->
<div className="page-wrap">
<header>
<strong> {@state.title} </strong>
</header>
<ReactExampleTable
rows={@state.rows}
meta={@state.meta}
/>
</div>

26
samples/Cool/list.cl Normal file

@@ -0,0 +1,26 @@
(* This simple example of a list class is adapted from an example in the
Cool distribution. *)
class List {
isNil() : Bool { true };
head() : Int { { abort(); 0; } };
tail() : List { { abort(); self; } };
cons(i : Int) : List {
(new Cons).init(i, self)
};
};
class Cons inherits List {
car : Int; -- The element in this list cell
cdr : List; -- The rest of the list
isNil() : Bool { false };
head() : Int { car };
tail() : List { cdr };
init(i : Int, rest : List) : List {
{
car <- i;
cdr <- rest;
self;
}
};
};

71
samples/Cool/sample.cl Normal file

@@ -0,0 +1,71 @@
(* Refer to Alex Aiken, "The Cool Reference Manual":
http://theory.stanford.edu/~aiken/software/cool/cool-manual.pdf
for language specification.
*)
-- Exhibit various language constructs
class Sample {
testCondition(x: Int): Bool {
if x = 0
then false
else
if x < (1 + 2) * 3
then true
else false
fi
fi
};
testLoop(y: Int): Bool {
while y > 0 loop
{
if not condition(y)
then y <- y / 2
else y <- y - 1;
}
pool
};
testAssign(z: Int): Bool {
i : Int;
i <- ~z;
};
testCase(var: Sample): SELF_TYPE {
io : IO <- new IO;
case var of
a : A => io.out_string("Class type is A\n");
b : B => io.out_string("Class type is B\n");
s : Sample => io.out_string("Class type is Sample\n");
o : Object => io.out_string("Class type is object\n");
esac
};
testLet(i: Int): Int {
let (a: Int in
let(b: Int <- 3, c: Int <- 4 in
{
a <- 2;
a * b * 2 / c;
}
)
)
};
};
-- Used to test subclasses
class A inherits Sample {};
class B inherits A {};
class C {
main() : Int {
(new Sample).testLet(1)
};
};
-- "Hello, world" example
class Main inherits IO {
main(): SELF_TYPE {
out_string("Hello, World.\n")
};
};

15
samples/F#/sample.fs Normal file

@@ -0,0 +1,15 @@
module Sample
open System
type Foo =
{
Bar : string
}
type Baz = interface end
let Sample1(xs : int list) : string =
xs
|> List.map (fun x -> string x)
|> String.concat ","

25
samples/FORTRAN/sample1.f Normal file

@@ -0,0 +1,25 @@
c comment
* comment
program main
end
subroutine foo( i, x, b )
INTEGER i
REAL x
LOGICAL b
if( i.ne.0 ) then
call bar( -i )
end if
return
end
double complex function baz()
baz = (0.0d0,0.0d0)
return
end


@@ -0,0 +1,25 @@
c comment
* comment
program main
end
subroutine foo( i, x, b )
INTEGER i
REAL x
LOGICAL b
if( i.ne.0 ) then
call bar( -i )
end if
return
end
double complex function baz()
baz = (0.0d0,0.0d0)
return
end

25
samples/FORTRAN/sample2.f Normal file

@@ -0,0 +1,25 @@
PROGRAM MAIN
END
C comment
* comment
SUBROUTINE foo( i, x, b )
INTEGER i
REAL x
LOGICAL b
IF( i.NE.0 ) THEN
CALL bar( -i )
END IF
RETURN
END
DOUBLE COMPLEX FUNCTION baz()
baz = (0.0d0,0.0d0)
RETURN
END

25
samples/FORTRAN/sample3.F Normal file

@@ -0,0 +1,25 @@
c comment
* comment
program main
end
subroutine foo( i, x, b )
INTEGER i
REAL x
LOGICAL b
if( i.ne.0 ) then
call bar( -i )
end if
return
end
double complex function baz()
baz = (0.0d0,0.0d0)
return
end

252
samples/Forth/core.f Normal file

@@ -0,0 +1,252 @@
: immediate lastxt @ dup c@ negate swap c! ;
: \ source nip >in ! ; immediate \ Copyright 2004, 2012 Lars Brinkhoff
: char \ ( "word" -- char )
bl-word here 1+ c@ ;
: ahead here 0 , ;
: resolve here swap ! ;
: ' bl-word here find 0branch [ ahead ] exit [ resolve ] 0 ;
: postpone-nonimmediate [ ' literal , ' compile, ] literal , ;
: create dovariable_code header, reveal ;
create postponers
' postpone-nonimmediate ,
' abort ,
' , ,
: word \ ( char "<chars>string<char>" -- caddr )
drop bl-word here ;
: postpone \ ( C: "word" -- )
bl word find 1+ cells postponers + @ execute ; immediate
: unresolved \ ( C: "word" -- orig )
postpone postpone postpone ahead ; immediate
: chars \ ( n1 -- n2 )
;
: else \ ( -- ) ( C: orig1 -- orig2 )
unresolved branch swap resolve ; immediate
: if \ ( flag -- ) ( C: -- orig )
unresolved 0branch ; immediate
: then \ ( -- ) ( C: orig -- )
resolve ; immediate
: [char] \ ( "word" -- )
char postpone literal ; immediate
: (does>) lastxt @ dodoes_code over >code ! r> swap >does ! ;
: does> postpone (does>) ; immediate
: begin \ ( -- ) ( C: -- dest )
here ; immediate
: while \ ( x -- ) ( C: dest -- orig dest )
unresolved 0branch swap ; immediate
: repeat \ ( -- ) ( C: orig dest -- )
postpone branch , resolve ; immediate
: until \ ( x -- ) ( C: dest -- )
postpone 0branch , ; immediate
: recurse lastxt @ compile, ; immediate
: pad \ ( -- addr )
here 1024 + ;
: parse \ ( char "string<char>" -- addr n )
pad >r begin
source? if <source 2dup <> else 0 0 then
while
r@ c! r> 1+ >r
repeat 2drop pad r> over - ;
: ( \ ( "string<paren>" -- )
[ char ) ] literal parse 2drop ; immediate
\ TODO: If necessary, refill and keep parsing.
: string, ( addr n -- )
here over allot align swap cmove ;
: (s") ( -- addr n ) ( R: ret1 -- ret2 )
r> dup @ swap cell+ 2dup + aligned >r swap ;
create squote 128 allot
: s" ( "string<quote>" -- addr n )
state @ if
postpone (s") [char] " parse dup , string,
else
[char] " parse >r squote r@ cmove squote r>
then ; immediate
: (abort") ( ... addr n -- ) ( R: ... -- )
cr type cr abort ;
: abort" ( ... x "string<quote>" -- ) ( R: ... -- )
postpone if postpone s" postpone (abort") postpone then ; immediate
\ ----------------------------------------------------------------------
( Core words. )
\ TODO: #
\ TODO: #>
\ TODO: #s
: and ( x y -- x&y ) nand invert ;
: * 1 2>r 0 swap begin r@ while
r> r> swap 2dup dup + 2>r and if swap over + swap then dup +
repeat r> r> 2drop drop ;
\ TODO: */mod
: +loop ( -- ) ( C: nest-sys -- )
postpone (+loop) postpone 0branch , postpone unloop ; immediate
: space bl emit ;
: ?.- dup 0 < if [char] - emit negate then ;
: digit [char] 0 + emit ;
: (.) base @ /mod ?dup if recurse then digit ;
: ." ( "string<quote>" -- ) postpone s" postpone type ; immediate
: . ( x -- ) ?.- (.) space ;
: postpone-number ( caddr -- )
0 0 rot count >number dup 0= if
2drop nip
postpone (literal) postpone (literal) postpone ,
postpone literal postpone ,
else
." Undefined: " type cr abort
then ;
' postpone-number postponers cell+ !
: / ( x y -- x/y ) /mod nip ;
: 0< ( n -- flag ) 0 < ;
: 1- ( n -- n-1 ) -1 + ;
: 2! ( x1 x2 addr -- ) swap over ! cell+ ! ;
: 2* ( n -- 2n ) dup + ;
\ Kernel: 2/
: 2@ ( addr -- x1 x2 ) dup cell+ @ swap @ ;
\ Kernel: 2drop
\ Kernel: 2dup
\ TODO: 2over ( x1 x2 x3 x4 -- x1 x2 x3 x4 x1 x2 )
\ 3 pick 3 pick ;
\ TODO: 2swap
\ TODO: <#
: abs ( n -- |n| )
dup 0< if negate then ;
\ TODO: accept
: c, ( n -- )
here c! 1 chars allot ;
: char+ ( n1 -- n2 )
1+ ;
: constant create , does> @ ;
: decimal ( -- )
10 base ! ;
: depth ( -- n )
data_stack 100 cells + 'SP @ - /cell / 2 - ;
: do ( n1 n2 -- ) ( R: -- loop-sys ) ( C: -- do-sys )
postpone 2>r here ; immediate
\ TODO: environment?
\ TODO: evaluate
\ TODO: fill
\ TODO: fm/mod )
\ TODO: hold
: j ( -- x1 ) ( R: x1 x2 x3 -- x1 x2 x3 )
'RP @ 3 cells + @ ;
\ TODO: leave
: loop ( -- ) ( C: nest-sys -- )
postpone 1 postpone (+loop)
postpone 0branch ,
postpone unloop ; immediate
: lshift begin ?dup while 1- swap dup + swap repeat ;
: rshift 1 begin over while dup + swap 1- swap repeat nip
2>r 0 1 begin r@ while
r> r> 2dup swap dup + 2>r and if swap over + swap then dup +
repeat r> r> 2drop drop ;
: max ( x y -- max[x,y] )
2dup > if drop else nip then ;
\ Kernel: min
\ TODO: mod
\ TODO: move
: (quit) ( R: ... -- )
return_stack 100 cells + 'RP !
0 'source-id ! tib ''source ! #tib ''#source !
postpone [
begin
refill
while
interpret state @ 0= if ." ok" cr then
repeat
bye ;
' (quit) ' quit >body cell+ !
\ TODO: s>d
\ TODO: sign
\ TODO: sm/rem
: spaces ( n -- )
0 do space loop ;
\ TODO: u.
: signbit ( -- n ) -1 1 rshift invert ;
: xor ( x y -- x^y ) 2dup nand >r r@ nand swap r> nand nand ;
: u< ( x y -- flag ) signbit xor swap signbit xor > ;
\ TODO: um/mod
: variable ( "word" -- )
create /cell allot ;
: ['] \ ( C: "word" -- )
' postpone literal ; immediate

252
samples/Forth/core.for Normal file

@@ -0,0 +1,252 @@
: immediate lastxt @ dup c@ negate swap c! ;
: \ source nip >in ! ; immediate \ Copyright 2004, 2012 Lars Brinkhoff
: char \ ( "word" -- char )
bl-word here 1+ c@ ;
: ahead here 0 , ;
: resolve here swap ! ;
: ' bl-word here find 0branch [ ahead ] exit [ resolve ] 0 ;
: postpone-nonimmediate [ ' literal , ' compile, ] literal , ;
: create dovariable_code header, reveal ;
create postponers
' postpone-nonimmediate ,
' abort ,
' , ,
: word \ ( char "<chars>string<char>" -- caddr )
drop bl-word here ;
: postpone \ ( C: "word" -- )
bl word find 1+ cells postponers + @ execute ; immediate
: unresolved \ ( C: "word" -- orig )
postpone postpone postpone ahead ; immediate
: chars \ ( n1 -- n2 )
;
: else \ ( -- ) ( C: orig1 -- orig2 )
unresolved branch swap resolve ; immediate
: if \ ( flag -- ) ( C: -- orig )
unresolved 0branch ; immediate
: then \ ( -- ) ( C: orig -- )
resolve ; immediate
: [char] \ ( "word" -- )
char postpone literal ; immediate
: (does>) lastxt @ dodoes_code over >code ! r> swap >does ! ;
: does> postpone (does>) ; immediate
: begin \ ( -- ) ( C: -- dest )
here ; immediate
: while \ ( x -- ) ( C: dest -- orig dest )
unresolved 0branch swap ; immediate
: repeat \ ( -- ) ( C: orig dest -- )
postpone branch , resolve ; immediate
: until \ ( x -- ) ( C: dest -- )
postpone 0branch , ; immediate
: recurse lastxt @ compile, ; immediate
: pad \ ( -- addr )
here 1024 + ;
: parse \ ( char "string<char>" -- addr n )
pad >r begin
source? if <source 2dup <> else 0 0 then
while
r@ c! r> 1+ >r
repeat 2drop pad r> over - ;
: ( \ ( "string<paren>" -- )
[ char ) ] literal parse 2drop ; immediate
\ TODO: If necessary, refill and keep parsing.
: string, ( addr n -- )
here over allot align swap cmove ;
: (s") ( -- addr n ) ( R: ret1 -- ret2 )
r> dup @ swap cell+ 2dup + aligned >r swap ;
create squote 128 allot
: s" ( "string<quote>" -- addr n )
state @ if
postpone (s") [char] " parse dup , string,
else
[char] " parse >r squote r@ cmove squote r>
then ; immediate
: (abort") ( ... addr n -- ) ( R: ... -- )
cr type cr abort ;
: abort" ( ... x "string<quote>" -- ) ( R: ... -- )
postpone if postpone s" postpone (abort") postpone then ; immediate
\ ----------------------------------------------------------------------
( Core words. )
\ TODO: #
\ TODO: #>
\ TODO: #s
: and ( x y -- x&y ) nand invert ;
: * 1 2>r 0 swap begin r@ while
r> r> swap 2dup dup + 2>r and if swap over + swap then dup +
repeat r> r> 2drop drop ;
\ TODO: */mod
: +loop ( -- ) ( C: nest-sys -- )
postpone (+loop) postpone 0branch , postpone unloop ; immediate
: space bl emit ;
: ?.- dup 0 < if [char] - emit negate then ;
: digit [char] 0 + emit ;
: (.) base @ /mod ?dup if recurse then digit ;
: ." ( "string<quote>" -- ) postpone s" postpone type ; immediate
: . ( x -- ) ?.- (.) space ;
: postpone-number ( caddr -- )
0 0 rot count >number dup 0= if
2drop nip
postpone (literal) postpone (literal) postpone ,
postpone literal postpone ,
else
." Undefined: " type cr abort
then ;
' postpone-number postponers cell+ !
: / ( x y -- x/y ) /mod nip ;
: 0< ( n -- flag ) 0 < ;
: 1- ( n -- n-1 ) -1 + ;
: 2! ( x1 x2 addr -- ) swap over ! cell+ ! ;
: 2* ( n -- 2n ) dup + ;
\ Kernel: 2/
: 2@ ( addr -- x1 x2 ) dup cell+ @ swap @ ;
\ Kernel: 2drop
\ Kernel: 2dup
\ TODO: 2over ( x1 x2 x3 x4 -- x1 x2 x3 x4 x1 x2 )
\ 3 pick 3 pick ;
\ TODO: 2swap
\ TODO: <#
: abs ( n -- |n| )
dup 0< if negate then ;
\ TODO: accept
: c, ( n -- )
here c! 1 chars allot ;
: char+ ( n1 -- n2 )
1+ ;
: constant create , does> @ ;
: decimal ( -- )
10 base ! ;
: depth ( -- n )
data_stack 100 cells + 'SP @ - /cell / 2 - ;
: do ( n1 n2 -- ) ( R: -- loop-sys ) ( C: -- do-sys )
postpone 2>r here ; immediate
\ TODO: environment?
\ TODO: evaluate
\ TODO: fill
\ TODO: fm/mod )
\ TODO: hold
: j ( -- x1 ) ( R: x1 x2 x3 -- x1 x2 x3 )
'RP @ 3 cells + @ ;
\ TODO: leave
: loop ( -- ) ( C: nest-sys -- )
postpone 1 postpone (+loop)
postpone 0branch ,
postpone unloop ; immediate
: lshift begin ?dup while 1- swap dup + swap repeat ;
: rshift 1 begin over while dup + swap 1- swap repeat nip
2>r 0 1 begin r@ while
r> r> 2dup swap dup + 2>r and if swap over + swap then dup +
repeat r> r> 2drop drop ;
: max ( x y -- max[x,y] )
2dup > if drop else nip then ;
\ Kernel: min
\ TODO: mod
\ TODO: move
: (quit) ( R: ... -- )
return_stack 100 cells + 'RP !
0 'source-id ! tib ''source ! #tib ''#source !
postpone [
begin
refill
while
interpret state @ 0= if ." ok" cr then
repeat
bye ;
' (quit) ' quit >body cell+ !
\ TODO: s>d
\ TODO: sign
\ TODO: sm/rem
: spaces ( n -- )
0 do space loop ;
\ TODO: u.
: signbit ( -- n ) -1 1 rshift invert ;
: xor ( x y -- x^y ) 2dup nand >r r@ nand swap r> nand nand ;
: u< ( x y -- flag ) signbit xor swap signbit xor > ;
\ TODO: um/mod
: variable ( "word" -- )
create /cell allot ;
: ['] \ ( C: "word" -- )
' postpone literal ; immediate

252
samples/Forth/core.fs Normal file

@@ -0,0 +1,252 @@
: immediate lastxt @ dup c@ negate swap c! ;
: \ source nip >in ! ; immediate \ Copyright 2004, 2012 Lars Brinkhoff
: char \ ( "word" -- char )
bl-word here 1+ c@ ;
: ahead here 0 , ;
: resolve here swap ! ;
: ' bl-word here find 0branch [ ahead ] exit [ resolve ] 0 ;
: postpone-nonimmediate [ ' literal , ' compile, ] literal , ;
: create dovariable_code header, reveal ;
create postponers
' postpone-nonimmediate ,
' abort ,
' , ,
: word \ ( char "<chars>string<char>" -- caddr )
drop bl-word here ;
: postpone \ ( C: "word" -- )
bl word find 1+ cells postponers + @ execute ; immediate
: unresolved \ ( C: "word" -- orig )
postpone postpone postpone ahead ; immediate
: chars \ ( n1 -- n2 )
;
: else \ ( -- ) ( C: orig1 -- orig2 )
unresolved branch swap resolve ; immediate
: if \ ( flag -- ) ( C: -- orig )
unresolved 0branch ; immediate
: then \ ( -- ) ( C: orig -- )
resolve ; immediate
: [char] \ ( "word" -- )
char postpone literal ; immediate
: (does>) lastxt @ dodoes_code over >code ! r> swap >does ! ;
: does> postpone (does>) ; immediate
: begin \ ( -- ) ( C: -- dest )
here ; immediate
: while \ ( x -- ) ( C: dest -- orig dest )
unresolved 0branch swap ; immediate
: repeat \ ( -- ) ( C: orig dest -- )
postpone branch , resolve ; immediate
: until \ ( x -- ) ( C: dest -- )
postpone 0branch , ; immediate
: recurse lastxt @ compile, ; immediate
: pad \ ( -- addr )
here 1024 + ;
: parse \ ( char "string<char>" -- addr n )
pad >r begin
source? if <source 2dup <> else 0 0 then
while
r@ c! r> 1+ >r
repeat 2drop pad r> over - ;
: ( \ ( "string<paren>" -- )
[ char ) ] literal parse 2drop ; immediate
\ TODO: If necessary, refill and keep parsing.
: string, ( addr n -- )
here over allot align swap cmove ;
: (s") ( -- addr n ) ( R: ret1 -- ret2 )
r> dup @ swap cell+ 2dup + aligned >r swap ;
create squote 128 allot
: s" ( "string<quote>" -- addr n )
state @ if
postpone (s") [char] " parse dup , string,
else
[char] " parse >r squote r@ cmove squote r>
then ; immediate
: (abort") ( ... addr n -- ) ( R: ... -- )
cr type cr abort ;
: abort" ( ... x "string<quote>" -- ) ( R: ... -- )
postpone if postpone s" postpone (abort") postpone then ; immediate
\ ----------------------------------------------------------------------
( Core words. )
\ TODO: #
\ TODO: #>
\ TODO: #s
: and ( x y -- x&y ) nand invert ;
: * 1 2>r 0 swap begin r@ while
r> r> swap 2dup dup + 2>r and if swap over + swap then dup +
repeat r> r> 2drop drop ;
\ TODO: */mod
: +loop ( -- ) ( C: nest-sys -- )
postpone (+loop) postpone 0branch , postpone unloop ; immediate
: space bl emit ;
: ?.- dup 0 < if [char] - emit negate then ;
: digit [char] 0 + emit ;
: (.) base @ /mod ?dup if recurse then digit ;
: ." ( "string<quote>" -- ) postpone s" postpone type ; immediate
: . ( x -- ) ?.- (.) space ;
: postpone-number ( caddr -- )
0 0 rot count >number dup 0= if
2drop nip
postpone (literal) postpone (literal) postpone ,
postpone literal postpone ,
else
." Undefined: " type cr abort
then ;
' postpone-number postponers cell+ !
: / ( x y -- x/y ) /mod nip ;
: 0< ( n -- flag ) 0 < ;
: 1- ( n -- n-1 ) -1 + ;
: 2! ( x1 x2 addr -- ) swap over ! cell+ ! ;
: 2* ( n -- 2n ) dup + ;
\ Kernel: 2/
: 2@ ( addr -- x1 x2 ) dup cell+ @ swap @ ;
\ Kernel: 2drop
\ Kernel: 2dup
\ TODO: 2over ( x1 x2 x3 x4 -- x1 x2 x3 x4 x1 x2 )
\ 3 pick 3 pick ;
\ TODO: 2swap
\ TODO: <#
: abs ( n -- |n| )
dup 0< if negate then ;
\ TODO: accept
: c, ( n -- )
here c! 1 chars allot ;
: char+ ( n1 -- n2 )
1+ ;
: constant create , does> @ ;
: decimal ( -- )
10 base ! ;
: depth ( -- n )
data_stack 100 cells + 'SP @ - /cell / 2 - ;
: do ( n1 n2 -- ) ( R: -- loop-sys ) ( C: -- do-sys )
postpone 2>r here ; immediate
\ TODO: environment?
\ TODO: evaluate
\ TODO: fill
\ TODO: fm/mod )
\ TODO: hold
: j ( -- x1 ) ( R: x1 x2 x3 -- x1 x2 x3 )
'RP @ 3 cells + @ ;
\ TODO: leave
: loop ( -- ) ( C: nest-sys -- )
postpone 1 postpone (+loop)
postpone 0branch ,
postpone unloop ; immediate
: lshift begin ?dup while 1- swap dup + swap repeat ;
: rshift 1 begin over while dup + swap 1- swap repeat nip
2>r 0 1 begin r@ while
r> r> 2dup swap dup + 2>r and if swap over + swap then dup +
repeat r> r> 2drop drop ;
: max ( x y -- max[x,y] )
2dup > if drop else nip then ;
\ Kernel: min
\ TODO: mod
\ TODO: move
: (quit) ( R: ... -- )
return_stack 100 cells + 'RP !
0 'source-id ! tib ''source ! #tib ''#source !
postpone [
begin
refill
while
interpret state @ 0= if ." ok" cr then
repeat
bye ;
' (quit) ' quit >body cell+ !
\ TODO: s>d
\ TODO: sign
\ TODO: sm/rem
: spaces ( n -- )
0 do space loop ;
\ TODO: u.
: signbit ( -- n ) -1 1 rshift invert ;
: xor ( x y -- x^y ) 2dup nand >r r@ nand swap r> nand nand ;
: u< ( x y -- flag ) signbit xor swap signbit xor > ;
\ TODO: um/mod
: variable ( "word" -- )
create /cell allot ;
: ['] \ ( C: "word" -- )
' postpone literal ; immediate

252
samples/Forth/core1.F Normal file

@@ -0,0 +1,252 @@
: immediate lastxt @ dup c@ negate swap c! ;
: \ source nip >in ! ; immediate \ Copyright 2004, 2012 Lars Brinkhoff
: char \ ( "word" -- char )
bl-word here 1+ c@ ;
: ahead here 0 , ;
: resolve here swap ! ;
: ' bl-word here find 0branch [ ahead ] exit [ resolve ] 0 ;
: postpone-nonimmediate [ ' literal , ' compile, ] literal , ;
: create dovariable_code header, reveal ;
create postponers
' postpone-nonimmediate ,
' abort ,
' , ,
: word \ ( char "<chars>string<char>" -- caddr )
drop bl-word here ;
: postpone \ ( C: "word" -- )
bl word find 1+ cells postponers + @ execute ; immediate
: unresolved \ ( C: "word" -- orig )
postpone postpone postpone ahead ; immediate
: chars \ ( n1 -- n2 )
;
: else \ ( -- ) ( C: orig1 -- orig2 )
unresolved branch swap resolve ; immediate
: if \ ( flag -- ) ( C: -- orig )
unresolved 0branch ; immediate
: then \ ( -- ) ( C: orig -- )
resolve ; immediate
: [char] \ ( "word" -- )
char postpone literal ; immediate
: (does>) lastxt @ dodoes_code over >code ! r> swap >does ! ;
: does> postpone (does>) ; immediate
: begin \ ( -- ) ( C: -- dest )
here ; immediate
: while \ ( x -- ) ( C: dest -- orig dest )
unresolved 0branch swap ; immediate
: repeat \ ( -- ) ( C: orig dest -- )
postpone branch , resolve ; immediate
: until \ ( x -- ) ( C: dest -- )
postpone 0branch , ; immediate
: recurse lastxt @ compile, ; immediate
: pad \ ( -- addr )
here 1024 + ;
: parse \ ( char "string<char>" -- addr n )
pad >r begin
source? if <source 2dup <> else 0 0 then
while
r@ c! r> 1+ >r
repeat 2drop pad r> over - ;
: ( \ ( "string<paren>" -- )
[ char ) ] literal parse 2drop ; immediate
\ TODO: If necessary, refill and keep parsing.
: string, ( addr n -- )
here over allot align swap cmove ;
: (s") ( -- addr n ) ( R: ret1 -- ret2 )
r> dup @ swap cell+ 2dup + aligned >r swap ;
create squote 128 allot
: s" ( "string<quote>" -- addr n )
state @ if
postpone (s") [char] " parse dup , string,
else
[char] " parse >r squote r@ cmove squote r>
then ; immediate
: (abort") ( ... addr n -- ) ( R: ... -- )
cr type cr abort ;
: abort" ( ... x "string<quote>" -- ) ( R: ... -- )
postpone if postpone s" postpone (abort") postpone then ; immediate
\ ----------------------------------------------------------------------
( Core words. )
\ TODO: #
\ TODO: #>
\ TODO: #s
: and ( x y -- x&y ) nand invert ;
: * 1 2>r 0 swap begin r@ while
r> r> swap 2dup dup + 2>r and if swap over + swap then dup +
repeat r> r> 2drop drop ;
\ TODO: */mod
: +loop ( -- ) ( C: nest-sys -- )
postpone (+loop) postpone 0branch , postpone unloop ; immediate
: space bl emit ;
: ?.- dup 0 < if [char] - emit negate then ;
: digit [char] 0 + emit ;
: (.) base @ /mod ?dup if recurse then digit ;
: ." ( "string<quote>" -- ) postpone s" postpone type ; immediate
: . ( x -- ) ?.- (.) space ;
: postpone-number ( caddr -- )
0 0 rot count >number dup 0= if
2drop nip
postpone (literal) postpone (literal) postpone ,
postpone literal postpone ,
else
." Undefined: " type cr abort
then ;
' postpone-number postponers cell+ !
: / ( x y -- x/y ) /mod nip ;
: 0< ( n -- flag ) 0 < ;
: 1- ( n -- n-1 ) -1 + ;
: 2! ( x1 x2 addr -- ) swap over ! cell+ ! ;
: 2* ( n -- 2n ) dup + ;
\ Kernel: 2/
: 2@ ( addr -- x1 x2 ) dup cell+ @ swap @ ;
\ Kernel: 2drop
\ Kernel: 2dup
\ TODO: 2over ( x1 x2 x3 x4 -- x1 x2 x3 x4 x1 x2 )
\ 3 pick 3 pick ;
\ TODO: 2swap
\ TODO: <#
: abs ( n -- |n| )
dup 0< if negate then ;
\ TODO: accept
: c, ( n -- )
here c! 1 chars allot ;
: char+ ( n1 -- n2 )
1+ ;
: constant create , does> @ ;
: decimal ( -- )
10 base ! ;
: depth ( -- n )
data_stack 100 cells + 'SP @ - /cell / 2 - ;
: do ( n1 n2 -- ) ( R: -- loop-sys ) ( C: -- do-sys )
postpone 2>r here ; immediate
\ TODO: environment?
\ TODO: evaluate
\ TODO: fill
\ TODO: fm/mod )
\ TODO: hold
: j ( -- x1 ) ( R: x1 x2 x3 -- x1 x2 x3 )
'RP @ 3 cells + @ ;
\ TODO: leave
: loop ( -- ) ( C: nest-sys -- )
postpone 1 postpone (+loop)
postpone 0branch ,
postpone unloop ; immediate
: lshift begin ?dup while 1- swap dup + swap repeat ;
: rshift 1 begin over while dup + swap 1- swap repeat nip
2>r 0 1 begin r@ while
r> r> 2dup swap dup + 2>r and if swap over + swap then dup +
repeat r> r> 2drop drop ;
: max ( x y -- max[x,y] )
2dup > if drop else nip then ;
\ Kernel: min
\ TODO: mod
\ TODO: move
: (quit) ( R: ... -- )
return_stack 100 cells + 'RP !
0 'source-id ! tib ''source ! #tib ''#source !
postpone [
begin
refill
while
interpret state @ 0= if ." ok" cr then
repeat
bye ;
' (quit) ' quit >body cell+ !
\ TODO: s>d
\ TODO: sign
\ TODO: sm/rem
: spaces ( n -- )
0 do space loop ;
\ TODO: u.
: signbit ( -- n ) -1 1 rshift invert ;
: xor ( x y -- x^y ) 2dup nand >r r@ nand swap r> nand nand ;
: u< ( x y -- flag ) signbit xor swap signbit xor > ;
\ TODO: um/mod
: variable ( "word" -- )
create /cell allot ;
: ['] \ ( C: "word" -- )
' postpone literal ; immediate

48
samples/GLSL/recurse1.fs Normal file

@@ -0,0 +1,48 @@
#version 330 core
// cross-unit recursion
void main() {}
// two-level recursion
float cbar(int);
void cfoo(float)
{
cbar(2);
}
// four-level, out of order
void CB();
void CD();
void CA() { CB(); }
void CC() { CD(); }
// high degree
void CBT();
void CDT();
void CAT() { CBT(); CBT(); CBT(); }
void CCT() { CDT(); CDT(); CBT(); }
// not recursive
void norA() {}
void norB() { norA(); }
void norC() { norA(); }
void norD() { norA(); }
void norE() { norB(); }
void norF() { norB(); }
void norG() { norE(); }
void norH() { norE(); }
void norI() { norE(); }
// not recursive, but with a call leading into a cycle if ignoring direction
void norcA() { }
void norcB() { norcA(); }
void norcC() { norcB(); }
void norcD() { norcC(); norcB(); } // head of cycle
void norcE() { norcD(); } // lead into cycle

238
samples/Gosu/Ronin.gs Normal file

@@ -0,0 +1,238 @@
/**
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
*/
package ronin
uses gw.util.concurrent.LockingLazyVar
uses gw.lang.reflect.*
uses java.lang.*
uses java.io.*
uses ronin.config.*
uses org.slf4j.*
/**
* The central location for Ronin utility methods. Controllers and templates should generally access the
* methods and properties they inherit from {@link ronin.IRoninUtils} instead of using the methods and
* properties here.
*/
class Ronin {
// One static field to rule the all...
static var _CONFIG : IRoninConfig as Config
// And one thread local to bind them
static var _CURRENT_REQUEST = new ThreadLocal<RoninRequest>();
// That's inconstructable
private construct() {}
internal static function init(servlet : RoninServlet, m : ApplicationMode, src : File) {
if(_CONFIG != null) {
throw "Cannot initialize a Ronin application multiple times!"
}
var cfg = TypeSystem.getByFullNameIfValid("config.RoninConfig")
var defaultWarning = false
if(cfg != null) {
var ctor = cfg.TypeInfo.getConstructor({ronin.config.ApplicationMode, ronin.RoninServlet})
if(ctor == null) {
throw "config.RoninConfig must have a constructor with the same signature as ronin.config.RoninConfig"
}
_CONFIG = ctor.Constructor.newInstance({m, servlet}) as IRoninConfig
} else {
_CONFIG = new DefaultRoninConfig(m, servlet)
defaultWarning = true
}
var roninLogger = TypeSystem.getByFullNameIfValid("ronin.RoninLoggerFactory")
if(roninLogger != null) {
roninLogger.TypeInfo.getMethod("init", {ronin.config.LogLevel}).CallHandler.handleCall(null, {LogLevel})
}
if(defaultWarning) {
log("No configuration was found at config.RoninConfig, using the default configuration...", :level=WARN)
}
Quartz.maybeStart()
ReloadManager.setSourceRoot(src)
}
internal static property set CurrentRequest(req : RoninRequest) {
_CURRENT_REQUEST.set(req)
}
//============================================
// Public API
//============================================
/**
* The trace handler for the current request.
*/
static property get CurrentTrace() : Trace {
return CurrentRequest?.Trace
}
/**
* Ronin's representation of the current request.
*/
static property get CurrentRequest() : RoninRequest {
return _CURRENT_REQUEST.get()
}
/**
* The mode in which this application is running.
*/
static property get Mode() : ApplicationMode {
return _CONFIG?.Mode ?: TESTING
}
/**
* The log level at and above which log messages should be displayed.
*/
static property get LogLevel() : LogLevel {
return _CONFIG?.LogLevel ?: DEBUG
}
/**
* Whether or not to display detailed trace information on each request.
*/
static property get TraceEnabled() : boolean {
return _CONFIG != null ? _CONFIG.TraceEnabled : true
}
/**
* The default controller method to call when no method name is present in the request URL.
*/
static property get DefaultAction() : String {
return _CONFIG?.DefaultAction
}
/**
* The default controller to call when no controller name is present in the request URL.
*/
static property get DefaultController() : Type {
return _CONFIG?.DefaultController
}
/**
* The servlet responsible for handling Ronin requests.
*/
static property get RoninServlet() : RoninServlet {
return _CONFIG?.RoninServlet
}
/**
* The handler for request processing errors.
*/
static property get ErrorHandler() : IErrorHandler {
return _CONFIG?.ErrorHandler
}
/**
* The custom handler for logging messages.
*/
static property get LogHandler() : ILogHandler {
return _CONFIG?.LogHandler
}
/**
* Logs a message using the configured log handler.
* @param msg The text of the message to log, or a block which returns said text.
* @param level (Optional) The level at which to log the message.
* @param component (Optional) The logical component from whence the message originated.
* @param exception (Optional) An exception to associate with the message.
*/
static function log(msg : Object, level : LogLevel = null, component : String = null, exception : java.lang.Throwable = null) {
if(level == null) {
level = INFO
}
if(LogLevel <= level) {
var msgStr : String
if(msg typeis block():String) {
msgStr = (msg as block():String)()
} else {
msgStr = msg as String
}
if(_CONFIG?.LogHandler != null) {
_CONFIG.LogHandler.log(msgStr, level, component, exception)
} else {
switch(level) {
case TRACE:
LoggerFactory.getLogger(component?:Logger.ROOT_LOGGER_NAME).trace(msgStr, exception)
break
case DEBUG:
LoggerFactory.getLogger(component?:Logger.ROOT_LOGGER_NAME).debug(msgStr, exception)
break
case INFO:
LoggerFactory.getLogger(component?:Logger.ROOT_LOGGER_NAME).info(msgStr, exception)
break
case WARN:
LoggerFactory.getLogger(component?:Logger.ROOT_LOGGER_NAME).warn(msgStr, exception)
break
case ERROR:
case FATAL:
LoggerFactory.getLogger(component?:Logger.ROOT_LOGGER_NAME).error(msgStr, exception)
break
}
}
}
}
/**
* The caches known to Ronin.
*/
static enum CacheStore {
REQUEST,
SESSION,
APPLICATION
}
/**
* Retrieves a value from a cache, or computes and stores it if it is not in the cache.
* @param value A block which will compute the desired value.
* @param name (Optional) A unique identifier for the value. Default is null, which means one will be
* generated from the type of the value.
* @param store (Optional) The cache store used to retrieve or store the value. Default is the request cache.
* @return The retrieved or computed value.
*/
static function cache<T>(value : block():T, name : String = null, store : CacheStore = null) : T {
if(store == null or store == REQUEST) {
return _CONFIG.RequestCache.getValue(value, name)
} else if (store == SESSION) {
return _CONFIG.SessionCache.getValue(value, name)
} else if (store == APPLICATION) {
return _CONFIG.ApplicationCache.getValue(value, name)
} else {
throw "Don't know about CacheStore ${store}"
}
}
/**
* Invalidates a cached value in a cache.
* @param name The unique identifier for the value.
* @param store The cache store in which to invalidate the value.
*/
static function invalidate<T>(name : String, store : CacheStore) {
if(store == null or store == REQUEST) {
_CONFIG.RequestCache.invalidate(name)
} else if (store == SESSION) {
_CONFIG.SessionCache.invalidate(name)
} else if (store == APPLICATION) {
_CONFIG.ApplicationCache.invalidate(name)
} else {
throw "Don't know about CacheStore ${store}"
}
}
/**
* Detects changes made to resources in the Ronin application and
* reloads them. This function should only be called when Ronin is
* in development mode.
*/
static function loadChanges() {
ReloadManager.detectAndReloadChangedResources()
}
}
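The cache() and invalidate() functions above document a retrieve-or-compute contract keyed by an optional name and one of three stores. Below is a minimal Python sketch of that contract only; the store names mirror the CacheStore enum, while the function names and the fallback key are hypothetical and not Ronin's implementation.

# Retrieve-or-compute cache keyed by (store, name): a sketch of the contract
# documented for cache()/invalidate() above, not Ronin's implementation.
_STORES = {"REQUEST": {}, "SESSION": {}, "APPLICATION": {}}

def cache(compute, name=None, store="REQUEST"):
    # Fall back to the block's name when no name is given; the real API
    # derives the default key from the type of the computed value.
    key = name or getattr(compute, "__name__", "anonymous")
    bucket = _STORES[store]
    if key not in bucket:
        bucket[key] = compute()  # computed once, then served from the store
    return bucket[key]

def invalidate(name, store="REQUEST"):
    _STORES[store].pop(name, None)

# The block only runs on the first call for a given key and store.
total = cache(lambda: sum(range(10**6)), name="big-sum", store="APPLICATION")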


@@ -0,0 +1,18 @@
apply plugin: GreetingPlugin
greeting.message = 'Hi from Gradle'
class GreetingPlugin implements Plugin<Project> {
void apply(Project project) {
// Add the 'greeting' extension object
project.extensions.create("greeting", GreetingPluginExtension)
// Add a task that uses the configuration
project.task('hello') << {
println project.greeting.message
}
}
}
class GreetingPluginExtension {
def String message = 'Hello from GreetingPlugin'
}


@@ -0,0 +1,20 @@
apply plugin: GreetingPlugin
greeting {
message = 'Hi'
greeter = 'Gradle'
}
class GreetingPlugin implements Plugin<Project> {
void apply(Project project) {
project.extensions.create("greeting", GreetingPluginExtension)
project.task('hello') << {
println "${project.greeting.message} from ${project.greeting.greeter}"
}
}
}
class GreetingPluginExtension {
String message
String greeter
}


@@ -0,0 +1,50 @@
/*
Huffman Tree DOT graph.
DOT Reference : http://www.graphviz.org/doc/info/lang.html
http://en.wikipedia.org/wiki/DOT_language
Timestamp : 1415989074
Phrase : 'OH GOD WHY IS LINGUIST SO ANAL ABOUT THIS STUFF'
Generated on http://huffman.ooz.ie/
*/
digraph G {
edge [label=0];
graph [ranksep=0];
T [shape=record, label="{{T|4}|000}"];
S [shape=record, label="{{S|5}|001}"];
SPACE [shape=record, label="{{SPACE|9}|01}"];
A [shape=record, label="{{A|3}|1000}"];
H [shape=record, label="{{H|3}|1001}"];
U [shape=record, label="{{U|3}|1010}"];
L [shape=record, label="{{L|2}|10110}"];
N [shape=record, label="{{N|2}|10111}"];
I [shape=record, label="{{I|4}|1100}"];
O [shape=record, label="{{O|4}|1101}"];
G [shape=record, label="{{G|2}|11100}"];
F [shape=record, label="{{F|2}|11101}"];
GF [label=4];
W [shape=record, label="{{W|1}|111100}"];
Y [shape=record, label="{{Y|1}|111101}"];
B [shape=record, label="{{B|1}|111110}"];
D [shape=record, label="{{D|1}|111111}"];
BD [label=2];
WYBD [label=4];
GFWYBD [label=8];
47 -> 18 -> 9 -> T;
29 -> 13 -> 6 -> A;
7 -> U;
4 -> L;
16 -> 8 -> I;
GFWYBD -> GF -> G;
WYBD -> 2 -> W;
BD -> B;
9 -> S [label=1];
18 -> SPACE [label=1];
6 -> H [label=1];
13 -> 7 -> 4 -> N [label=1];
8 -> O [label=1];
GF -> F [label=1];
2 -> Y [label=1];
47 -> 29 -> 16 -> GFWYBD -> WYBD -> BD -> D [label=1];
}
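Each record node in the graph above carries symbol|frequency plus the bit string assigned to that symbol. As a companion, here is a minimal Python sketch of how such prefix codes are derived from character frequencies; tie-breaking, and therefore the exact codes, may differ from the generator credited in the header comment.

import heapq
from collections import Counter

def huffman_codes(text):
    # Start with one single-entry code table per distinct character; the
    # tiebreak counter keeps equal frequencies from comparing the dicts.
    heap = [(freq, i, {ch: ""}) for i, (ch, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Prepend the new edge bit: 0 for the left subtree, 1 for the right.
        merged = {ch: "0" + code for ch, code in left.items()}
        merged.update({ch: "1" + code for ch, code in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2] if heap else {}

print(huffman_codes("OH GOD WHY IS LINGUIST SO ANAL ABOUT THIS STUFF"))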


@@ -0,0 +1,74 @@
/*
Huffman Tree DOT graph.
DOT Reference : http://www.graphviz.org/doc/info/lang.html
http://en.wikipedia.org/wiki/DOT_language
Timestamp : 1415988139
Phrase : 'SERIAL KILLER AND SEX OFFENDER ANGUS SINCLAIR IS JAILED FOR A MINIMUM OF 37 YEARS FOR THE 1977 WORLDS END MURDERS OF HELEN SCOTT AND CHRISTINE EADIE.'
Generated on http://huffman.ooz.ie/
*/
digraph G {
edge [label=0];
graph [ranksep=0];
node [shape=record];
U [label="{{U|3}|00000}"];
G [label="{{G|1}|0000100}"];
K [label="{{K|1}|0000101}"];
_3 [label="{{3|1}|0000110}"];
_9 [label="{{9|1}|0000111}"];
_39 [label=2];
L [label="{{L|7}|0001}"];
O [label="{{O|7}|0010}"];
Y [label="{{Y|1}|0011000}"];
X [label="{{X|1}|0011001}"];
YX [label=2];
J [label="{{J|1}|0011010}"];
W [label="{{W|1}|0011011}"];
JW [label=2];
YXJW [label=4];
M [label="{{M|4}|00111}"];
E [label="{{E|15}|010}"];
D [label="{{D|8}|0110}"];
T [label="{{T|4}|01110}"];
DOT [label="{{DOT|1}|0111100}"];
_1 [label="{{1|1}|0111101}"];
DOT1 [label=2];
_7 [label="{{7|3}|011111}"];
A [label="{{A|9}|1000}"];
N [label="{{N|9}|1001}"];
S [label="{{S|10}|1010}"];
I [label="{{I|11}|1011}"];
R [label="{{R|11}|1100}"];
C [label="{{C|3}|110100}"];
H [label="{{H|3}|110101}"];
F [label="{{F|6}|11011}"];
SPACE [label="{{SPACE|26}|111}"];
149 -> 61 -> 29 -> 14 -> 7 -> U;
4 -> 2 -> G;
_39 -> _3;
15 -> O;
8 -> YXJW -> YX -> Y;
JW -> J;
32 -> E;
17 -> D;
9 -> T;
5 -> DOT1 -> DOT;
88 -> 39 -> 18 -> A;
21 -> S;
49 -> 23 -> R;
12 -> 6 -> C;
2 -> K [label=1];
7 -> 4 -> _39 -> _9 [label=1];
14 -> L [label=1];
YX -> X [label=1];
YXJW -> JW -> W [label=1];
29 -> 15 -> 8 -> M [label=1];
DOT1 -> _1 [label=1];
61 -> 32 -> 17 -> 9 -> 5 -> _7 [label=1];
18 -> N [label=1];
39 -> 21 -> I [label=1];
6 -> H [label=1];
23 -> 12 -> F [label=1];
149 -> 88 -> 49 -> SPACE [label=1];
}

samples/HTML/example.xht (new file, 17 lines)

@@ -0,0 +1,17 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>This is a XHTML sample file</title>
<style type="text/css"><![CDATA[
#example {
background-color: yellow;
}
]]></style>
</head>
<body>
<div id="example">
Just a simple <strong>XHTML</strong> test page.
</div>
</body>
</html>


@@ -0,0 +1,78 @@
/*
License
Copyright [2013] [Farruco Sanjurjo Arcay]
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
*/
var TagsTotalPerMonth;
TagsTotalPerMonth = (function(){
function TagsTotalPerMonth(){};
TagsTotalPerMonth.getDatasource = function (category, months, values){
return new CategoryMonthlyExpenseBarChartDataSource(category, months, values);
};
TagsTotalPerMonth.getType = function (){ return Charts.ChartType.COLUMN};
return TagsTotalPerMonth;
})();
var TagsTotalPerMonthWithMean;
TagsTotalPerMonthWithMean = (function(){
function TagsTotalPerMonthWithMean(){};
TagsTotalPerMonthWithMean.getDatasource = function (category, months, values){
return new CategoryMonthlyWithMeanExpenseDataSource(category, months, values);
};
TagsTotalPerMonthWithMean.getType = function (){ return Charts.ChartType.LINE};
return TagsTotalPerMonthWithMean;
})();
var TagsAccumulatedPerMonth;
TagsAccumulatedPerMonth = (function(){
function TagsAccumulatedPerMonth(){};
TagsAccumulatedPerMonth.getDatasource = function (category, months, values){
return new CategoryMonthlyAccumulated(category, months, values);
};
TagsAccumulatedPerMonth.getType = function (){ return Charts.ChartType.AREA};
return TagsAccumulatedPerMonth;
})();
var MonthTotalsPerTags;
MonthTotalsPerTags = (function(){
function MonthTotalsPerTags(){};
MonthTotalsPerTags.getDatasource = function (month, tags, values){
return new CategoryExpenseDataSource(tags, month, values);
};
MonthTotalsPerTags.getType = function (){ return Charts.ChartType.PIE; };
return MonthTotalsPerTags;
})();
var SavingsFlowChartComposer = (function(){
function SavingsFlowChartComposer(){};
SavingsFlowChartComposer.getDatasource = function(months, values){
return new SavingsFlowDataSource(months, values);
};
SavingsFlowChartComposer.getType = function(){ return Charts.ChartType.COLUMN; };
return SavingsFlowChartComposer;
})();


@@ -1,3 +0,0 @@
(function() {
}).call(this);

samples/JavaScript/itau.gs (new file, 150 lines)

@@ -0,0 +1,150 @@
/*
The MIT License (MIT)
Copyright (c) 2014 Thiago Brandão Damasceno
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
// based on http://ctrlq.org/code/19053-send-to-google-drive
function sendToGoogleDrive() {
var gmailLabels = 'inbox';
var driveFolder = 'Itaú Notifications';
var spreadsheetName = 'itau';
var archiveLabel = 'itau.processed';
var itauNotificationEmail = 'comunicacaodigital@itau-unibanco.com.br';
var filter = "from: " +
itauNotificationEmail +
" -label:" +
archiveLabel +
" label:" +
gmailLabels;
// Create label for 'itau.processed' if it doesn't exist
var moveToLabel = GmailApp.getUserLabelByName(archiveLabel);
if (!moveToLabel) {
moveToLabel = GmailApp.createLabel(archiveLabel);
}
// Create folder 'Itaú Notifications' if it doesn't exist
var folders = DriveApp.getFoldersByName(driveFolder);
var folder;
if (folders.hasNext()) {
folder = folders.next();
} else {
folder = DriveApp.createFolder(driveFolder);
}
// Create spreadsheet file 'itau' if it doesn't exist
var files = folder.getFilesByName(spreadsheetName);
// File is in DriveApp
// Doc is in SpreadsheetApp
// They are not interchangeable
var file, doc;
// Confusing :\
// As per: https://code.google.com/p/google-apps-script-issues/issues/detail?id=3578
if (files.hasNext()){
file = files.next();
doc = SpreadsheetApp.openById(file.getId());
} else {
doc = SpreadsheetApp.create(spreadsheetName);
file = DriveApp.getFileById(doc.getId());
folder.addFile(file);
DriveApp.removeFile(file);
}
var sheet = doc.getSheets()[0];
// Append header if first line
if(sheet.getLastRow() == 0){
sheet.appendRow(['Conta', 'Operação', 'Valor', 'Data', 'Hora', 'Email ID']);
}
var message, messages, account, operation, value, date, hour, emailID, plainBody;
var accountRegex = /Conta: (XXX[0-9\-]+)/;
var operationRegex = /Tipo de operação: ([A-Z]+)/;
var paymentRegex = /Pagamento de ([0-9A-Za-z\-]+)\ ?([0-9]+)?/;
var valueRegex = /Valor: R\$ ([0-9\,\.]+)/;
var dateRegex = /Data: ([0-9\/]+)/;
var hourRegex = /Hora: ([0-9\:]+)/;
var emailIDRegex = /E-mail nº ([0-9]+)/;
var threads = GmailApp.search(filter, 0, 100);
for (var x = 0; x < threads.length; x++) {
messages = threads[x].getMessages();
for (var i = 0; i < messages.length; i++) {
account = operation = value = date = hour = emailID = null; // reset all parsed fields for each message
message = messages[i];
plainBody = message.getPlainBody();
if(accountRegex.test(plainBody)) {
account = RegExp.$1;
}
if(operationRegex.test(plainBody)) {
operation = RegExp.$1;
}
if(valueRegex.test(plainBody)) {
value = RegExp.$1;
}
if(dateRegex.test(plainBody)) {
date = RegExp.$1;
}
if(hourRegex.test(plainBody)) {
hour = RegExp.$1;
}
if(emailIDRegex.test(plainBody)){
emailID = RegExp.$1;
}
if(paymentRegex.test(plainBody)){
operation = RegExp.$1;
if(RegExp.$2){
operation += ' ' + RegExp.$2
}
date = hour = ' - ';
}
if(account && operation && value && date && hour){
sheet.appendRow([account, operation, value, date, hour, emailID]);
}
// Logger.log(account);
// Logger.log(operation);
// Logger.log(value);
// Logger.log(date);
// Logger.log(hour);
}
threads[x].addLabel(moveToLabel);
}
}
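The loop above extracts each field with one regular expression per field and reads the captures back through the global RegExp.$1 state. Here is a minimal Python sketch of the same extraction step using match objects instead of globals; the field names and patterns mirror the script, and the sample body is invented for illustration.

import re

# One pattern per field, mirroring the regexes defined in the script above.
FIELD_PATTERNS = {
    "account": re.compile(r"Conta: (XXX[0-9\-]+)"),
    "operation": re.compile(r"Tipo de operação: ([A-Z]+)"),
    "value": re.compile(r"Valor: R\$ ([0-9,.]+)"),
    "date": re.compile(r"Data: ([0-9/]+)"),
    "hour": re.compile(r"Hora: ([0-9:]+)"),
    "email_id": re.compile(r"E-mail nº ([0-9]+)"),
}

def extract_fields(plain_body):
    # Missing fields are simply absent from the result instead of left stale.
    row = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(plain_body)
        if match:
            row[name] = match.group(1)
    return row

# Hypothetical message body, shaped like the notifications the script parses.
body = "Conta: XXX123-4\nTipo de operação: TED\nValor: R$ 1.234,56\nData: 01/12/2014\nHora: 10:30\nE-mail nº 42"
print(extract_fields(body))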


@@ -0,0 +1,38 @@
package
{
import loom.Application;
import loom2d.display.StageScaleMode;
import loom2d.ui.SimpleLabel;
/**
The HelloWorld app renders a label with its name on it,
and traces 'hello' to the log.
*/
public class HelloWorld extends Application
{
override public function run():void
{
stage.scaleMode = StageScaleMode.LETTERBOX;
centeredMessage(simpleLabel, this.getFullTypeName());
trace("hello");
}
// a convenience getter that generates a label and adds it to the stage
private function get simpleLabel():SimpleLabel
{
return stage.addChild(new SimpleLabel("assets/Curse-hd.fnt")) as SimpleLabel;
}
// a utility to set the label's text and then center it on the stage
private function centeredMessage(label:SimpleLabel, msg:String):void
{
label.text = msg;
label.center();
label.x = stage.stageWidth / 2;
label.y = (stage.stageHeight / 2) - (label.height / 2);
}
}
}


@@ -0,0 +1,137 @@
package
{
import loom.Application;
public interface I {}
public class C {}
public class B extends C implements I {}
final public class A extends B {}
delegate ToCompute(s:String, o:Object):Number;
public enum Enumeration
{
foo,
baz,
cat,
}
struct P {
public var x:Number = 0;
public var y:Number = 0;
public static operator function =(a:P, b:P):P
{
a.x = b.x;
a.y = b.y;
return a;
}
}
// single-line comment
/*
Multi-line comment
*/
/**
Doc comment
*/
public class SyntaxExercise extends Application
{
static public var classVar:String = 'class variable';
public const CONST:String = 'constant';
private var _a:A = new A();
public var _d:ToCompute;
override public function run():void
{
trace("hello");
}
private function get a():A { return _a; }
private function set a(value:A):void { _a = value; }
private function variousTypes(defaultValue:String = ''):void
{
var nil:Object = null;
var b1:Boolean = true;
var b2:Boolean = false;
var n1:Number = 0.123;
var n2:Number = 12345;
var n3:Number = 0xfed;
var s1:String = 'single-quotes with "quotes" inside';
var s2:String = "double-quotes with 'quotes' inside";
var f1:Function = function (life:String, universe:Object, ...everything):Number { return 42; };
var v1:Vector.<Number> = [1, 2];
var d1:Dictionary.<String, Number> = { 'three': 3, 'four': 4 };
_d += f1;
_d -= f1;
}
private function variousOps():void
{
var a = ((100 + 200 - 0) / 300) % 2;
var b = 100 * 30;
var d = true && (b > 301);
var e = 0x10 | 0x01;
b++; b--;
a += 300; a -= 5; a *= 4; a /= 2; a %= 7;
var castable1:Boolean = (a is B);
var castable2:Boolean = (a as B) != null;
var cast:String = B(a).toString();
var instanced:Boolean = (_a instanceof A);
}
private function variousFlow():void
{
var n:Number = Math.random();
if (n > 0.6)
trace('top 40!');
else if(n > 0.3)
trace('mid 30!');
else
trace('bottom 30');
var flip:String = (Math.random() > 0.5) ? 'heads' : 'tails';
for (var i = 0; i < 100; i++)
trace(i);
var v:Vector.<String> = ['a', 'b', 'c'];
for each (var s:String in v)
trace(s);
var d:Dictionary.<String, Number> = { 'one': 1 };
for (var key1:String in d)
trace(key1);
for (var key2:Number in v)
trace(key2);
while (i > 0)
{
i--;
if (i == 13) continue;
trace(i);
}
do
{
i++;
}
while (i < 10);
switch (Math.floor(Math.random()) * 3 + 1)
{
case 1 : trace('rock'); break;
case 2 : trace('paper'); break;
default: trace('scissors'); break;
}
}
}
}


@@ -0,0 +1,207 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>renpengben</groupId>
<artifactId>spring4mvc-jpa</artifactId>
<packaging>war</packaging>
<version>0.0.1-SNAPSHOT</version>
<name>spring4mvc-jpa Maven Webapp</name>
<url>https://renpengben.github.io</url>
<description>spring4mvc-jpa</description>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<java.version>1.7</java.version>
<junit.version>4.11</junit.version>
<slf4j.version>1.7.7</slf4j.version>
<log4j.version>1.2.17</log4j.version>
<spring.version>4.0.5.RELEASE</spring.version>
<spring.data.jpa.version>1.6.0.RELEASE</spring.data.jpa.version>
<cglib.version>2.1_3</cglib.version>
<mysql.version>5.1.31</mysql.version>
<hibernate.version>4.3.5.Final</hibernate.version>
<hibernate-validator.version>5.1.1.Final</hibernate-validator.version>
<druid-version>1.0.6</druid-version>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>${junit.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${slf4j.version}</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>${slf4j.version}</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>${log4j.version}</version>
</dependency>
<!-- Spring -->
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>${spring.version}</version>
<exclusions>
<exclusion>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-beans</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-aop</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-expression</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-tx</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-aspects</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context-support</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-jdbc</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-orm</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-web</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-webmvc</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-test</artifactId>
<version>${spring.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-jpa</artifactId>
<version>${spring.data.jpa.version}</version>
<exclusions>
<exclusion>
<artifactId>junit-dep</artifactId>
<groupId>junit</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>cglib</groupId>
<artifactId>cglib-nodep</artifactId>
<version>${cglib.version}</version>
</dependency>
<!-- JPA -->
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-core</artifactId>
<version>${hibernate.version}</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-entitymanager</artifactId>
<version>${hibernate.version}</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-validator</artifactId>
<version>${hibernate-validator.version}</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>${mysql.version}</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid</artifactId>
<version>${druid-version}</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.0.2</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
</plugins>
</build>
</project>

samples/Oz/example.oz (new file, 52 lines)

@@ -0,0 +1,52 @@
% You can get a lot of information about Oz by following these links:
% - http://mozart.github.io/
% - http://en.wikipedia.org/wiki/Oz_(programming_language)
% There is also a well-known book that uses Oz for pedagogical reasons:
% - http://mitpress.mit.edu/books/concepts-techniques-and-models-computer-programming
% And there are two courses on edX about 'Paradigms of Computer Programming' that also use Oz for pedagogical reasons:
% - https://www.edx.org/node/2751#.VHijtfl5OSo
% - https://www.edx.org/node/4436#.VHijzfl5OSo
%
% Here is an example of some code written with Oz.
declare
% Computes the sum of the squares of the first N integers.
fun {Sum N}
local SumAux in
fun {SumAux N Acc}
if N==0 then Acc
else
{SumAux N-1 N*N+Acc}
end
end
{SumAux N 0}
end
end
% Returns true if N is a prime and false otherwise
fun {Prime N}
local PrimeAcc in
fun {PrimeAcc N Acc}
if(N == 1) then false
elseif(Acc == 1) then true
else
if (N mod Acc) == 0 then false
else
{PrimeAcc N Acc-1}
end
end
end
{PrimeAcc N (N div 2)}
end
end
% Reverse a list using cells and a for loop (instead of recursion)
fun {Reverse L}
local RevList in
RevList = {NewCell nil}
for E in L do
RevList := E|@RevList
end
@RevList
end
end


@@ -0,0 +1,97 @@
use v6;
use Test;
=begin pod
Test handling of -I.
Multiple C<-I> switches are supposed to
prepend left-to-right:
-Ifoo -Ibar
should make C<@*INC> look like:
foo
bar
...
Duplication of directories on the command line is mirrored
in the C<@*INC> variable, so C<pugs -Ilib -Ilib> will have B<two>
entries C<lib/> in C<@*INC>.
=end pod
# L<S19/Reference/"Prepend directories to">
my $fragment = '-e "@*INC.perl.say"';
my @tests = (
'foo',
'foo$bar',
'foo bar$baz',
'foo$foo',
);
plan @tests*2;
diag "Running under $*OS";
my ($pugs,$redir) = ($*EXECUTABLE_NAME, ">");
if $*OS eq any <MSWin32 mingw msys cygwin> {
$pugs = 'pugs.exe';
$redir = '>';
};
sub nonce () { return (".{$*PID}." ~ (1..1000).pick) }
sub run_pugs ($c) {
my $tempfile = "temp-ex-output" ~ nonce;
my $command = "$pugs $c $redir $tempfile";
diag $command;
run $command;
my $res = slurp $tempfile;
unlink $tempfile;
return $res;
}
for @tests -> $t {
my @dirs = split('$',$t);
my $command;
# This should be smarter about quoting
# (currently, this should work for WinNT and Unix shells)
$command = join " ", map { qq["-I$_"] }, @dirs;
my $got = run_pugs( $command ~ " $fragment" );
$got .= chomp;
if (substr($got,0,1) ~~ "[") {
# Convert from arrayref to array
$got = substr($got, 1, -1);
};
my @got = EVAL $got;
@got = @got[ 0..@dirs-1 ];
my @expected = @dirs;
is @got, @expected, "'" ~ @dirs ~ "' works";
$command = join " ", map { qq[-I "$_"] }, @dirs;
$got = run_pugs( $command ~ " $fragment" );
$got .= chomp;
if (substr($got,0,1) ~~ "[") {
# Convert from arrayref to array
$got = substr($got, 1, -1);
};
@got = EVAL $got;
@got = @got[ 0..@dirs-1 ];
@expected = @dirs;
is @got, @expected, "'" ~ @dirs ~ "' works (with a space delimiting -I)";
}
# vim: ft=perl6

samples/Perl6/01-parse.t (new file, 223 lines)

@@ -0,0 +1,223 @@
use v6;
BEGIN { @*INC.push('lib') };
use JSON::Tiny::Grammar;
use Test;
my @t =
'{}',
'{ }',
' { } ',
'{ "a" : "b" }',
'{ "a" : null }',
'{ "a" : true }',
'{ "a" : false }',
'{ "a" : { } }',
'[]',
'[ ]',
' [ ] ',
# stolen from JSON::XS, 18_json_checker.t, and adapted a bit
Q<<[
"JSON Test Pattern pass1",
{"object with 1 member":["array with 1 element"]},
{},
[]
]>>,
Q<<[1]>>,
Q<<[true]>>,
Q<<[-42]>>,
Q<<[-42,true,false,null]>>,
Q<<{ "integer": 1234567890 }>>,
Q<<{ "real": -9876.543210 }>>,
Q<<{ "e": 0.123456789e-12 }>>,
Q<<{ "E": 1.234567890E+34 }>>,
Q<<{ "": 23456789012E66 }>>,
Q<<{ "zero": 0 }>>,
Q<<{ "one": 1 }>>,
Q<<{ "space": " " }>>,
Q<<{ "quote": "\""}>>,
Q<<{ "backslash": "\\"}>>,
Q<<{ "controls": "\b\f\n\r\t"}>>,
Q<<{ "slash": "/ & \/"}>>,
Q<<{ "alpha": "abcdefghijklmnopqrstuvwyz"}>>,
Q<<{ "ALPHA": "ABCDEFGHIJKLMNOPQRSTUVWYZ"}>>,
Q<<{ "digit": "0123456789"}>>,
Q<<{ "0123456789": "digit"}>>,
Q<<{"special": "`1~!@#$%^&*()_+-={':[,]}|;.</>?"}>>,
Q<<{"hex": "\u0123\u4567\u89AB\uCDEF\uabcd\uef4A"}>>,
Q<<{"true": true}>>,
Q<<{"false": false}>>,
Q<<{"null": null}>>,
Q<<{"array":[ ]}>>,
Q<<{"object":{ }}>>,
Q<<{"address": "50 St. James Street"}>>,
Q<<{"url": "http://www.JSON.org/"}>>,
Q<<{"comment": "// /* <!-- --"}>>,
Q<<{"# -- --> */": " "}>>,
Q<<{ " s p a c e d " :[1,2 , 3
,
4 , 5 , 6 ,7 ],"compact":[1,2,3,4,5,6,7]}>>,
Q<<{"jsontext": "{\"object with 1 member\":[\"array with 1 element\"]}"}>>,
Q<<{"quotes": "&#34; \u0022 %22 0x22 034 &#x22;"}>>,
Q<<{ "\/\\\"\uCAFE\uBABE\uAB98\uFCDE\ubcda\uef4A\b\f\n\r\t`1~!@#$%^&*()_+-=[]{}|;:',./<>?"
: "A key can be any string"
}>>,
Q<<[ 0.5 ,98.6
,
99.44
,
1066,
1e1,
0.1e1
]>>,
Q<<[1e-1]>>,
Q<<[1e00,2e+00,2e-00,"rosebud"]>>,
Q<<[[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]]>>,
Q<<{
"JSON Test Pattern pass3": {
"The outermost value": "must be an object or array.",
"In this test": "It is an object."
}
}
>>,
# from http://www.json.org/example.html
Q<<{
"glossary": {
"title": "example glossary",
"GlossDiv": {
"title": "S",
"GlossList": {
"GlossEntry": {
"ID": "SGML",
"SortAs": "SGML",
"GlossTerm": "Standard Generalized Markup Language",
"Acronym": "SGML",
"Abbrev": "ISO 8879:1986",
"GlossDef": {
"para": "A meta-markup language, used to create markup languages such as DocBook.",
"GlossSeeAlso": ["GML", "XML"]
},
"GlossSee": "markup"
}
}
}
}
}
>>,
Q<<{"menu": {
"id": "file",
"value": "File",
"popup": {
"menuitem": [
{"value": "New", "onclick": "CreateNewDoc()"},
{"value": "Open", "onclick": "OpenDoc()"},
{"value": "Close", "onclick": "CloseDoc()"}
]
}
}}>>,
Q<<{"widget": {
"debug": "on",
"window": {
"title": "Sample Konfabulator Widget",
"name": "main_window",
"width": 500,
"height": 500
},
"image": {
"src": "Images/Sun.png",
"name": "sun1",
"hOffset": 250,
"vOffset": 250,
"alignment": "center"
},
"text": {
"data": "Click Here",
"size": 36,
"style": "bold",
"name": "text1",
"hOffset": 250,
"vOffset": 100,
"alignment": "center",
"onMouseUp": "sun1.opacity = (sun1.opacity / 100) * 90;"
}
}}>>,
;
my @n =
'{ ',
'{ 3 : 4 }',
'{ 3 : tru }', # not quite true
'{ "a : false }', # missing quote
# stolen from JSON::XS, 18_json_checker.t
Q<<"A JSON payload should be an object or array, not a string.">>,
Q<<{"Extra value after close": true} "misplaced quoted value">>,
Q<<{"Illegal expression": 1 + 2}>>,
Q<<{"Illegal invocation": alert()}>>,
Q<<{"Numbers cannot have leading zeroes": 013}>>,
Q<<{"Numbers cannot be hex": 0x14}>>,
Q<<["Illegal backslash escape: \x15"]>>,
Q<<[\naked]>>,
Q<<["Illegal backslash escape: \017"]>>,
# skipped: we don't implement no stinkin' artificial limits.
# Q<<[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[["Too deep"]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]>>,
Q<<{"Missing colon" null}>>,
Q<<["Unclosed array">>,
Q<<{"Double colon":: null}>>,
Q<<{"Comma instead of colon", null}>>,
Q<<["Colon instead of comma": false]>>,
Q<<["Bad value", truth]>>,
Q<<['single quote']>>,
qq<["\ttab\tcharacter in string "]>,
Q<<["line
break"]>>,
Q<<["line\
break"]>>,
Q<<[0e]>>,
Q<<{unquoted_key: "keys must be quoted"}>>,
Q<<[0e+]>>,
Q<<[0e+-1]>>,
Q<<{"Comma instead if closing brace": true,>>,
Q<<["mismatch"}>>,
Q<<["extra comma",]>>,
Q<<["double extra comma",,]>>,
Q<<[ , "<-- missing value"]>>,
Q<<["Comma after the close"],>>,
Q<<["Extra close"]]>>,
Q<<{"Extra comma": true,}>>,
;
plan (+@t) + (+@n);
my $i = 0;
for @t -> $t {
my $desc = $t;
if $desc ~~ m/\n/ {
$desc .= subst(/\n.*$/, "\\n...[$i]");
}
my $parsed = 0;
try {
JSON::Tiny::Grammar.parse($t)
and $parsed = 1;
}
ok $parsed, "JSON string «$desc» parsed";
$i++;
}
for @n -> $t {
my $desc = $t;
if $desc ~~ m/\n/ {
$desc .= subst(/\n.*$/, "\\n...[$i]");
}
my $parsed = 0;
try { JSON::Tiny::Grammar.parse($t) and $parsed = 1 };
nok $parsed, "NOT parsed «$desc»";
$i++;
}
# vim: ft=perl6
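The two lists above separate strings the grammar must accept from strings it must reject. As a rough cross-check, here is a minimal Python sketch that runs a handful of the same strings through the standard json module; the acceptance boundary is not identical (for example, a bare string is valid to json.loads), so this is illustrative only.

import json

def parses(text):
    try:
        json.loads(text)
        return True
    except ValueError:  # json.JSONDecodeError is a ValueError subclass
        return False

# A few strings copied from the @t (must parse) and @n (must not parse) lists.
must_parse = ['{}', '{ "a" : null }', '[ ]', '[1e-1]']
must_fail = ['{ 3 : 4 }', '["Unclosed array', '{"Extra comma": true,}', "['single quote']"]

assert all(parses(t) for t in must_parse)
assert not any(parses(t) for t in must_fail)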

samples/Perl6/A.pm (new file, 9 lines)

@@ -0,0 +1,9 @@
# used in t/spec/S11-modules/nested.t
BEGIN { @*INC.push('t/spec/packages') };
module A::A {
use A::B;
}
# vim: ft=perl6

samples/Perl6/ANSIColor.pm (new file, 148 lines)

@@ -0,0 +1,148 @@
use v6;
module Term::ANSIColor;
# these will be macros one day, yet macros can't be exported so far
sub RESET is export { "\e[0m" }
sub BOLD is export { "\e[1m" }
sub UNDERLINE is export { "\e[4m" }
sub INVERSE is export { "\e[7m" }
sub BOLD_OFF is export { "\e[22m" }
sub UNDERLINE_OFF is export { "\e[24m" }
sub INVERSE_OFF is export { "\e[27m" }
my %attrs =
reset => "0",
bold => "1",
underline => "4",
inverse => "7",
black => "30",
red => "31",
green => "32",
yellow => "33",
blue => "34",
magenta => "35",
cyan => "36",
white => "37",
default => "39",
on_black => "40",
on_red => "41",
on_green => "42",
on_yellow => "43",
on_blue => "44",
on_magenta => "45",
on_cyan => "46",
on_white => "47",
on_default => "49";
sub color (Str $what) is export {
my @res;
my @a = $what.split(' ');
for @a -> $attr {
if %attrs.exists($attr) {
@res.push: %attrs{$attr}
} else {
die("Invalid attribute name '$attr'")
}
}
return "\e[" ~ @res.join(';') ~ "m";
}
sub colored (Str $what, Str $how) is export {
color($how) ~ $what ~ color('reset');
}
sub colorvalid (*@a) is export {
for @a -> $el {
return False unless %attrs.exists($el)
}
return True;
}
sub colorstrip (*@a) is export {
my @res;
for @a -> $str {
@res.push: $str.subst(/\e\[ <[0..9;]>+ m/, '', :g);
}
return @res.join;
}
sub uncolor (Str $what) is export {
my @res;
my @list = $what.comb(/\d+/);
for @list -> $elem {
if %attrs.reverse.exists($elem) {
@res.push: %attrs.reverse{$elem}
} else {
die("Bad escape sequence: {'\e[' ~ $elem ~ 'm'}")
}
}
return @res.join(' ');
}
=begin pod
=head1 NAME
Term::ANSIColor - Color screen output using ANSI escape sequences
=head1 SYNOPSIS
use Term::ANSIColor;
say color('bold'), "this is in bold", color('reset');
say colored('underline red on_green', 'what a lovely colours!');
say BOLD, 'good to be fat!', BOLD_OFF;
say 'ok' if colorvalid('magenta', 'on_black', 'inverse');
say '\e[36m is ', uncolor('\e[36m');
say colorstrip("\e[1mThis is bold\e[0m");
=head1 DESCRIPTION
Term::ANSIColor provides an interface for using colored output
in terminals. The following functions are available:
=head2 C<color()>
Given a string with color names, the output produced by C<color()>
sets the terminal output so the text printed after it will be colored
as specified. The following color names are recognised:
reset bold underline inverse black red green yellow blue
magenta cyan white default on_black on_red on_green on_yellow
on_blue on_magenta on_cyan on_white on_default
The on_* family of colors correspond to the background colors.
=head2 C<colored()>
C<colored()> is similar to C<color()>. It takes two Str arguments,
where the first is the colors to be used, and the second is the string
to be colored. The C<reset> sequence is automagically placed after
the string.
=head2 C<colorvalid()>
C<colorvalid()> gets an array of color specifications (like those
passed to C<color()>) and returns true if all of them are valid,
false otherwise.
=head2 C<colorstrip()>
C<colorstrip>, given a string, removes all the escape sequences
in it, leaving the plain text without effects.
=head2 C<uncolor()>
Given escape sequences, C<uncolor()> returns a string with readable
color names. E.g. passing "\e[36;44m" will result in "cyan on_blue".
=head1 Constants
C<Term::ANSIColor> provides constants which are just strings of
appropriate escape sequences. The following constants are available:
RESET BOLD UNDERLINE INVERSE BOLD_OFF UNDERLINE_OFF INVERSE_OFF
=end pod
# vim: ft=perl6
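The POD above describes attribute names that map to numeric SGR codes, joined with semicolons between an escape introducer and a trailing m. Here is a minimal Python sketch of that mapping with a reduced attribute table; it is not the module's code, just an illustration of the sequence format.

# Reduced attribute table; ANSI SGR sequences are "\x1b[" + codes + "m".
ATTRS = {"reset": "0", "bold": "1", "underline": "4",
         "red": "31", "green": "32", "on_blue": "44"}

def color(spec):
    # Translate space-separated attribute names into one escape sequence.
    codes = []
    for name in spec.split():
        if name not in ATTRS:
            raise ValueError("Invalid attribute name '%s'" % name)
        codes.append(ATTRS[name])
    return "\x1b[" + ";".join(codes) + "m"

def colored(text, spec):
    return color(spec) + text + color("reset")

print(colored("what lovely colours!", "underline red on_blue"))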

samples/Perl6/Bailador.pm (new file, 102 lines)

@@ -0,0 +1,102 @@
use Bailador::App;
use Bailador::Request;
use Bailador::Response;
use Bailador::Context;
use HTTP::Easy::PSGI;
module Bailador;
my $app = Bailador::App.current;
our sub import {
my $file = callframe(1).file;
my $slash = $file.rindex('/');
if $slash {
$app.location = $file.substr(0, $file.rindex('/'));
} else {
$app.location = '.';
}
}
sub route_to_regex($route) {
$route.split('/').map({
my $r = $_;
if $_.substr(0, 1) eq ':' {
$r = q{(<-[\/\.]>+)};
}
$r
}).join("'/'");
}
multi parse_route(Str $route) {
my $r = route_to_regex($route);
return "/ ^ $r \$ /".eval;
}
multi parse_route($route) {
# do nothing
$route
}
sub get(Pair $x) is export {
my $p = parse_route($x.key) => $x.value;
$app.add_route: 'GET', $p;
return $x;
}
sub post(Pair $x) is export {
my $p = parse_route($x.key) => $x.value;
$app.add_route: 'POST', $p;
return $x;
}
sub request is export { $app.context.request }
sub content_type(Str $type) is export {
$app.response.headers<Content-Type> = $type;
}
sub header(Str $name, Cool $value) is export {
$app.response.headers{$name} = ~$value;
}
sub status(Int $code) is export {
$app.response.code = $code;
}
sub template(Str $tmpl, *@params) is export {
$app.template($tmpl, @params);
}
our sub dispatch_request(Bailador::Request $r) {
return dispatch($r.env);
}
sub dispatch($env) {
$app.context.env = $env;
my ($r, $match) = $app.find_route($env);
if $r {
status 200;
if $match {
$app.response.content = $r.value.(|$match.list);
} else {
$app.response.content = $r.value.();
}
}
return $app.response;
}
sub dispatch-psgi($env) {
return dispatch($env).psgi;
}
sub baile is export {
given HTTP::Easy::PSGI.new(port => 3000) {
.app(&dispatch-psgi);
say "Entering the development dance floor: http://0.0.0.0:3000";
.run;
}
}


@@ -0,0 +1,7 @@
module ContainsUnicode {
sub uc-and-join(*@things, :$separator = ', ') is export {
@things».uc.join($separator)
}
}
# vim: ft=perl6

samples/Perl6/Exception.pm (new file, 1431 lines)

File diff suppressed because it is too large.

samples/Perl6/Model.pm (new file, 146 lines)

@@ -0,0 +1,146 @@
use v6;
class Math::Model;
use Math::RungeKutta;
# TODO: only load when needed
use SVG;
use SVG::Plot;
has %.derivatives;
has %.variables;
has %.initials;
has @.captures is rw;
has %!inv = %!derivatives.invert;
# in Math::Model all variables are accessible by name
# in contrast Math::RungeKutta uses vectors, so we need
# to define an (arbitrary) ordering
# @!deriv-names holds the names of the derivatives in a fixed
# order, so @!deriv-names[$number] turns the number into a name
# %!deriv-keying{$name} translates a name into the corresponding index
has @!deriv-names = %!inv.keys;
has %!deriv-keying = @!deriv-names Z=> 0..Inf;
# snapshot of all variables in the current model
has %!current-values;
has %.results;
has @.time;
has $.numeric-error is rw = 0.0001;
my sub param-names(&c) {
&c.signature.params».name».substr(1).grep({ $_ ne '_'});
}
method !params-for(&c) {
param-names(&c).map( {; $_ => %!current-values{$_} } ).hash;
}
method topo-sort(*@vars) {
my %seen;
my @order;
sub topo(*@a) {
for @a {
next if %!inv.exists($_) || %seen{$_} || $_ eq 'time';
die "Undeclared variable '$_' used in model"
unless %.variables.exists($_);
topo(param-names(%.variables{$_}));
@order.push: $_;
%seen{$_}++;
}
}
topo(@vars);
# say @order.perl;
@order;
}
method integrate(:$from = 0, :$to = 10, :$min-resolution = ($to - $from) / 20, :$verbose) {
for %.derivatives -> $d {
die "There must be a variable defined for each derivative, missing for '$d.key()'"
unless %.variables.exists($d.key) || %!inv.exists($d.key);
die "There must be an initial value defined for each derivative target, missing for '$d.value()'"
unless %.initials.exists($d.value);
}
%!current-values = %.initials;
%!current-values<time> = $from;
my @vars-topo = self.topo-sort(%.variables.keys);
sub update-current-values($time, @values) {
%!current-values<time> = $time;
%!current-values{@!deriv-names} = @values;
for @vars-topo {
my $c = %.variables{$_};
%!current-values{$_} = $c.(|self!params-for($c));
}
}
my @initial = %.initials{@!deriv-names};
sub derivatives($time, @values) {
update-current-values($time, @values);
my @r;
for %!inv{@!deriv-names} {
my $v = %.variables{$_};
@r.push: $v.defined
?? $v(|self!params-for($v))
!! %!current-values{$_};
}
@r;
}
@!time = ();
for @.captures {
%!results{$_} = [];
}
sub record($time, @values) {
update-current-values($time, @values);
@!time.push: $time;
say $time if $verbose;
for @.captures {
%!results{$_}.push: %!current-values{$_};;
}
}
record($from, %.initials{@!deriv-names});
adaptive-rk-integrate(
:$from,
:$to,
:@initial,
:derivative(&derivatives),
:max-stepsize($min-resolution),
:do(&record),
:epsilon($.numeric-error),
);
%!results;
}
method render-svg(
$filename,
:$x-axis = 'time',
:$width = 800,
:$height = 600,
:$title = 'Model output') {
my $f = open $filename, :w
or die "Can't open file '$filename' for writing: $!";
my @values = map { %!results{$_} }, @.captures.grep({ $_ ne $x-axis});
my @x = $x-axis eq 'time' ?? @!time !! %!results{$x-axis}.flat;
my $svg = SVG::Plot.new(
:$width,
:$height,
:@x,
:@values,
:$title,
).plot(:xy-lines);
$f.say(SVG.serialize($svg));
$f.close;
say "Wrote ouput to '$filename'";
}
# vim: ft=perl6
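The topo-sort method above orders derived variables so each one is computed only after the variables it reads, using the callables' parameter names as the dependency list. Here is a minimal Python sketch of that ordering step, with an explicit dependency dict standing in for the signature introspection; cycle detection is omitted.

def topo_sort(deps):
    # deps maps a variable to the variables it reads, standing in for the
    # parameter-name introspection used above.
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in deps.get(name, ()):
            visit(dep)
        order.append(name)

    for name in deps:
        visit(name)
    return order

# Hypothetical model: 'power' reads 'voltage' and 'current'; both read 'time'.
print(topo_sort({"power": ["voltage", "current"],
                 "voltage": ["time"], "current": ["time"], "time": []}))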

samples/Perl6/Simple.pm (new file, 317 lines)

@@ -0,0 +1,317 @@
# ----------------------
# LWP::Simple for Perl 6
# ----------------------
use v6;
use MIME::Base64;
use URI;
class LWP::Simple:auth<cosimo>:ver<0.085>;
our $VERSION = '0.085';
enum RequestType <GET POST>;
has Str $.default_encoding = 'utf-8';
our $.class_default_encoding = 'utf-8';
# these were intended to be constant but that hit pre-compilation issue
my Buf $crlf = Buf.new(13, 10);
my Buf $http_header_end_marker = Buf.new(13, 10, 13, 10);
my Int constant $default_stream_read_len = 2 * 1024;
method base64encode ($user, $pass) {
my MIME::Base64 $mime .= new();
my $encoded = $mime.encode_base64($user ~ ':' ~ $pass);
return $encoded;
}
method get (Str $url) {
self.request_shell(RequestType::GET, $url)
}
method post (Str $url, %headers = {}, Any $content?) {
self.request_shell(RequestType::POST, $url, %headers, $content)
}
method request_shell (RequestType $rt, Str $url, %headers = {}, Any $content?) {
return unless $url;
my ($scheme, $hostname, $port, $path, $auth) = self.parse_url($url);
%headers{'Connection'} = 'close';
%headers{'User-Agent'} //= "LWP::Simple/$VERSION Perl6/$*PERL<compiler><name>";
if $auth {
$hostname = $auth<host>;
my $user = $auth<user>;
my $pass = $auth<password>;
my $base64enc = self.base64encode($user, $pass);
%headers<Authorization> = "Basic $base64enc";
}
%headers<Host> = $hostname;
if ($rt ~~ RequestType::POST && $content.defined) {
# Attach Content-Length header
# as recommended in RFC2616 section 14.3.
# Note: Empty content is also a content,
# header value equals to zero is valid.
%headers{'Content-Length'} = $content.encode.bytes;
}
my ($status, $resp_headers, $resp_content) =
self.make_request($rt, $hostname, $port, $path, %headers, $content);
given $status {
when / 30 <[12]> / {
my %resp_headers = $resp_headers.hash;
my $new_url = %resp_headers<Location>;
if ! $new_url {
die "Redirect $status without a new URL?";
}
# Watch out for too many redirects.
# Need to find a way to store a class member
#if $redirects++ > 10 {
# say "Too many redirects!";
# return;
#}
return self.request_shell($rt, $new_url, %headers, $content);
}
when /200/ {
# should be fancier about charset decoding application - someday
if $resp_headers<Content-Type> &&
$resp_headers<Content-Type> ~~
/ $<media-type>=[<-[/;]>+]
[ <[/]> $<media-subtype>=[<-[;]>+] ]? / &&
( $<media-type> eq 'text' ||
( $<media-type> eq 'application' &&
$<media-subtype> ~~ /[ ecma | java ]script | json/
)
)
{
my $charset =
($resp_headers<Content-Type> ~~ /charset\=(<-[;]>*)/)[0];
$charset = $charset ?? $charset.Str !!
self ?? $.default_encoding !! $.class_default_encoding;
return $resp_content.decode($charset);
}
else {
return $resp_content;
}
}
# Response failed
default {
return;
}
}
}
method parse_chunks(Blob $b is rw, IO::Socket::INET $sock) {
my Int ($line_end_pos, $chunk_len, $chunk_start) = (0) xx 3;
my Blob $content = Blob.new();
# smallest valid chunked line is 0CRLFCRLF (ascii or other 8bit like EBCDIC)
while ($line_end_pos + 5 <= $b.bytes) {
while ( $line_end_pos +4 <= $b.bytes &&
$b.subbuf($line_end_pos, 2) ne $crlf
) {
$line_end_pos++
}
# say "got here x0x pos ", $line_end_pos, ' bytes ', $b.bytes, ' start ', $chunk_start, ' some data ', $b.subbuf($chunk_start, $line_end_pos +2 - $chunk_start).decode('ascii');
if $line_end_pos +4 <= $b.bytes &&
$b.subbuf(
$chunk_start, $line_end_pos + 2 - $chunk_start
).decode('ascii') ~~ /^(<.xdigit>+)[";"|"\r\n"]/
{
# deal with case of chunk_len is 0
$chunk_len = :16($/[0].Str);
# say 'got chunk len ', $/[0].Str;
# test if at end of buf??
if $chunk_len == 0 {
# this is a "normal" exit from the routine
return True, $content;
}
# think 1CRLFxCRLF
if $line_end_pos + $chunk_len + 4 <= $b.bytes {
# say 'inner chunk';
$content ~= $b.subbuf($line_end_pos +2, $chunk_len);
$line_end_pos = $chunk_start = $line_end_pos + $chunk_len +4;
}
else {
# say 'last chunk';
# remaining chunk part len is chunk_len with CRLF
# minus the length of the chunk piece at end of buffer
my $last_chunk_end_len =
$chunk_len +2 - ($b.bytes - $line_end_pos -2);
$content ~= $b.subbuf($line_end_pos +2);
if $last_chunk_end_len > 2 {
$content ~= $sock.read($last_chunk_end_len -2);
}
# clean up CRLF after chunk
$sock.read(min($last_chunk_end_len, 2));
# this is a "normal" exit from the routine
return False, $content;
}
}
else {
# say 'extend bytes ', $b.bytes, ' start ', $chunk_start, ' data ', $b.subbuf($chunk_start).decode('ascii');
# maybe odd case of buffer has just part of header at end
$b ~= $sock.read(20);
}
}
# say join ' ', $b[0 .. 100];
# say $b.subbuf(0, 100).decode('utf-8');
die "Could not parse chunk header";
}
method make_request (
RequestType $rt, $host, $port as Int, $path, %headers, $content?
) {
my $headers = self.stringify_headers(%headers);
my IO::Socket::INET $sock .= new(:$host, :$port);
my Str $req_str = $rt.Stringy ~ " {$path} HTTP/1.1\r\n"
~ $headers
~ "\r\n";
# attach $content if given
# (string context is forced by concatenation)
$req_str ~= $content if $content.defined;
$sock.send($req_str);
my Blob $resp = $sock.read($default_stream_read_len);
my ($status, $resp_headers, $resp_content) = self.parse_response($resp);
if (($resp_headers<Transfer-Encoding> || '') eq 'chunked') {
my Bool $is_last_chunk;
my Blob $resp_content_chunk;
($is_last_chunk, $resp_content) =
self.parse_chunks($resp_content, $sock);
while (not $is_last_chunk) {
($is_last_chunk, $resp_content_chunk) =
self.parse_chunks(
my Blob $next_chunk_start = $sock.read(1024),
$sock
);
$resp_content ~= $resp_content_chunk;
}
}
elsif ( $resp_headers<Content-Length> &&
$resp_content.bytes < $resp_headers<Content-Length>
) {
$resp_content ~= $sock.read(
$resp_headers<Content-Length> - $resp_content.bytes
);
}
else { # a bit hacky for now but should be ok
while ($resp.bytes > 0) {
$resp = $sock.read($default_stream_read_len);
$resp_content ~= $resp;
}
}
$sock.close();
return ($status, $resp_headers, $resp_content);
}
method parse_response (Blob $resp) {
my %header;
my Int $header_end_pos = 0;
while ( $header_end_pos < $resp.bytes &&
$http_header_end_marker ne $resp.subbuf($header_end_pos, 4) ) {
$header_end_pos++;
}
if ($header_end_pos < $resp.bytes) {
my @header_lines = $resp.subbuf(
0, $header_end_pos
).decode('ascii').split(/\r\n/);
my Str $status_line = @header_lines.shift;
for @header_lines {
my ($name, $value) = .split(': ');
%header{$name} = $value;
}
return $status_line, %header.item, $resp.subbuf($header_end_pos +4).item;
}
die "could not parse headers";
# if %header.exists('Transfer-Encoding') && %header<Transfer-Encoding> ~~ m/:i chunked/ {
# @content = self.decode_chunked(@content);
# }
}
method getprint (Str $url) {
my $out = self.get($url);
if $out ~~ Buf { $*OUT.write($out) } else { say $out }
}
method getstore (Str $url, Str $filename) {
return unless defined $url;
my $content = self.get($url);
if ! $content {
return
}
my $fh = open($filename, :bin, :w);
if $content ~~ Buf {
$fh.write($content)
}
else {
$fh.print($content)
}
$fh.close;
}
method parse_url (Str $url) {
my URI $u .= new($url);
my $path = $u.path_query;
my $user_info = $u.grammar.parse_result<URI_reference><URI><hier_part><authority><userinfo>;
return (
$u.scheme,
$user_info ?? "{$user_info}@{$u.host}" !! $u.host,
$u.port,
$path eq '' ?? '/' !! $path,
$user_info ?? {
host => $u.host,
user => ~ $user_info<likely_userinfo_component>[0],
password => ~ $user_info<likely_userinfo_component>[1]
} !! Nil
);
}
method stringify_headers (%headers) {
my Str $str = '';
for sort %headers.keys {
$str ~= $_ ~ ': ' ~ %headers{$_} ~ "\r\n";
}
return $str;
}
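parse_chunks above scans for a hexadecimal chunk-size line terminated by CRLF, reads that many bytes of body, consumes the trailing CRLF, and stops at the zero-size chunk. Here is a minimal Python sketch of the same framing over a blocking, well-formed stream; trailers and chunk extensions beyond the size line are ignored, and none of this is the module's code.

import io

def read_chunked(reader):
    # Decode an HTTP/1.1 chunked body from a binary file-like object.
    body = b""
    while True:
        size_line = b""
        while not size_line.endswith(b"\r\n"):
            size_line += reader.read(1)
        size = int(size_line.split(b";")[0].strip(), 16)  # chunk-size [; extensions]
        if size == 0:
            break  # last-chunk; a real client would also read any trailers
        body += reader.read(size)
        reader.read(2)  # consume the CRLF that terminates every chunk
    return body

print(read_chunked(io.BytesIO(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")))  # b'Wikipedia'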

samples/Perl6/Win32.pm (new file, 207 lines)

@@ -0,0 +1,207 @@
my class IO::Spec::Win32 is IO::Spec::Unix {
# Some regexes we use for path splitting
my $slash = regex { <[\/ \\]> }
my $notslash = regex { <-[\/ \\]> }
my $driveletter = regex { <[A..Z a..z]> ':' }
my $UNCpath = regex { [<$slash> ** 2] <$notslash>+ <$slash> [<$notslash>+ | $] }
my $volume_rx = regex { <$driveletter> | <$UNCpath> }
method canonpath ($path, :$parent) {
$path eq '' ?? '' !! self!canon-cat($path, :$parent);
}
method catdir(*@dirs) {
return "" unless @dirs;
return self!canon-cat( "\\", |@dirs ) if @dirs[0] eq "";
self!canon-cat(|@dirs);
}
method splitdir($dir) { $dir.split($slash) }
method catfile(|c) { self.catdir(|c) }
method devnull { 'nul' }
method rootdir { '\\' }
method tmpdir {
first( { .defined && .IO.d && .IO.w },
%*ENV<TMPDIR>,
%*ENV<TEMP>,
%*ENV<TMP>,
'SYS:/temp',
'C:\system\temp',
'C:/temp',
'/tmp',
'/')
|| self.curdir;
}
method path {
my @path = split(';', %*ENV<PATH>);
@path».=subst(:global, q/"/, '');
@path = grep *.chars, @path;
unshift @path, ".";
return @path;
}
method is-absolute ($path) {
# As of right now, this returns 2 if the path is absolute with a
# volume, 1 if it's absolute with no volume, 0 otherwise.
given $path {
when /^ [<$driveletter> <$slash> | <$UNCpath>]/ { 2 }
when /^ <$slash> / { 1 }
default { 0 }
} #/
}
method split ($path as Str is copy) {
$path ~~ s[ <$slash>+ $] = '' #=
unless $path ~~ /^ <$driveletter>? <$slash>+ $/;
$path ~~
m/^ ( <$volume_rx> ? )
( [ .* <$slash> ]? )
(.*)
/;
my ($volume, $directory, $basename) = (~$0, ~$1, ~$2);
$directory ~~ s/ <?after .> <$slash>+ $//;
if all($directory, $basename) eq '' && $volume ne '' {
$directory = $volume ~~ /^<$driveletter>/
?? '.' !! '\\';
}
$basename = '\\' if $directory eq any('/', '\\') && $basename eq '';
$directory = '.' if $directory eq '' && $basename ne '';
return (:$volume, :$directory, :$basename);
}
method join ($volume, $directory is copy, $file is copy) {
$directory = '' if $directory eq '.' && $file.chars;
if $directory.match( /^<$slash>$/ ) && $file.match( /^<$slash>$/ ) {
$file = '';
$directory = '' if $volume.chars > 2; #i.e. UNC path
}
self.catpath($volume, $directory, $file);
}
method splitpath($path as Str, :$nofile = False) {
my ($volume,$directory,$file) = ('','','');
if ( $nofile ) {
$path ~~
/^ (<$volume_rx>?) (.*) /;
$volume = ~$0;
$directory = ~$1;
}
else {
$path ~~
m/^ ( <$volume_rx> ? )
( [ .* <$slash> [ '.' ** 1..2 $]? ]? )
(.*)
/;
$volume = ~$0;
$directory = ~$1;
$file = ~$2;
}
return ($volume,$directory,$file);
}
method catpath($volume is copy, $directory, $file) {
# Make sure the glue separator is present
# unless it's a relative path like A:foo.txt
if $volume.chars and $directory.chars
and $volume !~~ /^<$driveletter>/
and $volume !~~ /<$slash> $/
and $directory !~~ /^ <$slash>/
{ $volume ~= '\\' }
if $file.chars and $directory.chars
and $directory !~~ /<$slash> $/
{ $volume ~ $directory ~ '\\' ~ $file; }
else { $volume ~ $directory ~ $file; }
}
method rel2abs ($path is copy, $base? is copy) {
my $is_abs = self.is-absolute($path);
# Check for volume (should probably document the '2' thing...)
return self.canonpath( $path ) if $is_abs == 2;
if $is_abs {
# It's missing a volume, add one
my $vol;
$vol = self.splitpath($base)[0] if $base.defined;
$vol ||= self.splitpath($*CWD)[0];
return self.canonpath( $vol ~ $path );
}
if not defined $base {
# TODO: implement _getdcwd call ( Windows maintains separate CWD for each volume )
# See: http://msdn.microsoft.com/en-us/library/1e5zwe0c%28v=vs.80%29.aspx
#$base = Cwd::getdcwd( (self.splitpath: $path)[0] ) if defined &Cwd::getdcwd ;
#$base //= $*CWD ;
$base = $*CWD;
}
elsif ( !self.is-absolute( $base ) ) {
$base = self.rel2abs( $base );
}
else {
$base = self.canonpath( $base );
}
my ($path_directories, $path_file) = self.splitpath( $path )[1..2] ;
my ($base_volume, $base_directories) = self.splitpath( $base, :nofile ) ;
$path = self.catpath(
$base_volume,
self.catdir( $base_directories, $path_directories ),
$path_file
) ;
return self.canonpath( $path ) ;
}
method !canon-cat ( $first, *@rest, :$parent --> Str) {
$first ~~ /^ ([ <$driveletter> <$slash>?
| <$UNCpath>
| [<$slash> ** 2] <$notslash>+
| <$slash> ]?)
(.*)
/;
my Str ($volume, $path) = ~$0, ~$1;
$volume.=subst(:g, '/', '\\');
if $volume ~~ /^<$driveletter>/ {
$volume.=uc;
}
elsif $volume.chars && $volume !~~ / '\\' $/ {
$volume ~= '\\';
}
$path = join "\\", $path, @rest.flat;
$path ~~ s:g/ <$slash>+ /\\/; # /xx\\yy --> \xx\yy
$path ~~ s:g/[ ^ | '\\'] '.' '\\.'* [ '\\' | $ ]/\\/; # xx/././yy --> xx/yy
if $parent {
while $path ~~ s:g { [^ | <?after '\\'>] <!before '..\\'> <-[\\]>+ '\\..' ['\\' | $ ] } = '' { };
}
$path ~~ s/^ '\\'+ //; # \xx --> xx NOTE: this is *not* root
$path ~~ s/ '\\'+ $//; # xx\ --> xx
if $volume ~~ / '\\' $ / { # <vol>\.. --> <vol>\
$path ~~ s/ ^ '..' '\\..'* [ '\\' | $ ] //;
}
if $path eq '' { # \\HOST\SHARE\ --> \\HOST\SHARE
$volume ~~ s/<?after '\\\\' .*> '\\' $ //;
$volume || '.';
}
else {
$volume ~ $path;
}
}
}
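The split method above divides a Windows path into volume (a drive letter or a UNC \\host\share prefix), directory, and basename, treating both slashes as separators. Here is a minimal Python sketch of that three-way split with a single regex; it handles far fewer cases than the module (no canonicalisation, no special-casing of root or empty results).

import re

# Volume is a drive letter ("C:") or a UNC share ("\\host\share");
# both forward and back slashes count as separators.
_VOLUME = r"(?P<volume>[A-Za-z]:|[\\/]{2}[^\\/]+[\\/][^\\/]+)?"
_SPLIT = re.compile(_VOLUME + r"(?P<directory>.*[\\/])?(?P<basename>[^\\/]*)$")

def split_win_path(path):
    m = _SPLIT.match(path)
    return (m.group("volume") or "", m.group("directory") or "", m.group("basename") or "")

print(split_win_path(r"C:\Users\me\notes.txt"))
print(split_win_path(r"\\server\share\docs\a.md"))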


@@ -0,0 +1,75 @@
# http://perl6advent.wordpress.com/2009/12/16/day-16-we-call-it-the-old-switcheroo/
use v6;
use Test;
sub weather($weather) {
given $weather {
when 'sunny' { return 'Aah! ☀' }
when 'cloudy' { return 'Meh. ☁' }
when 'rainy' { return 'Where is my umbrella? ☂' }
when 'snowy' { return 'Yippie! ☃' }
default { return 'Looks like any other day.' }
}
}
is weather(Any), 'Looks like any other day.', 'Weather given/when';
{
sub probability($probability) {
given $probability {
when 1.00 { return 'A certainty' }
when * > 0.75 { return 'Quite likely' }
when * > 0.50 { return 'Likely' }
when * > 0.25 { return 'Unlikely' }
when * > 0.00 { return 'Very unlikely' }
when 0.00 { return 'Fat chance' }
}
}
is probability(0.80), 'Quite likely', 'Probability given/when';
sub fib(Int $_) {
when * < 2 { 1 }
default { fib($_ - 1) + fib($_ - 2) }
}
is fib(5), 8, '6th fibonacci number';
}
class Card {
method bend() { return "Card bent" }
method fold() { return "Card folded" }
method mutilate() { return "Card mutilated" }
}
my Card $punch-card .= new;
my $actions;
given $punch-card {
$actions ~= .bend;
$actions ~= .fold;
$actions ~= .mutilate;
}
is $actions, 'Card bentCard foldedCard mutilated', 'Given as a sort of once-only for loop.';
my @list = 1, 2, 3, 4, 5;
my $castle = 'phantom';
my $full-of-vowels = 'aaaooouuuiiee';
is (.[0] + .[1] + .[2] given @list), 6, 'Statement ending given';
{
is ("My God, it's full of vowels!" when $full-of-vowels ~~ /^ <[aeiou]>+ $/), "My God, it's full of vowels!", 'Statement ending when';
is ('Boo!' when /phantom/ given $castle), 'Boo!', 'Nesting when inside given';
}
{
#Test DNA one liner at the end
my $result;
for ^20 {my ($a,$b)=<AT CG>.pick.comb.pick(*); my ($c,$d)=sort map({6+4*sin($_/2)},($_,$_+4)); $result ~= sprintf "%{$c}s%{$d-$c}s\n",$a,$b}
is $result.chars , 169 , 'We got a bunch of DNA';
is $result.split("\n").Int , 21 , 'On 20 line';
is $result.subst(/\s/ , '' , :g).chars , 40 , 'Containing 20 pairs';
}
eval_lives_ok 'for ^20 {my ($a,$b)=<AT CG>.pick.comb.pick(*); my ($c,$d)=sort map {6+4*sin($_/2)},$_,$_+4; sprintf "%{$c}s%{$d-$c}s\n",$a,$b}' , 'Can handle "map {...} ,$x,$y"';
done;


@@ -0,0 +1,48 @@
use v6;
use Test;
plan 9;
sub test_lines(@lines) {
#!rakudo todo 'line counts'
is @lines.elems, 3, 'Three lines read';
is @lines[0],
"Please do not remove this file, used by S16-io/basic-open.t",
'Retrieved first line';
is @lines[2],
"This is a test line.",
'Retrieved last line';
}
#?niecza skip 'TextReader.eof NYI'
{
my $fh = open('t/spec/S16-io/test-data');
my $count = 0;
while !$fh.eof {
my $x = $fh.get;
$count++ if $x.defined;
}
is $count, 3, 'Read three lines with while !$handle.eof';
}
# test that we can interate over $fh.lines
{
my $fh = open('t/spec/S16-io/test-data');
ok defined($fh), 'Could open test file';
my @lines;
for $fh.lines -> $x {
push @lines, $x;
}
test_lines(@lines);
}
# test that we can get all items in list context:
{
my $fh = open('t/spec/S16-io/test-data');
ok defined($fh), 'Could open test file (again)';
my @lines = $fh.lines;
test_lines(@lines);
}
# vim: ft=perl6

samples/Perl6/calendar.t (new file, 209 lines)

@@ -0,0 +1,209 @@
use v6;
use Test;
# calendar.t: tests some calendar-related methods common to
# Date and DateTime
plan 130;
sub date($year, $month, $day) {
Date.new(:$year, :$month, :$day)
}
sub dtim($year, $month, $day) {
DateTime.new(:$year, :$month, :$day,
:hour(17), :minute(33), :second(2.9))
}
# --------------------------------------------------------------------
# L<S32::Temporal/C<DateTime>/'truncated-to'>
# --------------------------------------------------------------------
is ~date(1969, 7, 20).truncated-to(month), '1969-07-01', 'Date.truncated-to(month)';
is ~dtim(1969, 7, 20).truncated-to(month), '1969-07-01T00:00:00Z', 'DateTime.truncated-to(month)';
is ~date(1969, 7, 20).truncated-to(year), '1969-01-01', 'Date.truncated-to(year)';
is ~dtim(1969, 7, 20).truncated-to(year), '1969-01-01T00:00:00Z', 'DateTime.truncated-to(year)';
is ~date(1999, 1, 18).truncated-to(week), '1999-01-18', 'Date.truncated-to(week) (no change in day)';
is ~date(1999, 1, 19).truncated-to(week), '1999-01-18', 'Date.truncated-to(week) (short jump)';
is ~date(1999, 1, 17).truncated-to(week), '1999-01-11', 'Date.truncated-to(week) (long jump)';
is ~dtim(1999, 1, 17).truncated-to(week), '1999-01-11T00:00:00Z', 'DateTime.truncated-to(week) (long jump)';
is ~date(1999, 4, 2).truncated-to(week), '1999-03-29', 'Date.truncated-to(week) (changing month)';
is ~date(1999, 1, 3).truncated-to(week), '1998-12-28', 'Date.truncated-to(week) (changing year)';
is ~dtim(1999, 1, 3).truncated-to(week), '1998-12-28T00:00:00Z', 'DateTime.truncated-to(week) (changing year)';
is ~date(2000, 3, 1).truncated-to(week), '2000-02-28', 'Date.truncated-to(week) (skipping over Feb 29)';
is ~dtim(2000, 3, 1).truncated-to(week), '2000-02-28T00:00:00Z', 'DateTime.truncated-to(week) (skipping over Feb 29)';
is ~date(1988, 3, 3).truncated-to(week), '1988-02-29', 'Date.truncated-to(week) (landing on Feb 29)';
is ~dtim(1988, 3, 3).truncated-to(week), '1988-02-29T00:00:00Z', 'DateTime.truncated-to(week) (landing on Feb 29)';
# Verify .gist
# Example taken from S32 specs documentation.
#?niecza skip 'Undeclared routine: hour'
{
my $dt = DateTime.new('2005-02-01T15:20:35Z');
my $truncated = $dt.truncated-to(hour);
is $truncated.gist, "2005-02-01T15:00:00Z", "validate .gist output";
}
# --------------------------------------------------------------------
# L<S32::Temporal/Accessors/'the synonym day-of-month'>
# --------------------------------------------------------------------
is date(2003, 3, 18).day-of-month, 18, 'Date.day can be spelled as Date.day-of-month';
is dtim(2003, 3, 18).day-of-month, 18, 'DateTime.day can be spelled as DateTime.day-of-month';
# --------------------------------------------------------------------
# L<S32::Temporal/Accessors/'day-of-week method'>
# --------------------------------------------------------------------
# much of this is blatantly stolen from the Date::Simple test suite
# and redistributed under the terms of the Artistic License 2.0 with
# permission of the original authors (John Tobey, Marty Pauly).
is date(1966, 10, 15).day-of-week, 6, 'Date.day-of-week (1966-10-15)';
is dtim(1966, 10, 15).day-of-week, 6, 'DateTime.day-of-week (1966-10-15)';
is date(2401, 3, 1).day-of-week, 4, 'Date.day-of-week (2401-03-01)';
is date(2401, 2, 28).day-of-week, 3, 'Date.day-of-week (2401-02-28)';
is date(2400, 3, 1).day-of-week, 3, 'Date.day-of-week (2400-03-01)';
is date(2400, 2, 29).day-of-week, 2, 'Date.day-of-week (2400-02-29)';
is date(2400, 2, 28).day-of-week, 1, 'Date.day-of-week (2400-02-28)';
is date(2101, 3, 1).day-of-week, 2, 'Date.day-of-week (2101-03-01)';
is date(2101, 2, 28).day-of-week, 1, 'Date.day-of-week (2101-02-28)';
is date(2100, 3, 1).day-of-week, 1, 'Date.day-of-week (2100-03-01)';
is dtim(2100, 3, 1).day-of-week, 1, 'DateTime.day-of-week (2100-03-01)';
is date(2100, 2, 28).day-of-week, 7, 'Date.day-of-week (2100-02-28)';
is dtim(2100, 2, 28).day-of-week, 7, 'DateTime.day-of-week (2100-02-28)';
is date(2001, 3, 1).day-of-week, 4, 'Date.day-of-week (2001-03-01)';
is date(2001, 2, 28).day-of-week, 3, 'Date.day-of-week (2001-02-28)';
is date(2000, 3, 1).day-of-week, 3, 'Date.day-of-week (2000-03-01)';
is date(2000, 2, 29).day-of-week, 2, 'Date.day-of-week (2000-02-29)';
is date(2000, 2, 28).day-of-week, 1, 'Date.day-of-week (2000-02-28)';
is date(1901, 3, 1).day-of-week, 5, 'Date.day-of-week (1901-03-01)';
is date(1901, 2, 28).day-of-week, 4, 'Date.day-of-week (1901-02-28)';
is date(1900, 3, 1).day-of-week, 4, 'Date.day-of-week (1900-03-01)';
is date(1900, 2, 28).day-of-week, 3, 'Date.day-of-week (1900-02-28)';
is date(1801, 3, 1).day-of-week, 7, 'Date.day-of-week (1801-03-01)';
is date(1801, 2, 28).day-of-week, 6, 'Date.day-of-week (1801-02-28)';
is date(1800, 3, 1).day-of-week, 6, 'Date.day-of-week (1800-03-01)';
is dtim(1800, 3, 1).day-of-week, 6, 'DateTime.day-of-week (1800-03-01)';
is date(1800, 2, 28).day-of-week, 5, 'Date.day-of-week (1800-02-28)';
is dtim(1800, 2, 28).day-of-week, 5, 'DateTime.day-of-week (1800-02-28)';
is date(1701, 3, 1).day-of-week, 2, 'Date.day-of-week (1701-03-01)';
is date(1701, 2, 28).day-of-week, 1, 'Date.day-of-week (1701-02-28)';
is date(1700, 3, 1).day-of-week, 1, 'Date.day-of-week (1700-03-01)';
is date(1700, 2, 28).day-of-week, 7, 'Date.day-of-week (1700-02-28)';
is date(1601, 3, 1).day-of-week, 4, 'Date.day-of-week (1601-03-01)';
is dtim(1601, 3, 1).day-of-week, 4, 'DateTime.day-of-week (1601-03-01)';
is date(1601, 2, 28).day-of-week, 3, 'Date.day-of-week (1601-02-28)';
is dtim(1601, 2, 28).day-of-week, 3, 'DateTime.day-of-week (1601-02-28)';
is date(1600, 3, 1).day-of-week, 3, 'Date.day-of-week (1600-03-01)';
is date(1600, 2, 29).day-of-week, 2, 'Date.day-of-week (1600-02-29)';
is date(1600, 2, 28).day-of-week, 1, 'Date.day-of-week (1600-02-28)';
# --------------------------------------------------------------------
# L<S32::Temporal/Accessors/'The method week'>
# --------------------------------------------------------------------
is date(1977, 8, 20).week.join(' '), '1977 33', 'Date.week (1977-8-20)';
is dtim(1977, 8, 20).week.join(' '), '1977 33', 'DateTime.week (1977-8-20)';
is date(1977, 8, 20).week-year, 1977, 'Date.week (1977-8-20)';
is dtim(1977, 8, 20).week-year, 1977, 'DateTime.week (1977-8-20)';
is date(1977, 8, 20).week-number, 33, 'Date.week-number (1977-8-20)';
is dtim(1977, 8, 20).week-number, 33, 'DateTime.week-number (1977-8-20)';
is date(1987, 12, 18).week.join(' '), '1987 51', 'Date.week (1987-12-18)';
is date(2020, 5, 4).week.join(' '), '2020 19', 'Date.week (2020-5-4)';
# From http://en.wikipedia.org/w/index.php?title=ISO_week_date&oldid=370553706#Examples
is date(2005, 01, 01).week.join(' '), '2004 53', 'Date.week (2005-01-01)';
is date(2005, 01, 02).week.join(' '), '2004 53', 'Date.week (2005-01-02)';
is date(2005, 12, 31).week.join(' '), '2005 52', 'Date.week (2005-12-31)';
is date(2007, 01, 01).week.join(' '), '2007 1', 'Date.week (2007-01-01)';
is date(2007, 12, 30).week.join(' '), '2007 52', 'Date.week (2007-12-30)';
is dtim(2007, 12, 30).week.join(' '), '2007 52', 'DateTime.week (2007-12-30)';
is date(2007, 12, 30).week-year, 2007, 'Date.week (2007-12-30)';
is dtim(2007, 12, 30).week-year, 2007, 'DateTime.week (2007-12-30)';
is date(2007, 12, 30).week-number, 52, 'Date.week-number (2007-12-30)';
is dtim(2007, 12, 30).week-number, 52, 'DateTime.week-number (2007-12-30)';
is date(2007, 12, 31).week.join(' '), '2008 1', 'Date.week (2007-12-31)';
is date(2008, 01, 01).week.join(' '), '2008 1', 'Date.week (2008-01-01)';
is date(2008, 12, 29).week.join(' '), '2009 1', 'Date.week (2008-12-29)';
is date(2008, 12, 31).week.join(' '), '2009 1', 'Date.week (2008-12-31)';
is date(2009, 01, 01).week.join(' '), '2009 1', 'Date.week (2009-01-01)';
is date(2009, 12, 31).week.join(' '), '2009 53', 'Date.week (2009-12-31)';
is date(2010, 01, 03).week.join(' '), '2009 53', 'Date.week (2010-01-03)';
is dtim(2010, 01, 03).week.join(' '), '2009 53', 'DateTime.week (2010-01-03)';
is date(2010, 01, 03).week-year, 2009, 'Date.week-year (2010-01-03)';
is dtim(2010, 01, 03).week-year, 2009, 'DateTime.week-year (2010-01-03)';
is date(2010, 01, 03).week-number, 53, 'Date.week-number (2010-01-03)';
is dtim(2010, 01, 03).week-number, 53, 'DateTime.week-number (2010-01-03)';
# day-of-week is tested each time show-dt is called.
# --------------------------------------------------------------------
# L<S32::Temporal/Accessors/'The weekday-of-month method'>
# --------------------------------------------------------------------
is date(1982, 2, 1).weekday-of-month, 1, 'Date.weekday-of-month (1982-02-01)';
is dtim(1982, 2, 1).weekday-of-month, 1, 'DateTime.weekday-of-month (1982-02-01)';
is date(1982, 2, 7).weekday-of-month, 1, 'Date.weekday-of-month (1982-02-07)';
is date(1982, 2, 8).weekday-of-month, 2, 'Date.weekday-of-month (1982-02-08)';
is date(1982, 2, 18).weekday-of-month, 3, 'Date.weekday-of-month (1982-02-18)';
is date(1982, 2, 28).weekday-of-month, 4, 'Date.weekday-of-month (1982-02-28)';
is dtim(1982, 2, 28).weekday-of-month, 4, 'DateTime.weekday-of-month (1982-02-28)';
is date(1982, 4, 4).weekday-of-month, 1, 'Date.weekday-of-month (1982-04-04)';
is date(1982, 4, 7).weekday-of-month, 1, 'Date.weekday-of-month (1982-04-07)';
is date(1982, 4, 8).weekday-of-month, 2, 'Date.weekday-of-month (1982-04-08)';
is date(1982, 4, 13).weekday-of-month, 2, 'Date.weekday-of-month (1982-04-13)';
is date(1982, 4, 30).weekday-of-month, 5, 'Date.weekday-of-month (1982-04-30)';
is dtim(1982, 4, 30).weekday-of-month, 5, 'DateTime.weekday-of-month (1982-04-30)';
# --------------------------------------------------------------------
# L<S32::Temporal/Accessors/'The days-in-month method'>
# --------------------------------------------------------------------
is date(1999, 5, 5).days-in-month, 31, 'Date.days-in-month (May 1999)';
is date(1999, 6, 5).days-in-month, 30, 'Date.days-in-month (Jun 1999)';
is date(1999, 2, 5).days-in-month, 28, 'Date.days-in-month (Feb 1999)';
is dtim(1999, 2, 5).days-in-month, 28, 'DateTime.days-in-month (Feb 1999)';
is date(2000, 2, 5).days-in-month, 29, 'Date.days-in-month (Feb 2000)';
is dtim(2000, 2, 5).days-in-month, 29, 'DateTime.days-in-month (Feb 2000)';
# --------------------------------------------------------------------
# L<S32::Temporal/Accessors/'The day-of-year method'>
# --------------------------------------------------------------------
is date(1975, 1, 1).day-of-year, 1, 'Date.day-of-year (1975-01-01)';
is dtim(1975, 1, 1).day-of-year, 1, 'DateTime.day-of-year (1975-01-01)';
is date(1977, 5, 5).day-of-year, 125, 'Date.day-of-year (1977-05-05)';
is date(1983, 11, 27).day-of-year, 331, 'Date.day-of-year (1983-11-27)';
is date(1999, 2, 28).day-of-year, 59, 'Date.day-of-year (1999-02-28)';
is dtim(1999, 2, 28).day-of-year, 59, 'DateTime.day-of-year (1999-02-28)';
is date(1999, 3, 1).day-of-year, 60, 'Date.day-of-year (1999-03-01)';
is dtim(1999, 3, 1).day-of-year, 60, 'DateTime.day-of-year (1999-03-01)';
is date(1999, 12, 31).day-of-year, 365, 'Date.day-of-year (1999-12-31)';
is date(2000, 2, 28).day-of-year, 59, 'Date.day-of-year (2000-02-28)';
is dtim(2000, 2, 28).day-of-year, 59, 'DateTime.day-of-year (2000-02-28)';
is date(2000, 2, 29).day-of-year, 60, 'Date.day-of-year (2000-02-29)';
is dtim(2000, 2, 29).day-of-year, 60, 'DateTime.day-of-year (2000-02-29)';
is date(2000, 3, 1).day-of-year, 61, 'Date.day-of-year (2000-03-01)';
is date(2000, 12, 31).day-of-year, 366, 'Date.day-of-year (2000-12-31)';
# --------------------------------------------------------------------
# L<S32::Temporal/Accessors/'The method is-leap-year'>
# --------------------------------------------------------------------
nok date(1800, 1, 1).is-leap-year, 'Date.is-leap-year (1800)';
nok date(1801, 1, 1).is-leap-year, 'Date.is-leap-year (1801)';
ok date(1804, 1, 1).is-leap-year, 'Date.is-leap-year (1804)';
nok date(1900, 1, 1).is-leap-year, 'Date.is-leap-year (1900)';
nok dtim(1900, 1, 1).is-leap-year, 'DateTime.is-leap-year (1900)';
ok date(1996, 1, 1).is-leap-year, 'Date.is-leap-year (1996)';
nok date(1999, 1, 1).is-leap-year, 'Date.is-leap-year (1999)';
ok date(2000, 1, 1).is-leap-year, 'Date.is-leap-year (2000)';
ok dtim(2000, 1, 1).is-leap-year, 'DateTime.is-leap-year (2000)';
done;
# vim: ft=perl6

586
samples/Perl6/for.t Normal file
View File

@@ -0,0 +1,586 @@
use v6;
#?pugs emit #
use MONKEY_TYPING;
use Test;
=begin description
Tests the "for" statement
This attempts to test as many variations of the
for statement as possible
=end description
plan 77;
## No foreach
# L<S04/The C<for> statement/"no foreach statement any more">
{
my $times_run = 0;
eval_dies_ok 'foreach 1..10 { $times_run++ }; 1', "foreach is gone";
eval_dies_ok 'foreach (1..10) { $times_run++}; 1',
"foreach is gone, even with parens";
is $times_run, 0, "foreach doesn't work";
}
## for with plain old range operator w/out parens
{
my $a = "";
for 0 .. 5 { $a = $a ~ $_; };
is($a, '012345', 'for 0..5 {} works');
}
# ... with pointy blocks
{
my $b = "";
for 0 .. 5 -> $_ { $b = $b ~ $_; };
is($b, '012345', 'for 0 .. 5 -> {} works');
}
#?pugs todo 'slice context'
#?niecza skip 'slice context'
{
my $str;
my @a = 1..3;
my @b = 4..6;
for zip(@a; @b) -> $x, $y {
$str ~= "($x $y)";
}
is $str, "(1 4)(2 5)(3 6)", 'for zip(@a; @b) -> $x, $y works';
}
# ... with referential sub
{
my $d = "";
for -2 .. 2 { $d ~= .sign };
is($d, '-1-1011', 'for 0 .. 5 { .some_sub } works');
}
## and now with parens around the range operator
{
my $e = "";
for (0 .. 5) { $e = $e ~ $_; };
is($e, '012345', 'for () {} works');
}
# ... with pointy blocks
{
my $f = "";
for (0 .. 5) -> $_ { $f = $f ~ $_; };
is($f, '012345', 'for () -> {} works');
}
# ... with implicit topic
{
$_ = "GLOBAL VALUE";
for "INNER VALUE" {
is( .lc, "inner value", "Implicit default topic is seen by lc()");
};
is($_,"GLOBAL VALUE","After the loop the implicit topic gets restored");
}
{
# as statement modifier
$_ = "GLOBAL VALUE";
is( .lc, "inner value", "Implicit default topic is seen by lc()" )
for "INNER VALUE";
#?pugs todo
is($_,"GLOBAL VALUE","After the loop the implicit topic gets restored");
}
## and now for with 'topical' variables
# ... w/out parens
my $i = "";
for 0 .. 5 -> $topic { $i = $i ~ $topic; };
is($i, '012345', 'for 0 .. 5 -> $topic {} works');
# ... with parens
my $j = "";
for (0 .. 5) -> $topic { $j = $j ~ $topic; };
is($j, '012345', 'for () -> $topic {} works');
## for with @array operator w/out parens
my @array_k = (0 .. 5);
my $k = "";
for @array_k { $k = $k ~ $_; };
is($k, '012345', 'for @array {} works');
# ... with pointy blocks
my @array_l = (0 .. 5);
my $l = "";
for @array_l -> $_ { $l = $l ~ $_; };
is($l, '012345', 'for @array -> {} works');
## and now with parens around the @array
my @array_o = (0 .. 5);
my $o = "";
for (@array_o) { $o = $o ~ $_; };
is($o, '012345', 'for (@array) {} works');
# ... with pointy blocks
{
my @array_p = (0 .. 5);
my $p = "";
for (@array_p) -> $_ { $p = $p ~ $_; };
is($p, '012345', 'for (@array) -> {} works');
}
my @elems = <a b c d e>;
{
my @a;
for (@elems) {
push @a, $_;
}
my @e = <a b c d e>;
is(@a, @e, 'for (@a) { ... $_ ... } iterates all elems');
}
{
my @a;
for (@elems) -> $_ { push @a, $_ };
my @e = @elems;
is(@a, @e, 'for (@a)->$_ { ... $_ ... } iterates all elems' );
}
{
my @a;
for (@elems) { push @a, $_, $_; }
my @e = <a a b b c c d d e e>;
is(@a, @e, 'for (@a) { ... $_ ... $_ ... } iterates all elems, not just odd');
}
# "for @a -> $var" is ro by default.
#?pugs skip 'parsefail'
{
my @a = <1 2 3 4>;
eval_dies_ok('for @a -> $elem {$elem = 5}', '-> $var is ro by default');
for @a <-> $elem {$elem++;}
is(@a, <2 3 4 5>, '<-> $var is rw');
for @a <-> $first, $second {$first++; $second++}
is(@a, <3 4 5 6>, '<-> $var, $var2 works');
}
# for with "is rw"
{
my @array_s = (0..2);
my @s = (1..3);
for @array_s { $_++ };
is(@array_s, @s, 'for @array { $_++ }');
}
{
my @array = <a b c d>;
for @array { $_ ~= "c" }
is ~@array, "ac bc cc dc",
'mutating $_ in for works';
}
{
my @array_t = (0..2);
my @t = (1..3);
for @array_t -> $val is rw { $val++ };
is(@array_t, @t, 'for @array -> $val is rw { $val++ }');
}
#?pugs skip "Can't modify const item"
{
my @array_v = (0..2);
my @v = (1..3);
for @array_v.values -> $val is rw { $val++ };
is(@array_v, @v, 'for @array.values -> $val is rw { $val++ }');
}
#?pugs skip "Can't modify const item"
{
my @array_kv = (0..2);
my @kv = (1..3);
for @array_kv.kv -> $key, $val is rw { $val++ };
is(@array_kv, @kv, 'for @array.kv -> $key, $val is rw { $val++ }');
}
#?pugs skip "Can't modify const item"
{
my %hash_v = ( a => 1, b => 2, c => 3 );
my %v = ( a => 2, b => 3, c => 4 );
for %hash_v.values -> $val is rw { $val++ };
is(%hash_v, %v, 'for %hash.values -> $val is rw { $val++ }');
}
#?pugs todo
{
my %hash_kv = ( a => 1, b => 2, c => 3 );
my %kv = ( a => 2, b => 3, c => 4 );
try { for %hash_kv.kv -> $key, $val is rw { $val++ }; };
is( %hash_kv, %kv, 'for %hash.kv -> $key, $val is rw { $val++ }');
}
# .key //= ++$i for @array1;
class TestClass{ has $.key is rw };
{
my @array1 = (TestClass.new(:key<1>),TestClass.new());
my $i = 0;
for @array1 { .key //= ++$i }
my $sum1 = [+] @array1.map: { $_.key };
is( $sum1, 2, '.key //= ++$i for @array1;' );
}
# .key = 1 for @array1;
{
my @array1 = (TestClass.new(),TestClass.new(:key<2>));
.key = 1 for @array1;
my $sum1 = [+] @array1.map: { $_.key };
is($sum1, 2, '.key = 1 for @array1;');
}
# $_.key = 1 for @array1;
{
my @array1 = (TestClass.new(),TestClass.new(:key<2>));
$_.key = 1 for @array1;
my $sum1 = [+] @array1.map: { $_.key };
is( $sum1, 2, '$_.key = 1 for @array1;');
}
# rw scalars
#L<S04/The C<for> statement/implicit parameter to block read/write "by default">
{
my ($a, $b, $c) = 0..2;
try { for ($a, $b, $c) { $_++ } };
is( [$a,$b,$c], [1,2,3], 'for ($a,$b,$c) { $_++ }');
($a, $b, $c) = 0..2;
try { for ($a, $b, $c) -> $x is rw { $x++ } };
is( [$a,$b,$c], [1,2,3], 'for ($a,$b,$c) -> $x is rw { $x++ }');
}
# list context
{
my $a = '';
my $b = '';
for 1..3, 4..6 { $a ~= $_.WHAT.gist ; $b ~= Int.gist };
is($a, $b, 'List context');
$a = '';
for [1..3, 4..6] { $a ~= $_.WHAT.gist };
is($a, Array.gist, 'List context');
$a = '';
$b = '';
for [1..3], [4..6] { $a ~= $_.WHAT.gist ; $b ~= Array.gist };
is($a, $b, 'List context');
}
{
# this was a rakudo bug with mixed 'for' and recursion, which seems to
# confuse some lexical pads or the like, see RT #58392
my $gather = '';
sub f($l) {
if $l <= 0 {
return $l;
}
$gather ~= $l;
for 1..3 {
f($l-1);
$gather ~= '.';
}
}
f(2);
is $gather, '21....1....1....', 'Can mix recursion and for';
}
# another variation
{
my $t = '';
my $c;
sub r($x) {
my $h = $c++;
r $x-1 if $x;
for 1 { $t ~= $h };
};
r 3;
is $t, '3210', 'can mix recursion and for (RT 103332)';
}
# grep and sort in for - these were pugs bugs once, so let's
# keep them as regression tests
{
my @array = <1 2 3 4>;
my $output = '';
for (grep { 1 }, @array) -> $elem {
$output ~= "$elem,";
}
is $output, "1,2,3,4,", "grep works in for";
}
{
my @array = <1 2 3 4>;
my $output = '';
for @array.sort -> $elem {
$output ~= "$elem,";
}
is $output, "1,2,3,4,", "sort works in for";
}
{
my @array = <1 2 3 4>;
my $output = '';
for (grep { 1 }, @array.sort) -> $elem {
$output ~= "$elem,";
}
is $output, "1,2,3,4,", "grep and sort work in for";
}
# L<S04/Statement parsing/keywords require whitespace>
eval_dies_ok('for(0..5) { }','keyword needs at least one whitespace after it');
# looping with more than one loop variables
{
my @a = <1 2 3 4>;
my $str = '';
for @a -> $x, $y {
$str ~= $x+$y;
}
is $str, "37", "for loop with two variables";
}
{
#my $str = '';
eval_dies_ok('for 1..5 -> $x, $y { $str ~= "$x$y" }', 'Should throw exception, no value for parameter $y');
#is $str, "1234", "loop ran before throwing exception";
#diag ">$str<";
}
#?rakudo skip 'optional variable in for loop (RT #63994)'
#?niecza 2 todo 'NYI'
{
my $str = '';
for 1..5 -> $x, $y? {
$str ~= " " ~ $x*$y;
}
is $str, " 2 12 0";
}
{
my $str = '';
for 1..5 -> $x, $y = 7 {
$str ~= " " ~ $x*$y;
}
is $str, " 2 12 35", 'default values in for-loops';
}
#?pugs todo
{
my @a = <1 2 3>;
my @b = <4 5 6>;
my $res = '';
for @a Z @b -> $x, $y {
$res ~= " " ~ $x * $y;
}
is $res, " 4 10 18", "Z -ed for loop";
}
#?pugs todo
{
my @a = <1 2 3>;
my $str = '';
for @a Z @a Z @a Z @a Z @a -> $q, $w, $e, $r, $t {
$str ~= " " ~ $q*$w*$e*$r*$t;
}
is $str, " 1 {2**5} {3**5}", "Z-ed for loop with 5 arrays";
}
{
eval_dies_ok 'for 1.. { };', "Please use ..* for indefinite range";
eval_dies_ok 'for 1... { };', "1... does not exist";
}
{
my $c;
for 1..8 {
$c = $_;
last if $_ == 6;
}
is $c, 6, 'for loop ends in time using last';
}
{
my $c;
for 1..* {
$c = $_;
last if $_ == 6;
}
is $c, 6, 'infinite for loop ends in time using last';
}
{
my $c;
for 1..Inf {
$c = $_;
last if $_ == 6;
}
is $c, 6, 'infinite for loop ends in time using last';
}
# RT #62478
#?pugs todo
{
try { EVAL('for (my $ii = 1; $ii <= 3; $ii++) { say $ii; }') };
ok "$!" ~~ /C\-style/, 'mentions C-style';
ok "$!" ~~ /for/, 'mentions for';
ok "$!" ~~ /loop/, 'mentions loop';
}
# RT #65212
#?pugs todo
{
my $parsed = 0;
try { EVAL '$parsed = 1; for (1..3)->$n { last }' };
ok ! $parsed, 'for (1..3)->$n fails to parse';
}
# RT #71268
{
sub rt71268 { for ^1 {} }
#?pugs todo
lives_ok { ~(rt71268) }, 'can stringify "for ^1 {}" without death';
#?pugs skip 'Cannot cast from VList to VCode'
ok rt71268() ~~ (), 'result of "for ^1 {}" is ()';
}
# RT 62478
{
eval_dies_ok 'for (my $i; $i <=3; $i++) { $i; }', 'Unsupported use of C-style "for (;;)" loop; in Perl 6 please use "loop (;;)"';
}
#?pugs todo
{
try { EVAL 'for (my $x; $x <=3; $x++) { $i; }'; diag($!) };
ok $! ~~ / 'C-style' /, 'Sensible error message';
}
# RT #64886
#?rakudo skip 'maybe bogus, for loops are not supposed to be lazy?'
{
my $a = 0;
for 1..10000000000 {
$a++;
last;
}
is $a, 1, 'for on Range with huge max value is lazy and enters block';
}
# RT #60780
lives_ok {
for 1 .. 5 -> $x, $y? { }
}, 'Iteration variables do not need to add up if one is optional';
# RT #78232
{
my $a = 0;
for 1, 2, 3 { sub foo {}; $a++ }
is $a, 3, 'RT #78232';
}
# http://irclog.perlgeek.de/perl6/2011-12-29#i_4892285
# (Niecza bug)
{
my $x = 0;
for 1 .. 2 -> $a, $b { $x = $b } #OK not used
is $x, 2, 'Lazy lists interact properly with multi-element for loops';
}
# RT #71270
# list comprehension
#?pugs skip 'Cannot cast from VList to VCode'
{
sub f() { for ^1 { } };
is ~f(), '', 'empty for-loop returns empty list';
}
# RT #74060
# more list comprehension
#?pugs skip 'parsefail'
#?niecza todo "https://github.com/sorear/niecza/issues/180"
{
my @s = ($_ * 2 if $_ ** 2 > 3 for 0 .. 5);
is ~@s, '4 6 8 10', 'Can use statement-modifying "for" in list comprehension';
}
# RT 113026
#?rakudo todo 'RT 113026 array iterator does not track a growing array'
#?niecza todo 'array iterator does not track a growing array'
#?pugs todo
{
my @rt113026 = 1 .. 10;
my $iter = 0;
for @rt113026 -> $n {
$iter++;
if $iter % 2 {
@rt113026.push: $n;
}
}
is $iter, 20, 'iterating over an expanding list';
is @rt113026, <1 2 3 4 5 6 7 8 9 10 1 3 5 7 9 1 5 9 5 5>,
'array expanded in for loop is expanded';
}
# RT #78406
{
my $c = 0;
dies_ok { for ^8 { .=fmt('%03b'); $c++ } }, '$_ is read-only here';
is $c, 0, '... and $_ is *always* read-only here';
}
dies_ok
{
my class Foo {
has @.items;
method check_items { for @.items -> $item { die "bad" if $item == 2 } }
method foo { self.check_items; .say for @.items }
}
Foo.new(items => (1, 2, 3, 4)).foo
}, 'for in called method runs (was a sink context bug)';
# RT #77460
#?pugs todo
{
my @a = 1;
for 1..10 {
my $last = @a[*-1];
push @a, (sub ($s) { $s + 1 })($last)
};
is @a, [1, 2, 3, 4, 5, 6, 7, 8,9, 10, 11];
}
# vim: ft=perl6

76
samples/Perl6/hash.t Normal file
View File

@@ -0,0 +1,76 @@
use v6;
use Test;
plan(5);
unless EVAL 'EVAL("1", :lang<perl5>)' {
skip_rest;
exit;
}
die unless
EVAL(q/
package My::Hash;
use strict;
sub new {
my ($class, $ref) = @_;
bless \$ref, $class;
}
sub hash {
my $self = shift;
return $$self;
}
sub my_keys {
my $self = shift;
return keys %{$$self};
}
sub my_exists {
my ($self, $idx) = @_;
return exists $$self->{$idx};
}
sub fetch {
my ($self, $idx) = @_;
return $$self->{$idx};
}
sub store {
my ($self, $idx, $val) = @_;
$$self->{$idx} = $val;
}
sub push {
my ($self, $val) = @_;
}
1;
/, :lang<perl5>);
my $p5ha = EVAL('sub { My::Hash->new($_[0]) }', :lang<perl5>);
my %hash = (5 => 'a', 6 => 'b', 7 => 'c', 8 => 'd');
my $p5hash = $p5ha(\%hash);
my $rethash = $p5hash.hash;
my @keys = %hash.keys.sort;
my @p5keys;
try {
@p5keys = $p5hash.my_keys; # this doesn't even pass lives_ok ??
@p5keys .= sort;
};
is("{ @keys }", "{ @p5keys }");
ok($p5hash.store(9, 'e'), 'can store');
is(%hash{9}, 'e', 'store result');
is($p5hash.fetch(5), 'a', 'fetch result');
is($p5hash.my_exists(5), %hash<5>:exists, 'exists');
#?pugs todo 'bug'
is($p5hash.my_exists(12), %hash<12>:exists, 'nonexists fail');
# vim: ft=perl6

630
samples/Perl6/htmlify.pl Executable file
View File

@@ -0,0 +1,630 @@
#!/usr/bin/env perl6
use v6;
# This script isn't in bin/ because it's not meant to be installed.
BEGIN say 'Initializing ...';
use Pod::To::HTML;
use URI::Escape;
use lib 'lib';
use Perl6::TypeGraph;
use Perl6::TypeGraph::Viz;
use Perl6::Documentable::Registry;
my $*DEBUG = False;
my $tg;
my %methods-by-type;
my $footer = footer-html;
my $head = q[
<link rel="icon" href="/favicon.ico" type="favicon.ico" />
<link rel="stylesheet" type="text/css" href="/style.css" media="screen" title="default" />
];
sub url-munge($_) {
return $_ if m{^ <[a..z]>+ '://'};
return "/type/$_" if m/^<[A..Z]>/;
return "/routine/$_" if m/^<[a..z]>/;
# poor man's <identifier>
if m/ ^ '&'( \w <[[\w'-]>* ) $/ {
return "/routine/$0";
}
return $_;
}
sub p2h($pod) {
pod2html($pod, :url(&url-munge), :$footer, :$head);
}
sub pod-gist(Pod::Block $pod, $level = 0) {
my $leading = ' ' x $level;
my %confs;
my @chunks;
for <config name level caption type> {
my $thing = $pod.?"$_"();
if $thing {
%confs{$_} = $thing ~~ Iterable ?? $thing.perl !! $thing.Str;
}
}
@chunks = $leading, $pod.^name, (%confs.perl if %confs), "\n";
for $pod.content.list -> $c {
if $c ~~ Pod::Block {
@chunks.push: pod-gist($c, $level + 2);
}
elsif $c ~~ Str {
@chunks.push: $c.indent($level + 2), "\n";
} elsif $c ~~ Positional {
@chunks.push: $c.map: {
if $_ ~~ Pod::Block {
*.&pod-gist
} elsif $_ ~~ Str {
$_
}
}
}
}
@chunks.join;
}
sub recursive-dir($dir) {
my @todo = $dir;
gather while @todo {
my $d = @todo.shift;
for dir($d) -> $f {
if $f.f {
take $f;
}
else {
@todo.push($f.path);
}
}
}
}
sub first-code-block(@pod) {
if @pod[1] ~~ Pod::Block::Code {
return @pod[1].content.grep(Str).join;
}
'';
}
sub MAIN(Bool :$debug, Bool :$typegraph = False) {
$*DEBUG = $debug;
say 'Creating html/ subdirectories ...';
for '', <type language routine images op op/prefix op/postfix op/infix
op/circumfix op/postcircumfix op/listop> {
mkdir "html/$_" unless "html/$_".IO ~~ :e;
}
say 'Reading lib/ ...';
my @source = recursive-dir('lib').grep(*.f).grep(rx{\.pod$});
@source .= map: {; .path.subst('lib/', '').subst(rx{\.pod$}, '').subst(:g, '/', '::') => $_ };
say 'Reading type graph ...';
$tg = Perl6::TypeGraph.new-from-file('type-graph.txt');
{
my %h = $tg.sorted.kv.flat.reverse;
@source .= sort: { %h{.key} // -1 };
}
my $dr = Perl6::Documentable::Registry.new;
say 'Processing Pod files ...';
for (0..* Z @source) -> $num, $_ {
my $podname = .key;
my $file = .value;
my $what = $podname ~~ /^<[A..Z]> | '::'/ ?? 'type' !! 'language';
printf "% 4d/%d: % -40s => %s\n", $num, +@source, $file.path, "$what/$podname";
my $pod = eval slurp($file.path) ~ "\n\$=pod";
$pod .= [0];
if $what eq 'language' {
write-language-file(:$dr, :$what, :$pod, :$podname);
}
else {
say pod-gist($pod[0]) if $*DEBUG;
write-type-file(:$dr, :$what, :pod($pod[0]), :$podname);
}
}
say 'Composing doc registry ...';
$dr.compose;
write-disambiguation-files($dr);
write-op-disambiguation-files($dr);
write-operator-files($dr);
write-type-graph-images(:force($typegraph));
write-search-file($dr);
write-index-file($dr);
say 'Writing per-routine files ...';
my %routine-seen;
for $dr.lookup('routine', :by<kind>).list -> $d {
next if %routine-seen{$d.name}++;
write-routine-file($dr, $d.name);
print '.'
}
say '';
say 'Processing complete.';
}
sub write-language-file(:$dr, :$what, :$pod, :$podname) {
spurt "html/$what/$podname.html", p2h($pod);
if $podname eq 'operators' {
my @chunks = chunks-grep($pod.content,
:from({ $_ ~~ Pod::Heading and .level == 2}),
:to({ $^b ~~ Pod::Heading and $^b.level <= $^a.level}),
);
for @chunks -> $chunk {
my $heading = $chunk[0].content[0].content[0];
next unless $heading ~~ / ^ [in | pre | post | circum | postcircum ] fix | listop /;
my $what = ~$/;
my $operator = $heading.split(' ', 2)[1];
$dr.add-new(
:kind<operator>,
:subkind($what),
:name($operator),
:pod($chunk),
:!pod-is-complete,
);
}
}
$dr.add-new(
:kind<language>,
:name($podname),
:$pod,
:pod-is-complete,
);
}
sub write-type-file(:$dr, :$what, :$pod, :$podname) {
my @chunks = chunks-grep($pod.content,
:from({ $_ ~~ Pod::Heading and .level == 2}),
:to({ $^b ~~ Pod::Heading and $^b.level <= $^a.level}),
);
if $tg.types{$podname} -> $t {
$pod.content.push: Pod::Block::Named.new(
name => 'Image',
content => [ "/images/type-graph-$podname.png"],
);
$pod.content.push: pod-link(
'Full-size type graph image as SVG',
"/images/type-graph-$podname.svg",
);
my @mro = $t.mro;
@mro.shift; # current type is already taken care of
for $t.roles -> $r {
next unless %methods-by-type{$r};
$pod.content.push:
pod-heading("Methods supplied by role $r"),
pod-block(
"$podname does role ",
pod-link($r.name, "/type/$r"),
", which provides the following methods:",
),
%methods-by-type{$r}.list,
;
}
for @mro -> $c {
next unless %methods-by-type{$c};
$pod.content.push:
pod-heading("Methods supplied by class $c"),
pod-block(
"$podname inherits from class ",
pod-link($c.name, "/type/$c"),
", which provides the following methods:",
),
%methods-by-type{$c}.list,
;
for $c.roles -> $r {
next unless %methods-by-type{$r};
$pod.content.push:
pod-heading("Methods supplied by role $r"),
pod-block(
"$podname inherits from class ",
pod-link($c.name, "/type/$c"),
", which does role ",
pod-link($r.name, "/type/$r"),
", which provides the following methods:",
),
%methods-by-type{$r}.list,
;
}
}
}
my $d = $dr.add-new(
:kind<type>,
# TODO: subkind
:$pod,
:pod-is-complete,
:name($podname),
);
for @chunks -> $chunk {
my $name = $chunk[0].content[0].content[0];
say "$podname.$name" if $*DEBUG;
next if $name ~~ /\s/;
%methods-by-type{$podname}.push: $chunk;
# determine whether it's a sub or method
my Str $subkind;
{
my %counter;
for first-code-block($chunk).lines {
if ms/^ 'multi'? (sub|method)»/ {
%counter{$0}++;
}
}
if %counter == 1 {
($subkind,) = %counter.keys;
}
if %counter<method> {
write-qualified-method-call(
:$name,
:pod($chunk),
:type($podname),
);
}
}
$dr.add-new(
:kind<routine>,
:$subkind,
:$name,
:pod($chunk),
:!pod-is-complete,
:origin($d),
);
}
spurt "html/$what/$podname.html", p2h($pod);
}
sub chunks-grep(:$from!, :&to!, *@elems) {
my @current;
gather {
for @elems -> $c {
if @current && ($c ~~ $from || to(@current[0], $c)) {
take [@current];
@current = ();
@current.push: $c if $c ~~ $from;
}
elsif @current or $c ~~ $from {
@current.push: $c;
}
}
take [@current] if @current;
}
}
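# Illustrative sketch, not part of the original script: how chunks-grep groups its
# input. A chunk starts at an element matching :from and ends just before the next
# element that matches :from or satisfies :to(first-of-chunk, current); elements seen
# before the first :from match are discarded. With made-up numeric predicates:
#
#   my @chunks = chunks-grep(
#       :from({ $_ %% 10 }),
#       :to(-> $start, $c { $c %% 10 }),
#       1, 10, 11, 12, 20, 21,
#   );
#   # @chunks is ([10, 11, 12], [20, 21]); the leading 1 is dropped.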
sub pod-with-title($title, *@blocks) {
Pod::Block::Named.new(
name => "pod",
content => [
Pod::Block::Named.new(
name => "TITLE",
content => Array.new(
Pod::Block::Para.new(
content => [$title],
)
)
),
@blocks.flat,
]
);
}
sub pod-block(*@content) {
Pod::Block::Para.new(:@content);
}
sub pod-link($text, $url) {
Pod::FormattingCode.new(
type => 'L',
content => [
join('|', $text, $url),
],
);
}
sub pod-item(*@content, :$level = 1) {
Pod::Item.new(
:$level,
:@content,
);
}
sub pod-heading($name, :$level = 1) {
Pod::Heading.new(
:$level,
:content[pod-block($name)],
);
}
sub write-type-graph-images(:$force) {
unless $force {
my $dest = 'html/images/type-graph-Any.svg'.path;
if $dest.e && $dest.modified >= 'type-graph.txt'.path.modified {
say "Not writing type graph images, it seems to be up-to-date";
say "To force writing of type graph images, supply the --typegraph";
say "option at the command line, or delete";
say "file 'html/images/type-graph-Any.svg'";
return;
}
}
say 'Writing type graph images to html/images/ ...';
for $tg.sorted -> $type {
my $viz = Perl6::TypeGraph::Viz.new-for-type($type);
$viz.to-file("html/images/type-graph-{$type}.svg", format => 'svg');
$viz.to-file("html/images/type-graph-{$type}.png", format => 'png', size => '8,3');
print '.'
}
say '';
say 'Writing specialized visualizations to html/images/ ...';
my %by-group = $tg.sorted.classify(&viz-group);
%by-group<Exception>.push: $tg.types< Exception Any Mu >;
%by-group<Metamodel>.push: $tg.types< Any Mu >;
for %by-group.kv -> $group, @types {
my $viz = Perl6::TypeGraph::Viz.new(:types(@types),
:dot-hints(viz-hints($group)),
:rank-dir('LR'));
$viz.to-file("html/images/type-graph-{$group}.svg", format => 'svg');
$viz.to-file("html/images/type-graph-{$group}.png", format => 'png', size => '8,3');
}
}
sub viz-group ($type) {
return 'Metamodel' if $type.name ~~ /^ 'Perl6::Metamodel' /;
return 'Exception' if $type.name ~~ /^ 'X::' /;
return 'Any';
}
sub viz-hints ($group) {
return '' unless $group eq 'Any';
return '
subgraph "cluster: Mu children" {
rank=same;
style=invis;
"Any";
"Junction";
}
subgraph "cluster: Pod:: top level" {
rank=same;
style=invis;
"Pod::Config";
"Pod::Block";
}
subgraph "cluster: Date/time handling" {
rank=same;
style=invis;
"Date";
"DateTime";
"DateTime-local-timezone";
}
subgraph "cluster: Collection roles" {
rank=same;
style=invis;
"Positional";
"Associative";
"Baggy";
}
';
}
sub write-search-file($dr) {
say 'Writing html/search.html ...';
my $template = slurp("search_template.html");
my @items;
my sub fix-url ($raw) { $raw.substr(1) ~ '.html' };
@items.push: $dr.lookup('language', :by<kind>).sort(*.name).map({
"\{ label: \"Language: {.name}\", value: \"{.name}\", url: \"{ fix-url(.url) }\" \}"
});
@items.push: $dr.lookup('type', :by<kind>).sort(*.name).map({
"\{ label: \"Type: {.name}\", value: \"{.name}\", url: \"{ fix-url(.url) }\" \}"
});
my %seen;
@items.push: $dr.lookup('routine', :by<kind>).grep({!%seen{.name}++}).sort(*.name).map({
"\{ label: \"{ (.subkind // 'Routine').tclc }: {.name}\", value: \"{.name}\", url: \"{ fix-url(.url) }\" \}"
});
sub escape(Str $s) {
$s.trans([</ \\ ">] => [<\\/ \\\\ \\">]);
}
@items.push: $dr.lookup('operator', :by<kind>).map({
qq[\{ label: "$_.human-kind() {escape .name}", value: "{escape .name}", url: "{ fix-url .url }"\}]
});
my $items = @items.join(",\n");
spurt("html/search.html", $template.subst("ITEMS", $items));
}
my %operator_disambiguation_file_written;
sub write-disambiguation-files($dr) {
say 'Writing disambiguation files ...';
for $dr.grouped-by('name').kv -> $name, $p is copy {
print '.';
my $pod = pod-with-title("Disambiguation for '$name'");
if $p.elems == 1 {
$p.=[0] if $p ~~ Array;
if $p.origin -> $o {
$pod.content.push:
pod-block(
pod-link("'$name' is a $p.human-kind()", $p.url),
' from ',
pod-link($o.human-kind() ~ ' ' ~ $o.name, $o.url),
);
}
else {
$pod.content.push:
pod-block(
pod-link("'$name' is a $p.human-kind()", $p.url)
);
}
}
else {
$pod.content.push:
pod-block("'$name' can be anything of the following"),
$p.map({
if .origin -> $o {
pod-item(
pod-link(.human-kind, .url),
' from ',
pod-link($o.human-kind() ~ ' ' ~ $o.name, $o.url),
)
}
else {
pod-item( pod-link(.human-kind, .url) )
}
});
}
my $html = p2h($pod);
spurt "html/$name.html", $html;
if all($p>>.kind) eq 'operator' {
spurt "html/op/$name.html", $html;
%operator_disambiguation_file_written{$p[0].name} = 1;
}
}
say '';
}
sub write-op-disambiguation-files($dr) {
say 'Writing operator disambiguation files ...';
for $dr.lookup('operator', :by<kind>).classify(*.name).kv -> $name, @ops {
next unless %operator_disambiguation_file_written{$name};
my $pod = pod-with-title("Disambiguation for '$name'");
if @ops == 1 {
my $p = @ops[0];
if $p.origin -> $o {
$pod.content.push:
pod-block(
pod-link("'$name' is a $p.human-kind()", $p.url),
' from ',
pod-link($o.human-kind() ~ ' ' ~ $o.name, $o.url),
);
}
else {
$pod.content.push:
pod-block(
pod-link("'$name' is a $p.human-kind()", $p.url)
);
}
}
else {
$pod.content.push:
pod-block("'$name' can be anything of the following"),
@ops.map({
if .origin -> $o {
pod-item(
pod-link(.human-kind, .url),
' from ',
pod-link($o.human-kind() ~ ' ' ~ $o.name, $o.url),
)
}
else {
pod-item( pod-link(.human-kind, .url) )
}
});
}
my $html = p2h($pod);
spurt "html/$name.html", $html;
}
}
sub write-operator-files($dr) {
say 'Writing operator files ...';
for $dr.lookup('operator', :by<kind>).list -> $doc {
my $what = $doc.subkind;
my $op = $doc.name;
my $pod = pod-with-title(
"$what.tclc() $op operator",
pod-block(
"Documentation for $what $op, extracted from ",
pod-link("the operators language documentation", "/language/operators")
),
@($doc.pod),
);
spurt "html/op/$what/$op.html", p2h($pod);
}
}
sub write-index-file($dr) {
say 'Writing html/index.html ...';
my %routine-seen;
my $pod = pod-with-title('Perl 6 Documentation',
Pod::Block::Para.new(
content => ['Official Perl 6 documentation'],
),
# TODO: add more
pod-heading("Language Documentation"),
$dr.lookup('language', :by<kind>).sort(*.name).map({
pod-item( pod-link(.name, .url) )
}),
pod-heading('Types'),
$dr.lookup('type', :by<kind>).sort(*.name).map({
pod-item(pod-link(.name, .url))
}),
pod-heading('Routines'),
$dr.lookup('routine', :by<kind>).sort(*.name).map({
next if %routine-seen{.name}++;
pod-item(pod-link(.name, .url))
}),
);
spurt 'html/index.html', p2h($pod);
}
sub write-routine-file($dr, $name) {
say "Writing html/routine/$name.html ..." if $*DEBUG;
my @docs = $dr.lookup($name, :by<name>).grep(*.kind eq 'routine');
my $subkind = 'routine';
{
my @subkinds = @docs>>.subkind;
$subkind = @subkinds[0] if all(@subkinds>>.defined) && [eq] @subkinds;
}
my $pod = pod-with-title("Documentation for $subkind $name",
pod-block("Documentation for $subkind $name, assembled from the
following types:"),
@docs.map({
pod-heading(.origin.name ~ '.' ~ .name),
pod-block("From ", pod-link(.origin.name, .origin.url ~ '#' ~ .name)),
.pod.list,
})
);
spurt "html/routine/$name.html", p2h($pod);
}
sub write-qualified-method-call(:$name!, :$pod!, :$type!) {
my $p = pod-with-title(
"Documentation for method $type.$name",
pod-block('From ', pod-link($type, "/type/{$type}#$name")),
@$pod,
);
spurt "html/{$type}.{$name}.html", p2h($p);
}
sub footer-html() {
state $dt = ~DateTime.now;
qq[
<div id="footer">
<p>
Generated on $dt from the sources at
<a href="https://github.com/perl6/doc">perl6/doc on github</a>.
</p>
<p>
This is a work in progress to document Perl 6, and known to be
incomplete. Your contribution is appreciated.
</p>
</div>
];
}

View File

@@ -0,0 +1,76 @@
use v6;
use Test;
# L<S02/Whitespace and Comments>
=begin kwid
= DESCRIPTION
Tests that the List quoting parser properly
ignores whitespace in lists. This becomes important
if your line endings are \x0d\x0a.
Characters that should be ignored are:
\t
\r
\n
\x20
Most likely there are more. James tells me that
the maximum Unicode char is \x10FFFF , so maybe
we should simply (re)construct the whitespace
list via IsSpace or \s on the fly.
Of course, in the parsed result, no item should
contain whitespace.
C<\xA0> is specifically a I<nonbreaking> whitespace
character and thus should B<not> break the list.
=end kwid
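# Illustrative example of the behaviour described above; not part of the original
# test plan, so it is kept in comments. Assuming EVAL is available, a quote-words
# list split by tabs or newlines should parse to the same elements as one split by
# plain spaces, while \xA0 must not act as a separator:
#
#   my @spaces   = EVAL '<a b c>';
#   my @newlines = EVAL "<a\nb\nc>";
#   say @spaces eqv @newlines;      # expected: True
#   say EVAL("<a\xa0b>").elems;     # expected per the description above: 1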
#?pugs emit if $?PUGS_BACKEND ne "BACKEND_PUGS" {
#?pugs emit skip_rest "PIL2JS and PIL-Run do not support EVAL() yet.";
#?pugs emit exit;
#?pugs emit }
my @list = <a b c d>;
my @separators = ("\t","\r","\n"," ");
my @nonseparators = (",","/","\\",";","\xa0");
plan +@separators + @nonseparators + 3;
for @separators -> $sep {
my $str = "<$sep" ~ @list.join("$sep$sep") ~ "$sep>";
my @res = EVAL $str;
my $vis = sprintf "%02x", ord $sep;
is( @res, @list, "'\\x$vis\\x$vis' is properly parsed as list whitespace")
};
for @nonseparators -> $sep {
my $ex = @list.join($sep);
my $str = "<" ~$ex~ ">";
my @res = EVAL $str;
my $vis = sprintf "%02x", ord $sep;
#?rakudo emit if $sep eq "\xa0" {
#?rakudo emit todo('\xa0 should not be a separator for list quotes');
#?rakudo emit };
#?niecza emit if $sep eq "\xa0" {
#?niecza emit todo('\xa0 should not be a separator for list quotes');
#?niecza emit };
is( @res, [@list.join($sep)], "'\\x$vis' does not split in a whitespace quoted list")
};
is < foo
>, 'foo', 'various combinations of whitespaces are stripped';
# RT #73772
isa_ok < >, Parcel, '< > (only whitespaces) is empty Parcel';
is < >.elems, 0, ".. and it's really empty";
# vim: ft=perl6

View File

@@ -0,0 +1,32 @@
use Test;
# stress test for lexicals and lexical subs
# See
# http://en.wikipedia.org/w/index.php?title=Man_or_boy_test&oldid=249795453#Perl
my @results = 1, 0, -2, 0, 1, 0, 1, -1, -10, -30;
# if we want to *really* stress-test, we can use a few more tests:
# my @results = 1, 0, -2, 0, 1, 0, 1, -1, -10, -30, -67, -138
# -291, -642, -1446, -3250, -7244, -16065, -35601, -78985;
plan +@results;
sub A($k is copy, &x1, &x2, &x3, &x4, &x5) {
my $B;
$B = sub (*@) { A(--$k, $B, &x1, &x2, &x3, &x4) };
if ($k <= 0) {
return x4($k, &x1, &x2, &x3, &x4, &x5)
+ x5($k, &x1, &x2, &x3, &x4, &x5);
}
return $B();
};
for 0 .. (@results-1) -> $i {
is A($i, sub (*@) {1}, sub (*@) {-1}, sub (*@) {-1}, sub (*@) {1}, sub (*@) {0}),
@results[$i],
"man-or-boy test for start value $i";
}
# vim: ft=perl6

39
samples/RAML/api.raml Normal file
View File

@@ -0,0 +1,39 @@
#%RAML 0.8
title: World Music API
baseUri: http://example.api.com/{version}
version: v1
traits:
- paged:
queryParameters:
pages:
description: The number of pages to return
type: number
- secured: !include http://raml-example.com/secured.yml
/songs:
is: [ paged, secured ]
get:
queryParameters:
genre:
description: filter the songs by genre
post:
/{songId}:
get:
responses:
200:
body:
application/json:
schema: |
{ "$schema": "http://json-schema.org/schema",
"type": "object",
"description": "A canonical song",
"properties": {
"title": { "type": "string" },
"artist": { "type": "string" }
},
"required": [ "title", "artist" ]
}
application/xml:
delete:
description: |
This method will *delete* an **individual song**

View File

@@ -0,0 +1,42 @@
PATH
remote: .
specs:
github-linguist (4.0.1)
charlock_holmes (~> 0.7.3)
escape_utils (~> 1.0.1)
mime-types (>= 1.19)
rugged (~> 0.22.0b1)
github-linguist-grammars (4.0.1)
GEM
remote: https://rubygems.org/
specs:
charlock_holmes (0.7.3)
coderay (1.1.0)
escape_utils (1.0.1)
metaclass (0.0.4)
method_source (0.8.2)
mime-types (2.4.3)
mocha (1.1.0)
metaclass (~> 0.0.1)
plist (3.1.0)
pry (0.10.1)
coderay (~> 1.1.0)
method_source (~> 0.8.1)
slop (~> 3.4)
rake (10.3.2)
rugged (0.22.0b1)
slop (3.6.0)
yajl-ruby (1.2.1)
PLATFORMS
ruby
DEPENDENCIES
github-linguist!
github-linguist-grammars!
mocha
plist (~> 3.1)
pry
rake
yajl-ruby

View File

@@ -1,2 +0,0 @@
#!/usr/bin/env python
puts "Not Python"

89
samples/XML/some-ideas.mm Normal file
View File

@@ -0,0 +1,89 @@
<map version="0.9.0">
<!-- To view this file, download free mind mapping software FreeMind from http://freemind.sourceforge.net -->
<node COLOR="#000000" CREATED="1385819664217" ID="ID_1105859543" MODIFIED="1385820134114" TEXT="Some ideas on demexp">
<font NAME="SansSerif" SIZE="20"/>
<hook NAME="accessories/plugins/AutomaticLayout.properties"/>
<node COLOR="#0033ff" CREATED="1385819753503" ID="ID_1407588370" MODIFIED="1385819767173" POSITION="right" TEXT="User Interface">
<edge STYLE="sharp_bezier" WIDTH="8"/>
<font NAME="SansSerif" SIZE="18"/>
<node COLOR="#00b439" CREATED="1385819771320" ID="ID_1257512743" MODIFIED="1385819783131" TEXT="Text file">
<edge STYLE="bezier" WIDTH="thin"/>
<font NAME="SansSerif" SIZE="16"/>
</node>
<node COLOR="#00b439" CREATED="1385819783831" ID="ID_997633499" MODIFIED="1385819786761" TEXT="Web">
<edge STYLE="bezier" WIDTH="thin"/>
<font NAME="SansSerif" SIZE="16"/>
</node>
<node COLOR="#00b439" CREATED="1385819787041" ID="ID_204106158" MODIFIED="1385819794885" TEXT="Graphical interface">
<edge STYLE="bezier" WIDTH="thin"/>
<font NAME="SansSerif" SIZE="16"/>
</node>
<node COLOR="#00b439" CREATED="1385819795339" ID="ID_768498137" MODIFIED="1385819800338" TEXT="Email">
<edge STYLE="bezier" WIDTH="thin"/>
<font NAME="SansSerif" SIZE="16"/>
</node>
<node COLOR="#00b439" CREATED="1385819801043" ID="ID_1660630451" MODIFIED="1385819802441" TEXT="SMS">
<edge STYLE="bezier" WIDTH="thin"/>
<font NAME="SansSerif" SIZE="16"/>
</node>
</node>
<node COLOR="#0033ff" CREATED="1385819872899" ID="ID_281388957" MODIFIED="1385819878444" POSITION="left" TEXT="Cordorcet voting module">
<edge STYLE="sharp_bezier" WIDTH="8"/>
<font NAME="SansSerif" SIZE="18"/>
<node COLOR="#00b439" CREATED="1385819880540" ID="ID_1389666909" MODIFIED="1385819948101" TEXT="Input">
<edge STYLE="bezier" WIDTH="thin"/>
<font NAME="SansSerif" SIZE="16"/>
<node COLOR="#990000" CREATED="1385819893834" ID="ID_631111389" MODIFIED="1385819901697" TEXT="Number of votes">
<font NAME="SansSerif" SIZE="14"/>
</node>
<node COLOR="#990000" CREATED="1385819902442" ID="ID_838201093" MODIFIED="1385819910452" TEXT="Number of possible choices">
<font NAME="SansSerif" SIZE="14"/>
</node>
<node COLOR="#990000" CREATED="1385819910703" ID="ID_1662888975" MODIFIED="1385819933316" TEXT="For a vote: number of votes and list of choices">
<font NAME="SansSerif" SIZE="14"/>
</node>
</node>
<node COLOR="#00b439" CREATED="1385819949027" ID="ID_1504837261" MODIFIED="1385819952198" TEXT="Format">
<edge STYLE="bezier" WIDTH="thin"/>
<font NAME="SansSerif" SIZE="16"/>
<node COLOR="#990000" CREATED="1385819955105" ID="ID_647722151" MODIFIED="1385819962151" TEXT="A single file?">
<font NAME="SansSerif" SIZE="14"/>
</node>
<node COLOR="#990000" CREATED="1385819962642" ID="ID_1374756253" MODIFIED="1385819976939" TEXT="Several files (parameters + 1 per vote)?">
<font NAME="SansSerif" SIZE="14"/>
</node>
<node COLOR="#990000" CREATED="1385819977578" ID="ID_979556559" MODIFIED="1385819984309" TEXT="JSON?">
<font NAME="SansSerif" SIZE="14"/>
</node>
</node>
</node>
<node COLOR="#0033ff" CREATED="1385820005408" ID="ID_1886566753" MODIFIED="1385820009909" POSITION="right" TEXT="Technologies">
<edge STYLE="sharp_bezier" WIDTH="8"/>
<font NAME="SansSerif" SIZE="18"/>
<node COLOR="#00b439" CREATED="1385820011913" ID="ID_1291489552" MODIFIED="1385820014698" TEXT="SPARK 2014">
<edge STYLE="bezier" WIDTH="thin"/>
<font NAME="SansSerif" SIZE="16"/>
</node>
<node COLOR="#00b439" CREATED="1385820015481" ID="ID_1825929484" MODIFIED="1385820017935" TEXT="Frama-C">
<edge STYLE="bezier" WIDTH="thin"/>
<font NAME="SansSerif" SIZE="16"/>
</node>
<node COLOR="#00b439" CREATED="1385820018603" ID="ID_253774957" MODIFIED="1385820027363" TEXT="Why3 -&gt; OCaml">
<edge STYLE="bezier" WIDTH="thin"/>
<font NAME="SansSerif" SIZE="16"/>
</node>
</node>
<node COLOR="#0033ff" CREATED="1385820136808" ID="ID_1002115371" MODIFIED="1385820139813" POSITION="left" TEXT="Vote storage">
<edge STYLE="sharp_bezier" WIDTH="8"/>
<font NAME="SansSerif" SIZE="18"/>
<node COLOR="#00b439" CREATED="1385820141400" ID="ID_1882609124" MODIFIED="1385820145261" TEXT="Database">
<edge STYLE="bezier" WIDTH="thin"/>
<font NAME="SansSerif" SIZE="16"/>
</node>
<node COLOR="#00b439" CREATED="1385820146138" ID="ID_1771403777" MODIFIED="1385820154334" TEXT="Text file (XML?)">
<edge STYLE="bezier" WIDTH="thin"/>
<font NAME="SansSerif" SIZE="16"/>
</node>
</node>
</node>
</map>

221
script/download-grammars Executable file
View File

@@ -0,0 +1,221 @@
#!/usr/bin/env ruby
require 'json'
require 'net/http'
require 'plist'
require 'set'
require 'tmpdir'
require 'uri'
require 'yaml'
GRAMMARS_PATH = File.expand_path("../../grammars", __FILE__)
SOURCES_FILE = File.expand_path("../../grammars.yml", __FILE__)
CSONC = File.expand_path("../../node_modules/.bin/csonc", __FILE__)
class TarballPackage
def self.fetch(tmp_dir, url)
`curl --silent --location --max-time 10 --output "#{tmp_dir}/archive" "#{url}"`
raise "Failed to fetch GH package: #{url} #{$?.to_s}" unless $?.success?
output = File.join(tmp_dir, 'extracted')
Dir.mkdir(output)
`tar -C "#{output}" -xf "#{tmp_dir}/archive"`
raise "Failed to uncompress tarball: #{tmp_dir}/archive (from #{url}) #{$?.to_s}" unless $?.success?
Dir["#{output}/**/*"].select do |path|
case File.extname(path.downcase)
when '.plist'
path.split('/')[-2] == 'Syntaxes'
when '.tmlanguage'
true
when '.cson'
path.split('/')[-2] == 'grammars'
else
false
end
end
end
attr_reader :url
def initialize(url)
@url = url
end
def fetch(tmp_dir)
self.class.fetch(tmp_dir, url)
end
end
class SingleGrammar
attr_reader :url
def initialize(url)
@url = url
end
def fetch(tmp_dir)
filename = File.join(tmp_dir, File.basename(url))
`curl --silent --location --max-time 10 --output "#{filename}" "#{url}"`
raise "Failed to fetch grammar: #{url}: #{$?.to_s}" unless $?.success?
[filename]
end
end
class SVNPackage
attr_reader :url
def initialize(url)
@url = url
end
def fetch(tmp_dir)
`svn export -q "#{url}/Syntaxes" "#{tmp_dir}/Syntaxes"`
raise "Failed to export SVN repository: #{url}: #{$?.to_s}" unless $?.success?
Dir["#{tmp_dir}/Syntaxes/*.{plist,tmLanguage,tmlanguage}"]
end
end
class GitHubPackage
def self.parse_url(url)
url, ref = url.split("@", 2)
path = URI.parse(url).path.split('/')
[path[1], path[2].chomp('.git'), ref || "master"]
end
attr_reader :user
attr_reader :repo
attr_reader :ref
def initialize(url)
@user, @repo, @ref = self.class.parse_url(url)
end
def url
suffix = "@#{ref}" unless ref == "master"
"https://github.com/#{user}/#{repo}#{suffix}"
end
def fetch(tmp_dir)
url = "https://github.com/#{user}/#{repo}/archive/#{ref}.tar.gz"
TarballPackage.fetch(tmp_dir, url)
end
end
def load_grammar(path)
case File.extname(path.downcase)
when '.plist', '.tmlanguage'
Plist::parse_xml(path)
when '.cson'
cson = `"#{CSONC}" "#{path}"`
raise "Failed to convert CSON grammar '#{path}': #{$?.to_s}" unless $?.success?
JSON.parse(cson)
else
raise "Invalid document type #{path}"
end
end
def install_grammar(tmp_dir, source, all_scopes)
p = if source.end_with?('.tmLanguage', '.plist')
SingleGrammar.new(source)
elsif source.start_with?('https://github.com')
GitHubPackage.new(source)
elsif source.start_with?('http://svn.textmate.org')
SVNPackage.new(source)
elsif source.end_with?('.tar.gz')
TarballPackage.new(source)
else
nil
end
raise "Unsupported source: #{source}" unless p
installed = []
p.fetch(tmp_dir).each do |path|
grammar = load_grammar(path)
scope = grammar['scopeName']
if all_scopes.key?(scope)
$stderr.puts "WARN: Duplicated scope #{scope}\n" +
" Current package: #{p.url}\n" +
" Previous package: #{all_scopes[scope]}"
next
end
File.write(File.join(GRAMMARS_PATH, "#{scope}.json"), JSON.pretty_generate(grammar))
all_scopes[scope] = p.url
installed << scope
end
$stderr.puts("OK #{p.url} (#{installed.join(', ')})")
end
def run_thread(queue, all_scopes)
Dir.mktmpdir do |tmpdir|
loop do
source, index = begin
queue.pop(true)
rescue ThreadError
# The queue is empty.
break
end
dir = "#{tmpdir}/#{index}"
Dir.mkdir(dir)
install_grammar(dir, source, all_scopes)
end
end
end
def generate_yaml(all_scopes, base)
yaml = all_scopes.each_with_object(base) do |(key,value),out|
out[value] ||= []
out[value] << key
end
yaml = yaml.sort.to_h
yaml.each { |k, v| v.sort! }
yaml
end
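# Illustrative example, not part of the original script (repository URLs and scopes
# below are made up): generate_yaml inverts the scope-to-repo map built during the
# download into the repo-to-scopes shape used by grammars.yml, sorting both the repos
# and each scope list.
#
#   all_scopes = { "source.foo"        => "https://github.com/example/foo-tmbundle",
#                  "source.foo.config" => "https://github.com/example/foo-tmbundle" }
#   generate_yaml(all_scopes, {})
#   # => { "https://github.com/example/foo-tmbundle" => ["source.foo", "source.foo.config"] }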
def main(sources)
begin
Dir.mkdir(GRAMMARS_PATH)
rescue Errno::EEXIST
end
`npm install`
all_scopes = {}
if ARGV[0] == '--add'
Dir.mktmpdir do |tmpdir|
install_grammar(tmpdir, ARGV[1], all_scopes)
end
generate_yaml(all_scopes, sources)
else
queue = Queue.new
sources.each do |url, scopes|
queue.push([url, queue.length])
end
threads = 8.times.map do
Thread.new { run_thread(queue, all_scopes) }
end
threads.each(&:join)
generate_yaml(all_scopes, {})
end
end
sources = File.open(SOURCES_FILE) do |file|
YAML.load(file)
end
yaml = main(sources)
File.write(SOURCES_FILE, YAML.dump(yaml))
$stderr.puts("Done")

57
script/prune-grammars Executable file
View File

@@ -0,0 +1,57 @@
#!/usr/bin/env ruby
require "json"
require "linguist"
require "set"
require "yaml"
def find_includes(json)
case json
when Hash
result = []
if inc = json["include"]
result << inc.split("#", 2).first unless inc.start_with?("#", "$")
end
result + json.values.flat_map { |v| find_includes(v) }
when Array
json.flat_map { |v| find_includes(v) }
else
[]
end
end
def transitive_includes(scope, includes)
scopes = Set.new
queue = includes[scope] || []
while s = queue.shift
next if scopes.include?(s)
scopes << s
queue += includes[s] || []
end
scopes
end
includes = {}
Dir["grammars/*.json"].each do |path|
scope = File.basename(path).sub(/\.json/, '')
json = JSON.load(File.read(path))
incs = find_includes(json)
next if incs.empty?
includes[scope] ||= []
includes[scope] += incs
end
yaml = YAML.load(File.read("grammars.yml"))
language_scopes = Linguist::Language.all.map(&:tm_scope).to_set
# The set of used scopes is the scopes for each language, plus all the scopes
# they include, transitively.
used_scopes = language_scopes + language_scopes.flat_map { |s| transitive_includes(s, includes).to_a }.to_set
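# Illustrative example, not part of the original script (scope names are made up):
# if the grammars declared
#
#   includes = { "source.a" => ["source.b"], "source.b" => ["source.c"] }
#
# then transitive_includes("source.a", includes) returns a Set of "source.b" and
# "source.c", so a language whose tm_scope is "source.a" also keeps those two
# grammars from being flagged as unused below.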
unused = yaml.reject { |repo, scopes| scopes.any? { |scope| used_scopes.include?(scope) } }
puts "Unused grammar repos"
puts unused.map { |repo, scopes| sprintf("%-100s %s", repo, scopes.join(", ")) }.sort.join("\n")
yaml.delete_if { |k| unused.key?(k) }
File.write("grammars.yml", YAML.dump(yaml))

22
test/fixtures/Python/run_tests.module vendored Normal file
View File

@@ -0,0 +1,22 @@
#!/usr/bin/env python
import sys, os
# Set the current working directory to the directory where this script is located
os.chdir(os.path.abspath(os.path.dirname(sys.argv[0])))
#### Set the name of the application here and moose directory relative to the application
app_name = 'stork'
MODULE_DIR = os.path.abspath('..')
MOOSE_DIR = os.path.abspath(os.path.join(MODULE_DIR, '..'))
#### See if MOOSE_DIR is already in the environment instead
if os.environ.has_key("MOOSE_DIR"):
MOOSE_DIR = os.environ['MOOSE_DIR']
sys.path.append(os.path.join(MOOSE_DIR, 'python'))
import path_tool
path_tool.activate_module('TestHarness')
from TestHarness import TestHarness
# Run the tests!
TestHarness.buildAndRun(sys.argv, app_name, MOOSE_DIR)

1505
test/fixtures/Shell/mintleaf.module vendored Normal file

File diff suppressed because it is too large

4
test/helper.rb Normal file
View File

@@ -0,0 +1,4 @@
require "bundler/setup"
require "test/unit"
require "mocha/setup"
require "linguist"

View File

@@ -1,16 +1,8 @@
-require 'linguist/file_blob'
-require 'linguist/samples'
-require 'test/unit'
-require 'mocha/setup'
-require 'mime/types'
-require 'pygments'
+require_relative "./helper"

 class TestBlob < Test::Unit::TestCase
   include Linguist

-  Lexer = Pygments::Lexer

   def setup
     # git blobs are normally loaded as ASCII-8BIT since they may contain data
     # with arbitrary encoding not known ahead of time
@@ -196,8 +188,8 @@ class TestBlob < Test::Unit::TestCase
     assert blob("Binary/MainMenu.nib").generated?
     assert !blob("XML/project.pbxproj").generated?

-    # Gemfile.locks
-    assert blob("Gemfile.lock").generated?
+    # Gemfile.lock is NOT generated
+    assert !blob("Gemfile.lock").generated?

     # Generated .NET Docfiles
     assert blob("XML/net_docfile.xml").generated?
@@ -229,7 +221,6 @@ class TestBlob < Test::Unit::TestCase
     assert !blob("PostScript/sierpinski.ps").generated?

     # These examples are too basic to tell
-    assert !blob("JavaScript/empty.js").generated?
     assert !blob("JavaScript/hello.js").generated?

     assert blob("JavaScript/intro-old.js").generated?
@@ -301,6 +292,13 @@ class TestBlob < Test::Unit::TestCase
     assert blob("deps/http_parser/http_parser.c").vendored?
     assert blob("deps/v8/src/v8.h").vendored?

+    # Chart.js
+    assert blob("some/vendored/path/Chart.js").vendored?
+    assert !blob("some/vendored/path/chart.js").vendored?
+
+    # Codemirror deps
+    assert blob("codemirror/mode/blah.js").vendored?

     # Debian packaging
     assert blob("debian/cron.d").vendored?
@@ -467,26 +465,37 @@ class TestBlob < Test::Unit::TestCase
       assert blob.language, "No language for #{sample[:path]}"
       assert_equal sample[:language], blob.language.name, blob.name
     end
+
+    # Test language detection for files which shouldn't be used as samples
+    root = File.expand_path('../fixtures', __FILE__)
+    Dir.entries(root).each do |language|
+      next unless File.file?(language)
+      # Each directory contains test files of a language
+      dirname = File.join(root, language)
+      Dir.entries(dirname).each do |filename|
+        next unless File.file?(filename)
+        # By default blob searches for the file in the samples;
+        # thus, we need to give it the absolute path
+        filepath = File.join(dirname, filename)
+        blob = blob(filepath)
+        assert blob.language, "No language for #{filepath}"
+        assert_equal language, blob.language.name, blob.name
+      end
+    end
   end

-  def test_lexer
-    assert_equal Lexer['Ruby'], blob("Ruby/foo.rb").lexer
+  def test_minified_files_not_safe_to_highlight
+    assert !blob("JavaScript/jquery-1.6.1.min.js").safe_to_colorize?
   end

-  def test_colorize
-    assert_equal <<-HTML.chomp, blob("Ruby/foo.rb").colorize
-<div class="highlight"><pre><span class="k">module</span> <span class="nn">Foo</span>
-<span class="k">end</span>
-</pre></div>
-    HTML
-  end
-
-  def test_colorize_does_skip_minified_files
-    assert_nil blob("JavaScript/jquery-1.6.1.min.js").colorize
-  end
-
-  # Pygments.rb was taking exceeding long on this particular file
-  def test_colorize_doesnt_blow_up_with_files_with_high_ratio_of_long_lines
-    assert_nil blob("JavaScript/steelseries-min.js").colorize
+  def test_empty
+    blob = Struct.new(:data) { include Linguist::BlobHelper }
+    assert blob.new("").empty?
+    assert blob.new(nil).empty?
+    refute blob.new(" ").empty?
+    refute blob.new("nope").empty?
   end
 end

View File

@@ -1,9 +1,4 @@
-require 'linguist/classifier'
-require 'linguist/language'
-require 'linguist/samples'
-require 'linguist/tokenizer'
-require 'test/unit'
+require_relative "./helper"

 class TestClassifier < Test::Unit::TestCase
   include Linguist

10
test/test_file_blob.rb Normal file
View File

@@ -0,0 +1,10 @@
require 'linguist/file_blob'
require 'test/unit'
class TestFileBlob < Test::Unit::TestCase
def test_extensions
assert_equal [".gitignore"], Linguist::FileBlob.new(".gitignore").extensions
assert_equal [".xml"], Linguist::FileBlob.new("build.xml").extensions
assert_equal [".html.erb", ".erb"], Linguist::FileBlob.new("dotted.dir/index.html.erb").extensions
end
end

View File

@@ -1,9 +1,4 @@
require 'linguist/heuristics' require_relative "./helper"
require 'linguist/language'
require 'linguist/samples'
require 'linguist/file_blob'
require 'test/unit'
class TestHeuristcs < Test::Unit::TestCase class TestHeuristcs < Test::Unit::TestCase
include Linguist include Linguist
@@ -16,23 +11,29 @@ class TestHeuristcs < Test::Unit::TestCase
     File.read(File.join(samples_path, name))
   end
+  def file_blob(name)
+    path = File.exist?(name) ? name : File.join(samples_path, name)
+    FileBlob.new(path)
+  end
   def all_fixtures(language_name, file="*")
     Dir.glob("#{samples_path}/#{language_name}/#{file}")
   end
+  # Candidate languages = ["C++", "Objective-C"]
   def test_obj_c_by_heuristics
-    languages = ["C++", "Objective-C"]
     # Only calling out '.h' filenames as these are the ones causing issues
-    all_fixtures("Objective-C", "*.h").each do |fixture|
-      results = Heuristics.disambiguate_c(fixture("Objective-C/#{File.basename(fixture)}"), languages)
-      assert_equal Language["Objective-C"], results.first
-    end
+    assert_heuristics({
+      "Objective-C" => all_fixtures("Objective-C", "*.h"),
+      "C++" => ["C++/render_adapter.cpp", "C++/ThreadedQueue.h"],
+      "C" => nil
+    })
   end
-  def test_cpp_by_heuristics
-    languages = ["C++", "Objective-C"]
-    results = Heuristics.disambiguate_c(fixture("C++/render_adapter.cpp"), languages)
-    assert_equal Language["C++"], results.first
+  def test_c_by_heuristics
+    languages = [Language["C++"], Language["Objective-C"], Language["C"]]
+    results = Heuristics.call(file_blob("C/ArrowLeft.h"), languages)
+    assert_equal [], results
   end
   def test_detect_still_works_if_nothing_matches
@@ -41,85 +42,91 @@ class TestHeuristcs < Test::Unit::TestCase
assert_equal Language["Objective-C"], match assert_equal Language["Objective-C"], match
end end
def test_pl_prolog_by_heuristics # Candidate languages = ["Perl", "Prolog"]
languages = ["Perl", "Prolog"] def test_pl_prolog_perl_by_heuristics
results = Heuristics.disambiguate_pl(fixture("Prolog/turing.pl"), languages) assert_heuristics({
assert_equal Language["Prolog"], results.first "Prolog" => "Prolog/turing.pl",
end "Perl" => "Perl/perl-test.t",
})
def test_pl_perl_by_heuristics
languages = ["Perl", "Prolog"]
results = Heuristics.disambiguate_pl(fixture("Perl/perl-test.t"), languages)
assert_equal Language["Perl"], results.first
end end
# Candidate languages = ["ECL", "Prolog"]
def test_ecl_prolog_by_heuristics def test_ecl_prolog_by_heuristics
languages = ["ECL", "Prolog"] assert_heuristics({
results = Heuristics.disambiguate_ecl(fixture("Prolog/or-constraint.ecl"), languages) "ECL" => "ECL/sample.ecl",
assert_equal Language["Prolog"], results.first "Prolog" => "Prolog/or-constraint.ecl"
})
end end
def test_ecl_ecl_by_heuristics # Candidate languages = ["IDL", "Prolog"]
languages = ["ECL", "Prolog"] def test_pro_prolog_idl_by_heuristics
results = Heuristics.disambiguate_ecl(fixture("ECL/sample.ecl"), languages) assert_heuristics({
assert_equal Language["ECL"], results.first "Prolog" => "Prolog/logic-problem.pro",
end "IDL" => "IDL/mg_acosh.pro"
})
def test_pro_prolog_by_heuristics
languages = ["IDL", "Prolog"]
results = Heuristics.disambiguate_pro(fixture("Prolog/logic-problem.pro"), languages)
assert_equal Language["Prolog"], results.first
end
def test_pro_idl_by_heuristics
languages = ["IDL", "Prolog"]
results = Heuristics.disambiguate_pro(fixture("IDL/mg_acosh.pro"), languages)
assert_equal Language["IDL"], results.first
end end
# Candidate languages = ["AGS Script", "AsciiDoc"]
def test_asc_asciidoc_by_heuristics def test_asc_asciidoc_by_heuristics
languages = ["AGS Script", "AsciiDoc"] assert_heuristics({
results = Heuristics.disambiguate_asc(fixture("AsciiDoc/list.asc"), languages) "AsciiDoc" => "AsciiDoc/list.asc",
assert_equal Language["AsciiDoc"], results.first "AGS Script" => nil
end })
def test_ts_typescript_by_heuristics
languages = ["TypeScript", "XML"]
results = Heuristics.disambiguate_ts(fixture("TypeScript/classes.ts"), languages)
assert_equal Language["TypeScript"], results.first
end
def test_ts_xml_by_heuristics
languages = ["TypeScript", "XML"]
results = Heuristics.disambiguate_ts(fixture("XML/pt_BR.xml"), languages)
assert_equal Language["XML"], results.first
end end
def test_cl_by_heuristics def test_cl_by_heuristics
languages = ["Common Lisp", "OpenCL"] assert_heuristics({
languages.each do |language| "Common Lisp" => all_fixtures("Common Lisp"),
all_fixtures(language).each do |fixture| "OpenCL" => all_fixtures("OpenCL")
results = Heuristics.disambiguate_cl(fixture("#{language}/#{File.basename(fixture)}"), languages) })
assert_equal Language[language], results.first end
def test_f_by_heuristics
assert_heuristics({
"FORTRAN" => all_fixtures("FORTRAN"),
"Forth" => all_fixtures("Forth")
})
end
# Candidate languages = ["Hack", "PHP"]
def test_hack_by_heuristics
assert_heuristics({
"Hack" => "Hack/funs.php",
"PHP" => "PHP/Model.php"
})
end
# Candidate languages = ["Scala", "SuperCollider"]
def test_sc_supercollider_scala_by_heuristics
assert_heuristics({
"SuperCollider" => "SuperCollider/WarpPreset.sc",
"Scala" => "Scala/node11.sc"
})
end
def test_fs_by_heuristics
assert_heuristics({
"F#" => all_fixtures("F#"),
"Forth" => all_fixtures("Forth"),
"GLSL" => all_fixtures("GLSL")
})
end
def assert_heuristics(hash)
candidates = hash.keys.map { |l| Language[l] }
hash.each do |language, blobs|
Array(blobs).each do |blob|
result = Heuristics.call(file_blob(blob), candidates)
assert_equal [Language[language]], result
end end
end end
end end
def test_hack_by_heuristics def test_ls_by_heuristics
languages = ["Hack", "PHP"] assert_heuristics({
results = Heuristics.disambiguate_hack(fixture("Hack/funs.php"), languages) "LiveScript" => "LiveScript/hello.ls",
assert_equal Language["Hack"], results.first "LoomScript" => "LoomScript/HelloWorld.ls"
end })
def test_sc_supercollider_by_heuristics
languages = ["Scala", "SuperCollider"]
results = Heuristics.disambiguate_sc(fixture("SuperCollider/WarpPreset.sc"), languages)
assert_equal Language["SuperCollider"], results.first
end
def test_sc_scala_by_heuristics
languages = ["Scala", "SuperCollider"]
results = Heuristics.disambiguate_sc(fixture("Scala/node11.sc"), languages)
assert_equal Language["Scala"], results.first
end end
end end
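The assert_heuristics helper introduced here turns each disambiguation case into a hash from expected language to fixture path(s); a nil value only adds that language to the candidate set without asserting any fixture for it. Adding coverage for some future ambiguous extension would then look roughly like the sketch below; the language names "Foo"/"Bar"/"Baz" and the fixture paths are invented for illustration, not real Linguist languages or samples:

# Hypothetical future test using the helper above.
def test_xyz_by_heuristics
  assert_heuristics({
    "Foo" => "Foo/example.xyz",
    "Bar" => all_fixtures("Bar", "*.xyz"),
    "Baz" => nil  # candidate language with no fixtures asserted for it
  })
end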

test/test_language.rb

@@ -1,64 +1,8 @@
-require 'linguist/language'
-require 'test/unit'
-require 'pygments'
+require_relative "./helper"
 class TestLanguage < Test::Unit::TestCase
   include Linguist
-  Lexer = Pygments::Lexer
-  def test_lexer
-    assert_equal Lexer['ActionScript 3'], Language['ActionScript'].lexer
-    assert_equal Lexer['AspectJ'], Language['AspectJ'].lexer
-    assert_equal Lexer['Bash'], Language['Gentoo Ebuild'].lexer
-    assert_equal Lexer['Bash'], Language['Gentoo Eclass'].lexer
-    assert_equal Lexer['Bash'], Language['Shell'].lexer
-    assert_equal Lexer['C'], Language['OpenCL'].lexer
-    assert_equal Lexer['C'], Language['XS'].lexer
-    assert_equal Lexer['C++'], Language['C++'].lexer
-    assert_equal Lexer['Chapel'], Language['Chapel'].lexer
-    assert_equal Lexer['Coldfusion HTML'], Language['ColdFusion'].lexer
-    assert_equal Lexer['Coq'], Language['Coq'].lexer
-    assert_equal Lexer['FSharp'], Language['F#'].lexer
-    assert_equal Lexer['FSharp'], Language['F#'].lexer
-    assert_equal Lexer['Fortran'], Language['FORTRAN'].lexer
-    assert_equal Lexer['Gherkin'], Language['Cucumber'].lexer
-    assert_equal Lexer['Groovy'], Language['Groovy'].lexer
-    assert_equal Lexer['HTML'], Language['HTML'].lexer
-    assert_equal Lexer['HTML+Django/Jinja'], Language['HTML+Django'].lexer
-    assert_equal Lexer['HTML+PHP'], Language['HTML+PHP'].lexer
-    assert_equal Lexer['HTTP'], Language['HTTP'].lexer
-    assert_equal Lexer['JSON'], Language['JSON'].lexer
-    assert_equal Lexer['Java'], Language['ChucK'].lexer
-    assert_equal Lexer['Java'], Language['Java'].lexer
-    assert_equal Lexer['JavaScript'], Language['JavaScript'].lexer
-    assert_equal Lexer['LSL'], Language['LSL'].lexer
-    assert_equal Lexer['MOOCode'], Language['Moocode'].lexer
-    assert_equal Lexer['MuPAD'], Language['mupad'].lexer
-    assert_equal Lexer['NASM'], Language['Assembly'].lexer
-    assert_equal Lexer['OCaml'], Language['OCaml'].lexer
-    assert_equal Lexer['Ooc'], Language['ooc'].lexer
-    assert_equal Lexer['OpenEdge ABL'], Language['OpenEdge ABL'].lexer
-    assert_equal Lexer['REBOL'], Language['Rebol'].lexer
-    assert_equal Lexer['RHTML'], Language['HTML+ERB'].lexer
-    assert_equal Lexer['RHTML'], Language['RHTML'].lexer
-    assert_equal Lexer['Ruby'], Language['Crystal'].lexer
-    assert_equal Lexer['Ruby'], Language['Mirah'].lexer
-    assert_equal Lexer['Ruby'], Language['Ruby'].lexer
-    assert_equal Lexer['S'], Language['R'].lexer
-    assert_equal Lexer['Scheme'], Language['Nu'].lexer
-    assert_equal Lexer['Racket'], Language['Racket'].lexer
-    assert_equal Lexer['Scheme'], Language['Scheme'].lexer
-    assert_equal Lexer['Standard ML'], Language['Standard ML'].lexer
-    assert_equal Lexer['TeX'], Language['TeX'].lexer
-    assert_equal Lexer['Verilog'], Language['Verilog'].lexer
-    assert_equal Lexer['XSLT'], Language['XSLT'].lexer
-    assert_equal Lexer['aspx-vb'], Language['ASP'].lexer
-    assert_equal Lexer['haXe'], Language['Haxe'].lexer
-    assert_equal Lexer['reStructuredText'], Language['reStructuredText'].lexer
-  end
   def test_find_by_alias
     assert_equal Language['ASP'], Language.find_by_alias('asp')
     assert_equal Language['ASP'], Language.find_by_alias('aspx')
@@ -119,6 +63,7 @@ class TestLanguage < Test::Unit::TestCase
     assert_equal Language['VimL'], Language.find_by_alias('viml')
     assert_equal Language['reStructuredText'], Language.find_by_alias('rst')
     assert_equal Language['YAML'], Language.find_by_alias('yml')
+    assert_nil Language.find_by_alias(nil)
   end
   def test_groups
@@ -193,6 +138,7 @@ class TestLanguage < Test::Unit::TestCase
     assert_equal :programming, Language['Python'].type
     assert_equal :programming, Language['Ruby'].type
     assert_equal :programming, Language['TypeScript'].type
+    assert_equal :programming, Language['Makefile'].type
   end
   def test_markup
@@ -211,7 +157,6 @@ class TestLanguage < Test::Unit::TestCase
   def test_other
     assert_nil Language['Brainfuck'].type
-    assert_nil Language['Makefile'].type
   end
   def test_searchable
@@ -221,6 +166,7 @@ class TestLanguage < Test::Unit::TestCase
   end
   def test_find_by_name
+    assert_nil Language.find_by_name(nil)
     ruby = Language['Ruby']
     assert_equal ruby, Language.find_by_name('Ruby')
   end
@@ -277,34 +223,21 @@ class TestLanguage < Test::Unit::TestCase
     assert_equal [Language['Chapel']], Language.find_by_filename('examples/hello.chpl')
   end
-  def test_find_by_shebang
-    assert_equal 'ruby', Linguist.interpreter_from_shebang("#!/usr/bin/ruby\n# baz")
-    { [] => ["",
-             "foo",
-             "#bar",
-             "#baz",
-             "///",
-             "\n\n\n\n\n",
-             " #!/usr/sbin/ruby",
-             "\n#!/usr/sbin/ruby"],
-      ['Ruby'] => ["#!/usr/bin/env ruby\n# baz",
-                   "#!/usr/sbin/ruby\n# bar",
-                   "#!/usr/bin/ruby\n# foo",
-                   "#!/usr/sbin/ruby",
-                   "#!/usr/sbin/ruby foo bar baz\n"],
-      ['R'] => ["#!/usr/bin/env Rscript\n# example R script\n#\n"],
-      ['Shell'] => ["#!/usr/bin/bash\n", "#!/bin/sh"],
-      ['Python'] => ["#!/bin/python\n# foo\n# bar\n# baz",
-                     "#!/usr/bin/python2.7\n\n\n\n",
-                     "#!/usr/bin/python3\n\n\n\n"],
-      ["Common Lisp"] => ["#!/usr/bin/sbcl --script\n\n"]
-    }.each do |languages, bodies|
-      bodies.each do |body|
-        assert_equal([body, languages.map{|l| Language[l]}],
-                     [body, Language.find_by_shebang(body)])
-      end
+  def test_find_by_interpreter
+    {
+      "ruby" => "Ruby",
+      "Rscript" => "R",
+      "sh" => "Shell",
+      "bash" => "Shell",
+      "python" => "Python",
+      "python2" => "Python",
+      "python3" => "Python",
+      "sbcl" => "Common Lisp"
+    }.each do |interpreter, language|
+      assert_equal [Language[language]], Language.find_by_interpreter(interpreter)
     end
+    assert_equal [], Language.find_by_interpreter(nil)
   end
   def test_find
@@ -317,6 +250,7 @@ class TestLanguage < Test::Unit::TestCase
     assert_equal 'C#', Language['c#'].name
     assert_equal 'C#', Language['csharp'].name
     assert_nil Language['defunkt']
+    assert_nil Language[nil]
   end
   def test_find_ignores_case
@@ -401,12 +335,6 @@ class TestLanguage < Test::Unit::TestCase
     assert_equal '.coffee', Language['CoffeeScript'].primary_extension
     assert_equal '.t', Language['Turing'].primary_extension
     assert_equal '.ts', Language['TypeScript'].primary_extension
-    # This is a nasty requirement, but there's some code in GitHub that
-    # expects this. Really want to drop this.
-    Language.all.each do |language|
-      assert language.primary_extension, "#{language} has no primary extension"
-    end
   end
   def test_eql
@@ -418,21 +346,14 @@ class TestLanguage < Test::Unit::TestCase
     assert !Language.by_type(:prose).nil?
   end
-  def test_colorize
-    assert_equal <<-HTML.chomp, Language['Ruby'].colorize("def foo\n 'foo'\nend\n")
-<div class="highlight"><pre><span class="k">def</span> <span class="nf">foo</span>
-<span class="s1">&#39;foo&#39;</span>
-<span class="k">end</span>
-</pre></div>
-    HTML
-  end
-  def test_colorize_with_options
-    assert_equal <<-HTML.chomp, Language['Ruby'].colorize("def foo\n 'foo'\nend\n", :options => { :cssclass => "highlight highlight-ruby" })
-<div class="highlight highlight-ruby"><pre><span class="k">def</span> <span class="nf">foo</span>
-<span class="s1">&#39;foo&#39;</span>
-<span class="k">end</span>
-</pre></div>
-    HTML
-  end
+  def test_all_languages_have_grammars
+    scopes = YAML.load(File.read(File.expand_path("../../grammars.yml", __FILE__))).values.flatten
+    missing = Language.all.reject { |language| language.tm_scope == "none" || scopes.include?(language.tm_scope) }
+    message = "The following languages' scopes are not listed in grammars.yml. Please add grammars for all new languages.\n"
+    message << "If no grammar exists for a language, mark the language with `tm_scope: none` in lib/linguist/languages.yml.\n"
+    width = missing.map { |language| language.name.length }.max
+    message << missing.map { |language| sprintf("%-#{width}s %s", language.name, language.tm_scope) }.sort.join("\n")
+    assert missing.empty?, message
+  end
 end
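The test_find_by_interpreter hunk above documents the new Language.find_by_interpreter lookup, which replaces find_by_shebang. A caller that only holds a raw shebang line could combine it with a small parser of its own; the shebang parsing below is a simplified sketch for illustration, not Linguist's internal implementation:

require 'linguist'

# Simplified, hypothetical shebang handling: extract the interpreter name
# (following an "env" indirection if present) and look it up.
def language_for_shebang(line)
  return [] unless line.to_s.start_with?("#!")
  tokens = line.sub(/\A#!\s*/, "").split
  interpreter = File.basename(tokens.first.to_s)
  interpreter = tokens[1] if interpreter == "env" && tokens[1]
  Linguist::Language.find_by_interpreter(interpreter)
end

language_for_shebang("#!/usr/bin/env python3")  # => [Linguist::Language["Python"]] per the test table
language_for_shebang("plain text, no shebang")  # => []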

test/test_md5.rb

@@ -1,6 +1,4 @@
-require 'linguist/md5'
-require 'test/unit'
+require_relative "./helper"
 class TestMD5 < Test::Unit::TestCase
   include Linguist

test/test_pedantic.rb

@@ -1,57 +1,29 @@
-require 'test/unit'
+require_relative "./helper"
 class TestPedantic < Test::Unit::TestCase
-  Lib = File.expand_path("../../lib/linguist", __FILE__)
-  def file(name)
-    File.read(File.join(Lib, name))
-  end
+  filename = File.expand_path("../../lib/linguist/languages.yml", __FILE__)
+  LANGUAGES = YAML.load(File.read(filename))
   def test_language_names_are_sorted
-    languages = []
-    file("languages.yml").lines.each do |line|
-      if line =~ /^(\w+):$/
-        languages << $1
-      end
-    end
-    assert_sorted languages
+    assert_sorted LANGUAGES.keys
   end
   def test_extensions_are_sorted
-    extensions = nil
-    file("languages.yml").lines.each do |line|
-      if line =~ /^ extensions:$/
-        extensions = []
-      elsif extensions && line =~ /^ - \.([\w-]+)( *#.*)?$/
-        extensions << $1
-      else
-        assert_sorted extensions[1..-1] if extensions
-        extensions = nil
-      end
+    LANGUAGES.each do |name, language|
+      extensions = language['extensions']
+      assert_sorted extensions[1..-1] if extensions && extensions.size > 1
     end
   end
   def test_filenames_are_sorted
-    filenames = nil
-    file("languages.yml").lines.each do |line|
-      if line =~ /^ filenames:$/
-        filenames = []
-      elsif filenames && line =~ /^ - \.(\w+)$/
-        filenames << $1
-      else
-        assert_sorted filenames if filenames
-        filenames = nil
-      end
+    LANGUAGES.each do |name, language|
+      assert_sorted language['filenames'] if language['filenames']
     end
   end
   def assert_sorted(list)
-    previous = nil
-    list.each do |item|
-      if previous && previous > item
-        flunk "#{previous} should come after #{item}"
-      end
-      previous = item
+    list.each_cons(2) do |previous, item|
+      flunk "#{previous} should come after #{item}" if previous > item
     end
   end
 end
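Because the rewritten TestPedantic loads languages.yml once into LANGUAGES and reuses assert_sorted, further ordering checks become one-liners over the parsed hash. For example, a similar check that per-language interpreter lists are sorted (hypothetical, not part of this diff) could read:

# Hypothetical extra check in the same style as the tests above.
def test_interpreters_are_sorted
  LANGUAGES.each do |name, language|
    interpreters = language['interpreters']
    assert_sorted interpreters if interpreters && interpreters.size > 1
  end
end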

Some files were not shown because too many files have changed in this diff.