Compare commits


296 Commits
v4.8.13 ... dev

Author SHA1 Message Date
Colin Seymour
994bc1f135 Release v5.0.9 (#3597)
* Update all grammars

* Update atom-language-clean grammar to match

* Don't update reason grammar

There seems to be a problem with the 1.3.5 release: the conversion isn't producing a reason entry, so it doesn't match what's in grammar.yml.

* Bump version to 5.0.9

* Update grammars

* Don't update javascript grammar

The current grammar has a known issue and is pending the fix in https://github.com/atom/language-javascript/pull/497
2017-05-03 14:49:26 +01:00
John Gardner
44f03e64c1 Merge heuristics for disambiguating ".t" files (#3587)
References: github/linguist#3546
2017-04-29 11:15:39 +02:00
Jacob Elder
4166f2e89d Clarify support for generated code (#3588)
* Clarify support for generated code

* Incorporate feedback

* TIL about how .gitattributes matching works
2017-04-28 16:20:22 -07:00
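The matching behaviour referenced above can be surprising; a minimal sketch of a generated-code override (paths are illustrative, attribute names as documented by Linguist):

```gitattributes
# A pattern without a slash matches basenames at any depth in the tree.
*.pb.go linguist-generated=true

# A pattern containing a slash is anchored to the directory holding this
# .gitattributes file, so this only matches the top-level dist/ directory.
dist/* linguist-generated=true
```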
John Gardner
1a8f19c6f2 Fix numbering of ordered lists (#3586) 2017-04-28 14:02:38 -07:00
Santiago M. Mola
c0e242358a Fix heuristics after rename (#3556)
* fix Roff detection in heuristics

This affects extensions .l, .ms, .n and .rno.

Groff was renamed to Roff in 673aeb32b9851cc58429c4b598c876292aaf70c7,
but the heuristic was not updated.

* replace FORTRAN with Fortran

It was already renamed in most places in 4fd8fce08574809aa58e9771e2a9da5d135127be;
heuristics.rb was missed, though.

* fix casing of GCC Machine Description
2017-04-26 15:31:36 -07:00
thesave
eb38c8dcf8 [Add Language] Jolie (#3574)
* added support for Jolie language

* added support for Jolie language

* added samples for Jolie
2017-04-26 11:04:25 -07:00
Trent Schafer
f146b4afbd New extension support for PL/SQL language (#2735)
* Add additional PL/SQL file extensions

* Add PL/SQL samples for .ddl and .prc

* Fix sort order of PL/SQL extensions

* Restore vendor/grammars/assembly.

* Restore `pls` as primary PL/SQL extension

* Add tpb to go with tps
2017-04-26 11:03:01 -07:00
Nicolas Garnier
db15d0f5d2 Added MJML as an XML extension (#3582) 2017-04-26 19:24:57 +10:00
Michael Grafnetter
e6d57c771d Add .admx and .adml as extensions for XML (#3580)
* Add .admx and .adml as extensions for XML

* Fixed the order of extensions
2017-04-24 09:55:22 -07:00
Nathan Phillip Brink
eef0335c5f Clarify description of implicit alias. (#3578)
* Clarify description of implicit alias.

I was trying to look up the alias to use for DNS Zone. From the docs
the alias I should use would be dns zone, but in reality it is dns-zone.
This change updates the comments to describe how to derive the
implicit name of a given alias.

* Further clarify description of implicit alias.

@pchaigno requested replacing the Ruby with English.
2017-04-24 09:54:37 -07:00
Christoph Pojer
461c27c066 Revert "Added Jest snapshot test files as generated src (#3572)" (#3579)
This reverts commit f38d6bd124.
2017-04-22 14:20:54 +02:00
Matvei Stefarov
59d67d6743 Treat vstemplate and vsixmanifest as XML (#3517) 2017-04-22 09:25:50 +01:00
Sandy Armstrong
7aeeb82d3d Treat Xamarin .workbook files as markdown (#3500)
* Treat Xamarin .workbook files as markdown

Xamarin Workbook files are interactive coding documents for C#, serialized as
markdown files. They include a YAML front matter header block with some
metadata. Interactive code cells are included as `csharp` fenced code blocks.

An example can be found here:
https://github.com/xamarin/Workbooks/blob/master/csharp/csharp6/csharp6.workbook

Treated as markdown, it would appear like so:
https://gist.github.com/sandyarmstrong/e331dfeaf89cbce89043a1c31faa1297

* Add .workbook sample

Source: https://github.com/xamarin/Workbooks/blob/master/csharp/csharp6/csharp6.workbook
2017-04-20 15:29:17 +01:00
Christophe Coevoet
c98ca20076 Switch the PHP grammar to the upstream repo (#3575)
* Switch the PHP grammar to the upstream repo

* Update all URLs pointing to the PHP grammar bundle
2017-04-20 14:40:44 +01:00
Paul Chaignon
4e0b5f02aa Fix usage line in binary (#3564)
Linguist cannot work on any directory; it needs to be a Git
repository.
2017-04-20 10:18:03 +01:00
Tim Jones
8da7cb805e Add .cginc extension to HLSL language (#3491)
* Add .cginc extension to HLSL language

* Move extension to correct position

* Add representative sample .cginc file
2017-04-20 09:48:48 +01:00
Dorian
e5e81a8560 Add .irbc and Rakefile to matching ruby filenames (#3457) 2017-04-20 09:41:31 +01:00
Tim Jones
dd53fa1585 Add ShaderLab language (#3490)
* Add ShaderLab language

* Update HLSL and ShaderLab grammars to latest version

* Add .shader extension back to GLSL language

* Add sample GLSL .shader files

Note that these are copies of existing GLSL samples, renamed to have
the .shader extension.
2017-04-20 09:04:08 +01:00
Daniel F Moisset
354a8f079a Add support for Python typeshed (.pyi) extension (#3548) 2017-04-20 09:01:41 +01:00
Hank Brekke
f38d6bd124 Added Jest snapshot test files as generated src (#3572) 2017-04-20 08:58:39 +01:00
Santiago M. Mola
e80b92e407 Fix heuristic for Unix Assembly with .ms extension (#3550) 2017-04-06 22:01:42 +10:00
Martin Nowak
fa6ae1116f better heuristic distinction of .d files (#3145)
* fix benchmark

- require json for Hash.to_json

* better heuristic distinction of .d files

- properly recognize DTrace probes
- recognize \ in Makefile paths
- recognize single-line `file.ext : dep.ext` make targets
- recognize D module, import, function, and unittest declarations
- add more representative D samples

D changed from 31.2% to 28.1%
DTrace changed from 33.5% to 32.5%
Makefile changed from 35.3% to 39.4%

See
https://gist.github.com/MartinNowak/fda24fdef64f2dbb05c5a5ceabf22bd3
for the scraper used to get a test corpus.
2017-03-30 18:25:53 +01:00
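For context, rules like these live in lib/linguist/heuristics.rb; below is a simplified, self-contained sketch of the disambiguation signals described above (regexes abbreviated, not the shipped code):

```ruby
# Hypothetical, simplified signals for disambiguating .d files
# (not the shipped regexes; see lib/linguist/heuristics.rb).
D_MODULE  = /^module\s+[\w.]+\s*;/        # D module declaration
DTRACE    = /^(provider\s|#pragma\s+D\s)/ # DTrace probes and pragmas
MAKE_RULE = /^\S+\.\w+\s*:\s+\S+\.\w+/    # "file.ext : dep.ext" make targets

def guess_d_extension(data)
  return "D"        if D_MODULE.match?(data)
  return "DTrace"   if DTRACE.match?(data)
  return "Makefile" if MAKE_RULE.match?(data)
  nil
end

guess_d_extension("module foo.bar;\n")       # => "D"
guess_d_extension("foo.o : foo.c\n\tcc -c")  # => "Makefile"
```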
Yuki Izumi
b7e27a9f58 .pod disambiguation heuristic fix (#3541)
Look for any line starting with "=\w+" rather than requiring the whole
line to match; otherwise we miss e.g. "=head1 HEADING".
2017-03-27 14:10:17 +11:00
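In Ruby terms, the fix amounts to dropping the end-of-line anchor (an illustration, not the exact shipped regex):

```ruby
full_line  = /^=\w+$/  # only matches lines that are exactly "=word"
line_start = /^=\w+/   # also matches "=head1 HEADING"

"=head1 HEADING" =~ full_line   # => nil
"=head1 HEADING" =~ line_start  # => 0
```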
Javier Honduvilla Coto
69ba4c5586 Update the Instrumenter doc ... (#3530)
... with an instance of the given `Instrumenter` instead of the class itself.
2017-03-23 06:11:45 +01:00
Rafer Hazen
c39d7fd6e8 Add data-engineering staff to maintainers list (#3533) 2017-03-22 07:06:58 -06:00
Yuki Izumi
44ed47cea1 Release v5.0.8 (#3535) 2017-03-22 16:41:36 +11:00
Yuki Izumi
de51cb08d2 Add .mdwn for Markdown (#3534) 2017-03-22 16:28:59 +11:00
Ronald Wampler
3dd2d08190 Add .mdown as an extension for Markdown (#3525)
* Add `.mdown` as an extension for Markdown

* Add `.mdown` sample
2017-03-22 16:14:54 +11:00
Yuki Izumi
3b625e1954 Release v5.0.7 (#3524)
* grammar update
* Release v5.0.7
2017-03-20 14:13:04 +11:00
Yuki Izumi
5c6f690b97 Prefer Markdown over GCC Machine Description (#3523)
* Add minimal Markdown sample
* Heuristic defaults to Markdown on no match
* Allow Linguist to detect empty blobs
2017-03-20 13:07:54 +11:00
Michael Rawlings
3bbfc907f3 [Add Language] Marko (#3519)
* add marko

* update marko
2017-03-17 09:46:20 +00:00
Colin Seymour
053b8bca97 GitHub.com now uses gitattributes overrides for syntax highlighting (#3518)
See https://github.com/github/linguist/issues/1792#issuecomment-286379822 for more details.
2017-03-15 22:42:08 -07:00
Yves Siegrist
7fb3db6203 Add .eye files to be used as ruby (#3509)
Files used for [eye](https://github.com/kostya/eye) usually have the file extension `.eye`.
An eye definition file always contains Ruby code.
2017-03-13 17:22:56 -07:00
Liav Turkia
ba09394f85 Added a demos folder and updated regexes (#3512)
I made the regexes case-insensitive. In my repositories, I have both a Docs and a Demos folder, and those wouldn't have been matched before. Now they are.
2017-03-13 17:20:36 -07:00
Paul Chaignon
c59c88f16e Update grammar whitelist (#3510)
* Remove a few hashes for grammars with BSD licenses

There was an error in Licensee v8.8.2, which caused it to not
recognize some BSD licenses. v8.8.3 fixes it.

* Update submodules

Remove 2 grammars from the whitelist because their licenses were
added to a LICENSE file with a proper format (one that Licensee
detects).

MagicPython now supports all scopes that were previously supported
by language-python.
2017-03-13 17:19:06 -07:00
Brandon Black
8a6e74799a Merge branch 'master' of https://github.com/github/linguist 2017-03-13 17:13:48 -07:00
Brandon Black
4268769d2e adjusting travis config 2017-03-13 17:13:24 -07:00
NN
6601864084 Add wixproj as XML (#3511)
* Add wixproj as XML

WiX uses wixproj for projects.

* Add wixproj sample
2017-03-13 17:01:58 -07:00
Paul Chaignon
d57aa37fb7 Grammar for OpenSCAD from Textmate bundle (#3502) 2017-03-13 17:00:27 -07:00
Karl Pettersson
e72347fd98 Add alias for pandoc (#3493) 2017-03-13 16:59:35 -07:00
Brandon Black
1b429ea46b updating rubies 2017-03-10 00:00:19 -08:00
Paul Chaignon
9468ad4947 Fix grammar hashes (#3504)
* Update Licensee hashes for grammar licenses

Licensee v8.8 changed the way licenses are normalized, thus changing hashes for
some grammars

* Update Licensee

Prevent automatic updates to major releases
2017-03-09 23:57:35 -08:00
Nate Whetsell
733ef63193 Add Jison (#3488) 2017-02-22 00:24:50 -08:00
Brandon Black
9ca6a5841e Release v5.0.6 (#3489)
* grammar update

* bumping linguist version

* fixes for grammar updates
2017-02-21 23:13:15 -08:00
Brandon Black
41ace5fba0 using fork for php.tmbundle since updates are broken 2017-02-21 17:13:55 -08:00
Alex Arslan
cc4295b3b3 Update URL for Julia TextMate repo (#3487) 2017-02-21 17:05:59 -08:00
doug tangren
1e4ce80fd9 add support for detecting bazel WORKSPACE files (#3459)
* add support for detecting bazel WORKSPACE files

* Update languages.yml
2017-02-21 16:48:44 -08:00
Brandon Black
74a71fd90d fixing merge conflict 2017-02-21 16:28:34 -08:00
TingPing
9b08318456 Add Meson language (#3463) 2017-02-21 16:24:58 -08:00
Tim Jones
fa5b6b03dc Add grammar for HLSL (High Level Shading Language) (#3469) 2017-02-21 16:05:25 -08:00
Garen Torikian
cb59296fe0 Like ^docs?, sometimes one sample is enough (#3485) 2017-02-20 10:29:30 -08:00
Eloy Durán
f1be771611 Disambiguate TypeScript with tsx extension. (#3464)
Using the technique as discussed in #2761.
2017-02-20 10:17:18 +00:00
Alex Louden
b66fcb2529 Improve Terraform (.tf) / HCL (.hcl) syntax highlighting (#3392)
* Add Terraform grammar, and change .tf and .hcl files from using Ruby to Terraform sublime syntax

* Expand Terraform sample to demonstrate more language features

* Revert terraform sample change

* Add terraform sample - Dokku AWS deploy

* Updated to latest Terraform

* Update terraform string interpolation

* Update terraform to latest
2017-02-20 10:09:59 +00:00
Brandon Black
f7fe1fee66 Release v5.0.5 -- part Deux (#3479)
* bumping to v5.0.5

* relaxing rugged version requirement
2017-02-15 21:29:04 -08:00
Brandon Black
94367cc460 Update LICENSE 2017-02-15 14:11:37 -08:00
Phineas
72bec1fddc Update LICENSE Copyright Date to 2017 (#3476) 2017-02-15 14:11:13 -08:00
Brandon Black
4e2eba4ef8 Revert "Release v5.0.5" (#3477) 2017-02-15 12:48:45 -08:00
Colin Seymour
10457b6639 Release v5.0.5 (#3473)
Release v5.0.5

* Update submodules

* Update grammars

* Bump version to 5.0.5

* Relax dependency on rugged

It's probably not wise to depend on a beta version just yet.

* revert php.tmbundle grammar update

One of the changes in 3ed4837b43...010cc1c22c leads to breakage in snippet highlighting on github.com
2017-02-15 11:12:53 +00:00
Paul Chaignon
d58cbc68a6 Support for the P4 language
P4 is a language for describing the processing pipeline of network devices
2017-02-15 06:53:46 +01:00
Colin Seymour
01de40faaa Return early in Classifier.classify if no languages supplied (#3471)
* Return early if no languages supplied

There's no need to tokenise the data when attempting to classify without a limited language scope, as nothing will be scored anyway.

* Add test for empty languages array
2017-02-13 18:22:54 +00:00
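A minimal sketch of that early return (method shape and stand-in tokenizer are assumptions, not the shipped code):

```ruby
# Hypothetical sketch: skip the tokenizing work when there is no
# language scope to score against.
def classify(data, languages)
  return [] if languages.nil? || languages.empty?
  tokens = data.split(/\s+/)                      # stand-in for the real tokenizer
  languages.map { |lang| [lang, tokens.length] }  # stand-in for real scoring
end

classify("puts 'hi'", [])  # => [] without ever touching the data
```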
Paul Chaignon
62d285fce6 Fix head commit for TXL grammar (#3470) 2017-02-13 14:35:38 +01:00
Stefan Stölzle
0056095e8c Add .lkml to LookML (#3454)
* Add .lkml to LookML

* Limit .lkml to .view.lkml and .model.lkml

* Add lkml samples

* Fix extension order
2017-02-03 11:50:30 +01:00
Lars Brinkhoff
d6dc3a3991 Accommodate Markdown lines which begin with '>'. (#3452) 2017-02-02 11:58:52 -08:00
Greg Zimmerman
b524461b7c Add PowerShell color. Matches default console color for PowerShell. (#3448) 2017-02-02 11:13:01 -08:00
fix-fix
76d41697aa Use official HTML primary color (#3447)
Use the primary color of the HTML5 logo as defined on the Logo FAQ page
2017-02-02 11:09:55 -08:00
John Gardner
32147b629e Register "Emakefile" as an Erlang filename (#3443) 2017-02-02 11:09:07 -08:00
John Gardner
e7b5e25bf8 Add support for regular expression data (#3441) 2017-02-02 11:08:20 -08:00
Brandon Black
d761658f8b Release v5.0.4 (#3445)
* bumping to v5.0.3

* updating rugged
2017-01-31 15:08:52 -08:00
Brandon Black
3719214aba fixing null reference in yarn.lock check (#3444) 2017-01-31 14:45:22 -08:00
Paul Chaignon
47b109be36 Improve heuristic for Modula-2 (#3434)
Module names can contain dots
2017-01-24 15:10:46 -08:00
Javier Honduvilla Coto
1ec4db97c2 Be able to configure the number of threads for git submodules update (#3438)
* make the number of submodule update threads configurable

* change thread number to be a script argument
2017-01-24 09:55:31 -08:00
Brandon Black
9fe5fe0de2 v5.0.3 Release (#3436)
* bumping to v5.0.3

* updating grammars
2017-01-23 20:09:26 -08:00
sunderls
b36ea7ac9d Add yarn (#3432)
* add yarn.lock

* fix comment

* remove yarn test

* add test

* fix test

* try fix again

* try 3rd time

* check filename and firstline for yarn lockfile
2017-01-23 10:58:53 -08:00
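A sketch of what a filename-plus-first-line check might look like (header text taken from yarn's generated lockfiles; method shape is an assumption):

```ruby
# Hypothetical check in the spirit of lib/linguist/generated.rb:
# match the filename first, then confirm via the first line.
def generated_yarn_lock?(name, lines)
  return false unless File.basename(name) == "yarn.lock"
  return false if lines.empty?
  lines.first.include?("THIS IS AN AUTOGENERATED FILE")
end

generated_yarn_lock?("yarn.lock",
  ["# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY."]) # => true
```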
Yuki Izumi
625b06c30d Use textmate/diff.tmbundle again (#3429)
Upstream textmate/diff.tmbundle#6 was merged.
2017-01-17 17:02:10 +11:00
Brandon Black
28bce533b2 Release v5.0.2 (#3427)
* updated grammars

* bumping version

* adding .gem files to gitignore
2017-01-11 16:08:31 -08:00
John Gardner
93ec1922cb Swap grammar used for CSS highlighting (#3426)
* Swap grammar used for CSS highlighting

* Whitelist license of Atom's CSS grammar

* Explicitly declare grammar as MIT-licensed

Source: https://github.com/atom/language-css/blob/5d4af/package.json#L14
2017-01-11 16:16:25 +11:00
Yuki Izumi
5d09fb67dd Allow for split(",") returning nil (#3424) 2017-01-10 11:44:24 +11:00
Arfon Smith
93dcb61742 Removing references to @arfon (#3423) 2017-01-09 11:54:55 -08:00
Brandon Black
3a03594685 bumping to v5.0.1 (#3421) 2017-01-08 23:30:56 -08:00
Brandon Black
5ce2c254f9 purging vestigial search term references 2017-01-08 23:08:16 -08:00
Brandon Black
d7814c4899 fixing incompatibility with latest rugged 2017-01-08 22:59:00 -08:00
Brandon Black
50c08bf29e updating grammars 2017-01-08 22:29:01 -08:00
Yuki Izumi
34928baee6 Use https://github.com/textmate/diff.tmbundle/pull/6 (#3420) 2017-01-08 21:53:14 -08:00
USAMI Kenta
27bb41aa4d Add Cask file as Emacs Lisp (#3416)
* Add Cask file as Emacs Lisp

* Replace Cask file (original is zonuexe/composer.el)
2017-01-04 11:26:20 -08:00
Brandon Black
1415f4b52d fixing grammar file pedantry 2017-01-03 17:23:56 -08:00
Arie Kurniawan
ae8ffcad22 change http to https
Fix a small flaw: one of the URLs used the http protocol even though
the site already supports the more secure https protocol.
2017-01-03 17:19:58 -08:00
Brandon Black
f43633bf10 fixing license for atom-language-rust 2017-01-03 17:02:25 -08:00
Brandon Black
a604de9846 replacing atom grammar due to ST2 compatibility change 2017-01-03 16:46:02 -08:00
Brandon Black
3e224e0039 updating grammars 2017-01-03 16:33:46 -08:00
Christopher Gilbert
15b04f86c3 Amended license file for SublimeEthereum language grammar 2017-01-03 16:13:27 -08:00
Christopher Gilbert
42af436c20 Added grammar for Solidity programming language 2017-01-03 16:13:27 -08:00
Christopher Gilbert
2b08c66f0b Added grammar for Solidity programming language 2017-01-03 16:13:27 -08:00
Zach Brock
f98ab593fb Detect Javascript files generated by Protocol Buffers. 2017-01-03 16:07:26 -08:00
Brandon Black
f951ec07de bin help cleanup 2017-01-03 15:52:22 -08:00
Vadim Markovtsev
e9ac71590f Add --json option to bin/linguist 2017-01-03 14:56:12 -08:00
Al Thomas
210cd19876 Add Genie programming language (#3396)
* Add Genie programming language

Genie was introduced in 2008 as part of the GNOME project:
https://wiki.gnome.org/Projects/Genie

It is a programming language that uses the Vala compiler to
produce native binaries. It has good bindings to C libraries,
especially those that are part of the GObject world, such as
Gtk+3 and GStreamer.

* Change color for Genie so tests pass
2017-01-03 14:15:17 -08:00
Mate Lorincz
f473c555ac Update vendor.yml (#3382)
Exclude Realm and RealmSwift frameworks from language statistics.
2017-01-03 14:10:49 -08:00
Nate Whetsell
48e4394d87 Add Jison-generated JavaScript to generated files (#3393)
* Fix typos

* Add Jison-generated JavaScript to generated files
2017-01-03 14:08:29 -08:00
meganemura
e1ce88920d Fix add-grammar and convert-grammars (#3354)
* Skip removed grammar submodule

* Clean up old grammar from grammars and grammars.yml

* Clean up unused grammar license

Run `script/licensed`.
This change was missing from 12f9295 of #3350.

* Clean up license files when we replace grammar

Update license files by running `script/licensed`.
Since we replace a grammar, the old grammar's license must be removed
and the new grammar's license added.
2017-01-03 13:57:46 -08:00
Paul Chaignon
675cee1d72 Improve the .pl Prolog heuristic rule (#3409)
`:-` can appear directly at the start of a line
2017-01-03 13:54:47 -08:00
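Illustratively (not the shipped rule), the anchor just needs to allow zero characters before the operator:

```ruby
rule = /^\s*:-/                                # ":-" may start the line outright
":- use_module(library(lists)).".match?(rule)  # => true
"    :- dynamic foo/1.".match?(rule)           # => true
```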
yutannihilation
1c4baf6dc2 ignore roxygen2-generated files (#3373) 2017-01-03 13:31:04 -08:00
Samantha McVey
8f2820e9cc Add XCompose language and highlighter (#3402)
* Add XCompose language and highlighter

* XCompose fix some errors in the Travis build

* Remove xmodmap files for XCompose

Most xmodmap files aren't XCompose, and there aren't enough xmodmap files
that are XCompose to be worth adding to heuristics

* Remove some extensions/filenames from XCompose

* Rename and move sample to the correct folder and filename

To match the entry we added in languages.yml

* Use generated language id
2017-01-03 13:29:00 -08:00
Danila Malyutin
04c268e535 Add mysql extension for sql scripts (#3413) 2017-01-03 11:47:19 -08:00
Paul Chaignon
ec749b3f8d Remove the Hy grammar (#3411)
The grammar repository was recently deleted
2017-01-03 11:38:44 -08:00
Samantha McVey
08b63e7033 Change repository for Perl 6 grammar 2016-12-24 00:58:21 +01:00
Darin Morrison
7867b946b9 Add support for Reason (#3336) 2016-12-22 17:03:18 -08:00
Brandon Black
a4d12cc8e4 Release v5.0.0 (#3388)
* bumping version to v5.0.0

* updating license information

* reverting accidental changes to language ids cc @twp

* Updating grammars
2016-12-22 16:55:22 -08:00
Brandon Black
a1165b74b1 reverting accidental changes to language ids cc @twp 2016-12-15 13:56:09 -08:00
Brandon Black
0fa1fa5581 fixing groff reference 2016-12-14 10:19:39 -08:00
Arfon Smith
d8b91bd5c4 The grand language renaming bonanza (#3278)
* Removing FORTRAN samples because of OS X case-insensitive filesystems :-\

* Adding Fortran samples back

* FORTRAN -> Fortran

* Groff -> Roff

* GAS -> Unix Assembly

* Cucumber -> Gherkin

* Nimrod -> Nim

* Ragel in Ruby Host -> Ragel

* Jade -> Pug

* VimL -> Vim script
2016-12-13 13:39:27 -08:00
Paul Chaignon
9b941a34f0 Use filenames as a definitive answer (#2006)
* Separate find_by_extension and find_by_filename
find_by_extension now takes a path as argument and not only the file extension.
Currently only find_by_extension is used as a strategy.

* Add find_by_filename as first strategy
2016-12-12 12:34:33 -08:00
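Conceptually, detection walks an ordered list of strategies and stops as soon as one yields a single definitive answer; a sketch under assumed names (the real list, order, and matching logic live in Linguist's source):

```ruby
# Illustrative only: strategy names, order, and matching logic are assumptions.
filename_strategy  = ->(blob, _) { blob[:name] == "Rakefile" ? ["Ruby"] : [] }
extension_strategy = ->(blob, c) { blob[:name].end_with?(".rb") ? ["Ruby"] : c }

def detect(blob, strategies)
  candidates = []
  strategies.each do |strategy|
    candidates = strategy.call(blob, candidates)
    return candidates.first if candidates.size == 1  # definitive answer: stop
  end
  nil
end

detect({ name: "Rakefile" }, [filename_strategy, extension_strategy]) # => "Ruby"
```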
Paul Chaignon
9d8392dab8 Remove deprecated code (#3359)
* Remove deprecated find_by_shebang

* Remove deprecated ace_modes function

* Remove deprecated primary_extension function

Gists don't have a language dropdown anymore

* Remove deprecated Linguist::Language.detect function

* Remove deprecated search_term field
2016-12-12 12:24:19 -08:00
Brandon Black
2c78dd2c66 Bumping to v4.8.18 (#3370)
* make tests great again 

* version bump

* removing empty line in gemspec
2016-12-07 11:39:49 -08:00
Brandon Black
3988f3e7a7 Merge branch 'rascal-linguist' of git://github.com/ahmadsalim/linguist into ahmadsalim-rascal-linguist 2016-12-07 09:28:40 -08:00
USAMI Kenta
d9a4e831b4 Add .php_cs and .php_cs.dist (#3367)
* Add .php_cs and .php_cs.dist

* Move files to filenames subdir
2016-12-07 09:20:40 -08:00
John Gardner
45c27f26a2 Add support for the GN configuration language (#3368)
* Add samples and definition for GN build files

* Add grammar to provide GN syntax highlighting

* Fix failing tests

* Add Python extensions for GYP includes and .gclient configs
2016-12-07 09:20:23 -08:00
Ahmad Salim Al-Sibahi
0fbc29bf68 Updated language id 2016-12-07 08:53:40 +01:00
Ahmad Salim Al-Sibahi
5569d2056d Removed old submodules 2016-12-07 08:47:40 +01:00
Ahmad Salim Al-Sibahi
be262d0b4f Merge remote-tracking branch 'refs/remotes/github/master'
Conflicts:
	.gitmodules
	grammars.yml
2016-12-07 08:36:48 +01:00
Brandon Black
33ce2d7264 Merge branch 'master' of https://github.com/github/linguist into meganemura-replace-haml 2016-12-06 22:13:40 -08:00
Paul Chaignon
c486f56204 Mark .indent.pro files as vendored (#3361) 2016-12-06 21:59:28 -08:00
Jim Deville
9f3b7d0ba5 Allow golang as an alias for Go code fences (#3221) 2016-12-06 21:55:25 -08:00
Paul Chaignon
79f20e8057 Heuristic rule for TeX .cls files (#3360) 2016-12-06 21:50:33 -08:00
Yamagishi Kazutoshi
cd30c7613c Detect .babelrc (#3358)
`.babelrc` is Babel's configuration file, in JSON5 format.
2016-12-06 21:43:33 -08:00
John Gardner
5aa53c0711 Swap grammar used for Ninja highlighting (#3353) 2016-12-06 21:32:11 -08:00
Ahmad Salim Al-Sibahi
c17cdca896 Updated to the latest available grammar 2016-12-06 11:22:09 +01:00
Christoph Päper
ecdae83364 Add support for font-specific formats (#3142)
* markdown and font-specific updates languages.yml

- Markdown: ← extensions: .md.txt
- Text: ← extensions: .text
- Text: ← filenames: FONTLOG http://scripts.sil.org/cms/scripts/page.php?item_id=OFL-FAQ_web#43cecb44
- OpenType: ← extensions: .fea https://www.adobe.com/devnet/opentype/afdko/topic_feature_file_syntax.html
- Spline Font: ← extensions: .sfd http://fontforge.github.io/en-US/documentation/developers/sfdformat/

* Update languages.yml

`type: data` for SFD

* Update languages.yml

OpenType feature ← type: markup

* Update languages.yml

alphabetic order

* Update languages.yml

incorporated suggestions:

- “OpenType Font Feature” → “Opentype feature” (no “file” at the end, because that’s left out almost everywhere else, too)
- `tm_scope` according to https://github.com/Alhadis/language-fontforge

* remove non-font related additions

- `.md.txt` Markdown
- `.text` Plain Text

* Update languages.yml

remove comment

* changed names as requested

* Merge remote-tracking branch 'github/master' into patch-2

# Conflicts:
#	lib/linguist/languages.yml

* quote marks

* Revert "Merge remote-tracking branch 'github/master' into patch-2"

This reverts commit 18e4256b828c4186fec806319cbf8b76f0d2c79b.

* Update language IDs

* Add missing submodule to grammars list
2016-12-05 18:50:05 -08:00
meganemura
31aafa2c78 Replace ruby-haml.tmbundle with language-haml
./script/add-grammar --replace ruby-haml.tmbundle https://github.com/ezekg/language-haml
2016-12-01 11:38:12 +09:00
Ahmad Salim Al-Sibahi
8a911b8ff3 Fixed ordering correctly 2016-11-30 00:15:17 +01:00
Ahmad Salim Al-Sibahi
9233f1d17f Fixed color, order and ace mode 2016-11-30 00:03:55 +01:00
Ahmad Salim Al-Sibahi
77eb36a982 Added Rascal Language Id 2016-11-29 23:42:02 +01:00
Ahmad Salim Al-Sibahi
4e6e58a099 Added Example Rascal files 2016-11-29 23:37:34 +01:00
Ahmad Salim Al-Sibahi
c87976330f Added Rascal Grammar Link 2016-11-29 23:15:16 +01:00
Simen Bekkhus
0e9109c3fc Add Nunjucks highlighting (#3341) 2016-11-29 10:14:25 -08:00
meganemura
12f9295dd7 Improve grammar scripts (#3350)
* Remove trailing spaces

* Setup Bundler in some scripts

* Update grammar index

* Make the prune-grammars script callable from the script directory

* Prune unused xquery grammar repo

source.xq from language-jsoniq is the actual tm_scope for XQuery.

* Remove xquery submodule

git submodule deinit vendor/grammars/xquery/
git rm vendor/grammars/xquery/

* Fix invocation of script/list-grammars

This fixes #3339.

* Make the add-grammar script callable from the script directory

* Generate samples.json before running list-grammars

list-grammars requires linguist.
2016-11-29 10:13:33 -08:00
Santi Aguilera
581723748b Added Dangerfile for ruby (#3333)
* Added Dangerfile for ruby

* rm ant.tmbundle from whitelist because it has a license

Travis output says ant.tmbundle has a license, so I'm removing it from the whitelist (projects without a license)

* Added sample

* New line at EOF

* Fix

* Remove bad file
2016-11-29 08:01:35 -08:00
Paul Chaignon
0980e304b1 Generate language_id (#3284)
* Generate language_id from language names

The language_id is generated from the SHA256 hash of the language's name

* Test the validity of language ids

All languages should have a positive 32-bit integer as an id

* Update languages.yml header in set-language-ids
2016-11-29 07:50:44 -08:00
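One plausible reading of that derivation (the exact normalization and truncation are assumptions; `script/set-language-ids` holds the real logic):

```ruby
require "digest/sha2"

# Hypothetical sketch: derive a stable, positive 32-bit id from a
# language name via its SHA256 digest.
def language_id(name)
  Digest::SHA256.hexdigest(name).to_i(16) % (2**31 - 1)
end

language_id("Ruby") # => a stable positive integer below 2**31
```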
Kyle Smith
d46a529b6a Add support for Thrift-generated PHP code. (#3329) 2016-11-29 07:49:41 -08:00
Paul Chaignon
1d2ec4dbc3 Fix error with filenames ending with a dot (#3349)
Passing a negative second argument to split instructs it to
preserve null fields in the returned array
2016-11-29 07:42:50 -08:00
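The Ruby behaviour in question:

```ruby
"name.".split(".")      # => ["name"]      (trailing null field dropped)
"name.".split(".", -1)  # => ["name", ""]  (negative limit preserves it)
```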
Brandon Black
829eea0139 Merge pull request #3337 from pchaigno/update-grammar-whitelist
Remove Ant from grammar whitelist
2016-11-21 17:13:29 -08:00
Paul Chaignon
78b2853d70 License of Ant grammar is correctly detected
The latest version of Licensee can recognize
underlined license headers in READMEs
2016-11-18 23:47:55 +01:00
Ahmad Salim Al-Sibahi
202f3c08cd Started adding support for Rascal. 2016-11-14 14:29:30 +01:00
Brandon Black
b958779e3d Merge pull request #3325 from danilaml/py3-ext
Add py3 to Python's extensions list
2016-11-13 17:57:56 -08:00
Danila Malyutin
00dc775daf Update, according to comments 2016-11-12 22:01:31 +03:00
Danila Malyutin
009a4e67b6 add py3 extension to Python 2016-11-12 06:11:39 +03:00
Arfon Smith
faaa4470af Merge pull request #3297 from github/bye-bye-github
Not going to be staff for much longer.
2016-11-03 20:32:28 -04:00
Arfon Smith
2a320cb988 Adding new GitHub staff 2016-11-03 20:22:21 -04:00
Arfon Smith
74931d1bd5 Merge branch 'master' into bye-bye-github 2016-11-03 20:09:02 -04:00
Arfon Smith
3ca93a84b9 Merge pull request #3315 from github/cut-release-v4.8.17
v4.8.17 release PR
2016-11-02 15:20:00 -04:00
Arfon Smith
aa27f18ea6 Bumping version to v4.8.17 2016-11-02 13:33:16 -04:00
Arfon Smith
d3e2ea3f71 Grammar updates 2016-11-02 13:25:50 -04:00
Arfon Smith
53aa1209ab Merge pull request #3314 from github/coq-kiq
Replace GPL-licensed Coq samples
2016-11-02 13:13:44 -04:00
Arfon Smith
b2a486fed2 Merge pull request #3312 from sanssecours/ebnf
Add Support for EBNF
2016-11-02 13:07:34 -04:00
Alhadis
4f1e5c34b1 Add permissive-licensed Coq samples
BSD-2-Clause: https://github.com/jscert/jscert

  * JSCorrectness.v
  * JSInterpreterExtraction.v
  * JSNumber.v
  * JSPrettyInterm.v

MIT/Expat: https://github.com/clarus/coq-atm

  * Computation.v
  * Main.v
  * Spec.v
2016-11-03 02:50:54 +11:00
Alhadis
85c9833081 Delete GPL2.1-licensed Coq samples
These files are modified variants of the ones included in Coq's standard
distribution. The original materials feature the GPL 2.1 license in each
file's header, which was suspiciously removed from Linguist's samples.

    https://github.com/coq/coq/tree/trunk/theories

Basics.v is a different file from the one in Coq's distro, but its origin
is unclear, so it is being removed to err on the side of caution. The remaining
samples are from the MIT-licensed online course Software Foundations:

    https://www.cis.upenn.edu/~bcpierce/sf/current/

References: github/linguist#3313
2016-11-03 02:32:23 +11:00
René Schwaiger
33899b9d6b Add support for EBNF
Extended Backus–Naur form ([EBNF][]) is a metalanguage used to specify
language grammars.

[EBNF]: https://en.wikipedia.org/wiki/Extended_Backus–Naur_form
2016-11-02 13:50:30 +01:00
Arfon Smith
417239004a Merge pull request #3311 from sanssecours/abnf
Add Support for ABNF
2016-11-02 08:33:47 -04:00
René Schwaiger
6a1423d28f Add support for ABNF
Augmented Backus–Naur form ([ABNF][]) is a metalanguage used to specify
language grammars.

[ABNF]: https://en.wikipedia.org/wiki/Augmented_Backus–Naur_form
2016-11-02 12:52:16 +01:00
Arfon Smith
96a23ce388 Merge pull request #3092 from pchaigno/python-console
Support for Python console
2016-11-01 07:18:18 -04:00
Paul Chaignon
e8d7eed3aa Support for Python console 2016-11-01 11:21:00 +01:00
Arfon Smith
9d419c4ab9 Merge pull request #3245 from jecisc/smalltalk_examples
Add smalltalk samples
2016-10-30 12:31:37 -04:00
Arfon Smith
4eefc1f58e Merge pull request #3310 from github/welcoming-Alhadis
Adding @Alhadis to the maintainers list.
2016-10-30 08:41:59 -04:00
Arfon Smith
0b94b9cda7 Update CONTRIBUTING.md 2016-10-30 08:32:33 -04:00
Arfon Smith
c736038d94 Merge pull request #3308 from github/help-for-highlighting
Adding some language about grammars to the README
2016-10-29 09:41:08 -04:00
Arfon Smith
ec562138f8 More words 2016-10-29 09:40:33 -04:00
Arfon Smith
50013e8dd7 Words 2016-10-29 09:39:57 -04:00
Arfon Smith
416c5d1185 Update README.md 2016-10-29 08:58:57 -04:00
Arfon Smith
8869912d31 Merge pull request #3307 from github/3192-temp
3192  part deux
2016-10-29 08:40:10 -04:00
Arfon Smith
43fa563b77 Adding script/list-grammars to script/add-grammar 2016-10-29 08:33:58 -04:00
Arfon Smith
41c6aee8c3 Small readme edits 2016-10-29 08:30:09 -04:00
Alhadis
8cf575c37d Update grammars list 2016-10-29 13:05:59 +11:00
Alhadis
4e20928e04 Merge branch 'master' into grammar-list 2016-10-29 12:31:04 +11:00
Arfon Smith
3e37bd2680 Merge pull request #2843 from github/go-vendor
Go vendor
2016-10-28 00:24:06 -04:00
Arfon Smith
a29f5b2d46 Adding Go-specific vendor paths 2016-10-27 13:59:09 -04:00
Arfon Smith
4efc6f8c95 Merge branch 'master' into go-vendor 2016-10-26 18:34:02 -04:00
Arfon Smith
359699c454 Not going to be staff for much longer. 2016-10-26 13:55:19 -04:00
Arfon Smith
346aa99fcf Merge pull request #3296 from github/cut-release-v4.8.16
Staging PR for Linguist v4.8.16 release
2016-10-26 13:43:22 -04:00
Arfon Smith
d147778677 Bumping to v4.8.16 2016-10-25 20:30:46 -04:00
Arfon Smith
e520209e49 Grammar update 2016-10-25 20:25:07 -04:00
Arfon Smith
338cc16239 Merge pull request #3264 from Alhadis/rexx-interpreters
Add REXX interpreters
2016-10-25 20:15:05 -04:00
Arfon Smith
67ea35094b Merge pull request #3295 from github/3233-local
UPDATE: Add support for MQL4 and MQL5 languages
2016-10-25 20:10:56 -04:00
Arfon Smith
6f0393fcbd Merge branch 'master' into 3233-local 2016-10-25 20:00:48 -04:00
Arfon Smith
2923d50d7e Merge pull request #3294 from larsbrinkhoff/boot
Makefile.boot misclassified as Clojure.
2016-10-25 18:45:39 -04:00
Lars Brinkhoff
4e26f609ef Makefile.boot misclassified as Clojure. 2016-10-25 22:21:31 +02:00
Arfon Smith
e86d6e8dd2 Merge pull request #3193 from Alhadis/grammar-scripts
Add script to add or replace grammars
2016-10-25 07:36:01 -04:00
Arfon Smith
5fa02ad1fb Merge pull request #3182 from adius/patch-1
Exclude "dist" directory from language statistics
2016-10-22 18:39:51 -04:00
Arfon Smith
5a06240f69 Merge pull request #3289 from pchaigno/change-grammar-actionscript
Update grammar for ActionScript
2016-10-22 18:39:22 -04:00
Arfon Smith
d6e0f74c80 Merge pull request #3279 from Alhadis/mirror-modes
Add CodeMirror modes for GPG keys and IRC logs
2016-10-22 18:38:38 -04:00
Paul Chaignon
a5c08bb203 Update grammar for ActionScript 2016-10-22 21:35:28 +02:00
Arfon Smith
c6dc29abb1 Merge pull request #3242 from JoshCheek/julia-interpreters
Set interpreters for Julia
2016-10-22 12:24:44 -05:00
Arfon Smith
ffd984bb7e Merge pull request #3277 from Alhadis/pic
Add support for the Pic language
2016-10-17 12:03:43 -05:00
Alhadis
dc5473559b Merge branch 'master' into grammar-scripts 2016-10-17 17:38:19 +11:00
Lars Brinkhoff
8e9c224952 Remove color attribute from all languages with a group attribute. 2016-10-13 06:54:21 +02:00
Lars Brinkhoff
d43f111723 Remove redundant group attribute for ColdFusion. 2016-10-13 06:54:21 +02:00
Lars Brinkhoff
de9ff713a4 Test that grouped languages have no color. 2016-10-13 06:54:21 +02:00
Alhadis
98783560ec Add CodeMirror modes for GPG keys and IRC logs 2016-10-12 14:20:58 +11:00
Alhadis
8f31fbbd55 Bump language ID 2016-10-12 11:19:22 +11:00
Alhadis
e4cdbd2b2b Merge branch 'master' into pic 2016-10-12 11:18:28 +11:00
Arfon Smith
ba52e48ceb Merge pull request #3175 from Alhadis/cson
Classify CSON as data
2016-10-11 17:16:08 -07:00
Alhadis
a44ebe493b Bump language ID 2016-10-12 11:10:44 +11:00
Alhadis
eb0e75e11e Add support for the Pic language 2016-10-12 02:46:17 +11:00
Arfon Smith
22c2cf4967 Merge pull request #3268 from Alhadis/sublime
Add new entry for Sublime Text config files
2016-10-10 16:36:53 -07:00
Alhadis
39e3688fb8 Bump language ID for Sublime Text config files 2016-10-11 09:59:17 +11:00
Alhadis
6b83e5fb7b Merge branch 'master' into sublime 2016-10-11 09:57:05 +11:00
Arfon Smith
dd2e5ffe07 Merge pull request #3258 from scottmangiapane/master
Added support for TI Code
2016-10-10 08:55:23 -07:00
Andrew Case
f6b6c4e165 Merge branch 'master' into add-mql4-mql5-current 2016-10-10 17:41:29 +03:00
Andrew Case
608ed60b5c Merge branch 'master' of https://github.com/github/linguist 2016-10-10 17:31:02 +03:00
Arfon Smith
2ce2945058 Merge pull request #3272 from pchaigno/doc-set-language-id
Improve documentation on the language_id
2016-10-10 06:47:01 -07:00
Paul Chaignon
c8d376754e Improve documentation on the language_id 2016-10-09 17:55:50 +02:00
Arfon Smith
ecaef91fa1 Merge pull request #3266 from Alhadis/make
Add ".make" as a Makefile file extension
2016-10-09 07:58:17 -07:00
Alhadis
d265b78e7e Copy Sublime configs from submodules 2016-10-07 11:38:11 +11:00
Alhadis
5a5bf7d5e5 Add new entry for Sublime Text config files 2016-10-07 10:56:15 +11:00
Alhadis
e46781b903 Merge upstream changes into topic branch 2016-10-07 10:25:03 +11:00
Arfon Smith
9543a8c8e9 Merge pull request #3259 from Alhadis/colour-removal
Guard against unused colour definitions
2016-10-06 09:52:29 -07:00
Alhadis
6ac1ac9232 Resolve conflicts from upstream changes 2016-10-06 17:00:28 +11:00
Alhadis
1bbb919fef Reclassify Sublime configuration files as JSON
See:
- https://github.com/github/linguist/issues/2662#issuecomment-251865073
- https://github.com/github/linguist/pull/3267
2016-10-06 16:42:01 +11:00
Scott Mangiapane
71dfed0e45 Restored .8xp.txt extension 2016-10-05 12:31:32 -04:00
Scott Mangiapane
a2db058ce4 TI Code -> TI Program 2016-10-05 12:24:50 -04:00
Alhadis
12695fee2f Add ".make" as a Makefile file extension 2016-10-05 21:36:22 +11:00
Alhadis
4a775dca37 Add REXX interpreters 2016-10-05 17:45:38 +11:00
Alhadis
d7c689fd6b Merge branch 'master' into grammar-scripts 2016-10-05 16:34:24 +11:00
Alhadis
20b8188384 Add test to guard against unused colours 2016-10-05 16:17:00 +11:00
Scott Mangiapane
26310d9515 TI PRGM -> TI Code 2016-10-04 23:02:11 -04:00
Alhadis
e38cc75da5 Update ASN.1 grammar
This stops the ASN.1 submodule from being flagged as modified due to its
.DS_Store file being wiped locally by an automated process.

References: ajLangley12/language-asn1#1
2016-10-05 13:08:36 +11:00
Scott Mangiapane
8d55fc1bd5 Reordered extensions for TI PRGM 2016-10-04 14:38:28 -04:00
Alhadis
7e63399196 Delete colour property for ASN.1 language
This is classified on GitHub as "data", so the colour it's assigned only
wastes valuable "real estate" when checking colour proximity.

References: github/linguist#3113
2016-10-04 17:23:45 +11:00
Scott Mangiapane
520e5a5cfe Alphabetized extensions for TI PRGM (again) 2016-10-03 22:13:43 -04:00
Scott Mangiapane
5d85692c24 Updated TI PRGM languages.yml entry
Alphabetized extensions and added a language_id.
2016-10-03 22:07:23 -04:00
Scott Mangiapane
676861fff3 Removed binary files from the TI PRGM sample 2016-10-03 21:59:46 -04:00
Scott Mangiapane
6589bd9dc7 Added tm_scope to TI PRGM 2016-10-03 21:47:52 -04:00
Scott Mangiapane
e32a4f13ef Updated languages.yml 2016-10-03 21:37:56 -04:00
Scott Mangiapane
4e4d851f71 Alphabetized "TI PRGM" 2016-10-03 21:15:36 -04:00
Scott Mangiapane
a3628f86da Added example files for TI-83+/84 programs 2016-10-03 21:12:30 -04:00
Scott Mangiapane
fe70965906 Added entry for TI-83+/84 programs and apps 2016-10-03 21:11:28 -04:00
Lars Brinkhoff
c863435c84 Add '</' to Markdown heuristic. (#3255) 2016-10-03 19:22:34 +02:00
Paul Chaignon
eeec48198a Update submodules 2016-10-02 11:16:25 +02:00
Paul Chaignon
82167063da Tests to ensure the whitelists are up-to-date 2016-10-02 11:16:25 +02:00
Paul Chaignon
3ae89b48ba Improve Mathematica's heuristic rule
Use the closing of a Mathematica comment instead of the opening
Add a unit test to check that the test file is no longer detected as Mathematica
2016-10-01 08:46:31 +02:00
Paul Chaignon
cd9401c424 Enable testing absence of heuristic result 2016-10-01 08:46:31 +02:00
Paul Chaignon
e7e8a7d835 Tests for .m heuristic rules 2016-10-01 08:46:31 +02:00
Arfon Smith
7654032d2e Merge pull request #3254 from alexruperez/master
Added BuddyBuildSDK.framework to lib/linguist/vendor.yml
2016-09-30 19:29:23 -07:00
alexruperez
05b536fc61 Added BuddyBuildSDK.framework to lib/linguist/vendor.yml 2016-09-28 11:24:17 +02:00
Paul Chaignon
ebe85788ab Rely solely on Licensee to recognize licenses
Remove our own license classification code
Add hashes for any project which does not have a standard license body
Add projects for which a license was not found to the whitelist

Requires Licensee v8.6.0 to correctly recognize TextMate bundles' .mdown README
2016-09-27 10:44:25 +02:00
Paul Chaignon
524337d07b Use Licensee hashes to uniquely identify licenses
Since v6.1.0, Licensee exposes the hash of the license.
We can use it to uniquely identify unrecognized licenses;
thus, tests will fail if the content of an unrecognized license changes.

Projects for which no license was found are kept in the whitelist
2016-09-27 10:44:25 +02:00
Paul Chaignon
f8ce42e169 Recognize licenses in READMEs using Licensee
Since v7.0.0 Licensee can detect license text in READMEs
Using this, we might be able to rely solely on Licensee in the future
2016-09-27 10:44:25 +02:00
Arfon Smith
71032cd252 Merge pull request #3249 from pchaigno/update-season-package
Update season package in npm to fix parsing error
2016-09-25 21:25:19 -07:00
Paul Chaignon
41593b3ea7 Update season package in npm to fix parsing error
Fixes the Travis build failures
2016-09-25 16:41:49 +02:00
Cyril Ferlicot
bed8add2f5 Add smalltalk samples
(Issue: https://github.com/github/linguist/issues/3244)
2016-09-24 17:32:35 +02:00
Joshua Peek
e424e8e88c Linguist 4.8.15 2016-09-23 16:41:16 -07:00
Joshua Peek
07d4f218a3 Merge pull request #3243 from github/change_modes_to_mimetypes
Convert from mode names to mimetypes for better usage.
2016-09-23 16:38:13 -07:00
Joshua Peek
67ed060d37 Assert CodeMirror modes and mime types are valid against source 2016-09-23 16:33:12 -07:00
Joshua Peek
3abe081560 Validate codemirror modes 2016-09-23 16:30:38 -07:00
Josh Cheek
d3f3c0345c Add a sample showing the Julia interpreter is correctly analyzed 2016-09-23 15:29:02 -07:00
Joshua Peek
855f1a1f86 Validate CodeMirror modes 2016-09-23 14:47:49 -07:00
Joshua Peek
0406a5b326 Fix typescript indent 2016-09-23 14:39:15 -07:00
Joshua Peek
0108ef4386 Restore old mode 2016-09-23 14:35:02 -07:00
Joshua Peek
daefff86ff Fix JSX mode 2016-09-23 13:57:50 -07:00
Joshua Peek
fdb962518f Consistent CodeMirror casing 2016-09-23 13:54:55 -07:00
Joshua Peek
6564078061 Merge branch 'master' into change_modes_to_mimetypes 2016-09-23 13:54:20 -07:00
Joshua Peek
39ea9be5f8 Ignore ace mode warning while testing 2016-09-23 13:53:38 -07:00
Joshua Peek
152b5ade5e Fix shadowed path warning 2016-09-23 13:50:01 -07:00
Joshua Peek
c525e3fbef Ignore default external warnings 2016-09-23 13:49:30 -07:00
Todd Berman
88c74fa9c2 Convert from mode names to mimetypes for better usage. 2016-09-23 13:40:19 -07:00
Josh Cheek
6a54ee767f Set interpreters for Julia
E.g. this file is not currently highlighted:
b766dcdbd2/julia/fullpath (L1)
2016-09-23 12:52:47 -07:00
Arfon Smith
2ea1ff2736 Merge pull request #3240 from github/cut-release-v4.8.14
v4.8.14 release
2016-09-22 21:36:30 -07:00
Arfon Smith
a1901fceff Bump version to v4.8.14 2016-09-22 21:03:33 -07:00
Arfon Smith
b4035a3804 Update grammars 2016-09-22 20:33:39 -07:00
Arfon Smith
fc67fc525c Merge pull request #3219 from Alhadis/emacs-files
Add .gnus, .viper and Project.ede as Emacs Lisp extensions
2016-09-22 08:10:12 -07:00
Arfon Smith
f0659d3aa5 Merge pull request #3213 from larsbrinkhoff/povray
POV-Ray heuristic: #declare
2016-09-21 22:36:49 -07:00
Lars Brinkhoff
a7a123a8db Add heuristic for .inc files: the #declare keyword is unique to POV-Ray.
Also added #local, #macro, and #while.
2016-09-22 07:02:44 +02:00
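Sketched as a content check (simplified; keyword list from the message above, not the shipped rule):

```ruby
# Hypothetical .inc check: these directives are unique to POV-Ray
# among the languages sharing the extension.
POV_RAY = /^\s*#(declare|local|macro|while)\b/

POV_RAY.match?("#declare Radius = 1.0;")  # => true
POV_RAY.match?("<?php // a PHP include")  # => false
```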
Arfon Smith
0e5327a77a Merge pull request #3238 from github/cut-release-v4.8.13
Bumping to v4.8.13
2016-09-21 21:35:19 -07:00
Andrew Case
82af10e3fd add support for MQL4 and MQL5 2016-09-19 22:03:57 +03:00
Andrey Osorgin
63c8d2284c Merge pull request #4 from github/master
Update from upstream repo github/linguist
2016-09-19 10:38:33 +03:00
Alhadis
81ca6e7766 Add "abbrev_defs" and "_emacs" as Elisp filenames 2016-09-15 18:57:35 +10:00
Alhadis
cd288a8ee4 Add .gnus, .viper and Project.ede as Emacs Lisp extensions 2016-09-15 17:06:59 +10:00
Alhadis
b61fe90d12 Terminate script if submodule registration failed 2016-09-12 02:17:10 +10:00
Alhadis
e6c849d92c Document --verbose option in usage message 2016-09-12 02:08:52 +10:00
Alhadis
3247d46e81 Chop trailing slashes before looking up language 2016-09-09 16:33:21 +10:00
Alhadis
be316c2943 Update contributor notes to mention new script 2016-09-07 05:36:00 +10:00
Alhadis
68c45be47d Flatten whitespace 2016-09-07 04:37:04 +10:00
Alhadis
4584963dd2 Add logic to update submodules and licenses 2016-09-07 04:26:45 +10:00
Alhadis
f382abc2f3 Add logic to consume and parse options 2016-09-07 03:14:49 +10:00
Alhadis
9d57e1e1b5 Remove debugging vestige 2016-09-07 00:59:13 +10:00
Alhadis
2a4150b104 Butcher whitespace to appease tab-hating heretics 2016-09-07 00:23:31 +10:00
Alhadis
09612ae42e Add logic to update Markdown file 2016-09-07 00:21:05 +10:00
Alhadis
49e9ee48d0 Make grammar-listing code more modular 2016-09-07 00:09:24 +10:00
Alhadis
a8719f3e82 Add script to generate grammars-list in Markdown 2016-09-06 23:07:18 +10:00
Adrian Sieber
00647be113 Exclude "dist" directory from language statistics 2016-08-31 14:56:07 +00:00
Alhadis
48b64c2d31 Add CSON samples 2016-08-29 01:57:32 +10:00
Alhadis
f95365946c Separate CSON from CoffeeScript 2016-08-29 01:56:05 +10:00
Andrey Osorgin
b056df06f4 Merge pull request #3 from github/master
Update from upstream repo github/linguist
2016-07-14 21:34:28 +03:00
osorgin
6bf223e641 Merge pull request #2 from github/master
Update from upstream repo github/linguist
2016-07-14 14:58:17 +03:00
osorgin
fa817b6a1d Merge pull request #1 from github/master
Update from upstream repo github/linguist
2016-07-12 16:14:26 +03:00
Brandon Keepers
789607d9bc Merge branch 'master' into go-vendor
* master: (168 commits)
  ruby for example
  Bumping version
  Updating grammars
  Grammar for Less from Atom package
  Remove Less grammar
  Updating to latest perl6 grammar
  Adding Perl6-specific grammar.
  Grammar for YANG from Atom package
  Support for YANG language
  Add detection of GrammarKit-generated files
  Add .xproj to list of XML extensions
  Test submodules are using HTTPS links
  Improved vim modeline detection
  Heuristic for Pod vs. Perl
  Bumping to v4.7.4
  Grammar update
  Support .rs.in as a file extension for Rust files.
  HTTPS links for submodules
  Add the LFE lexer as an example of erlang .xrl
  Add the Elixir parser as an example of erlang .yrl
  ...
2016-02-18 20:12:27 -05:00
Brandon Keepers
d46530989c Only treat .go files in ^vendor/ as generated 2016-02-18 19:57:34 -05:00
Keith Rarick
3c5bcb434c Add Go dependencies to generated.rb and test_blob.rb 2015-09-03 10:27:28 -07:00
382 changed files with 52623 additions and 3322 deletions

.gitignore (vendored): 1 change

@@ -1,3 +1,4 @@
+*.gem
 /Gemfile.lock
 .bundle/
 .idea

.gitmodules (vendored): 104 changes

@@ -67,9 +67,6 @@
 [submodule "vendor/grammars/language-javascript"]
 	path = vendor/grammars/language-javascript
 	url = https://github.com/atom/language-javascript
-[submodule "vendor/grammars/language-python"]
-	path = vendor/grammars/language-python
-	url = https://github.com/atom/language-python
 [submodule "vendor/grammars/language-shellscript"]
 	path = vendor/grammars/language-shellscript
 	url = https://github.com/atom/language-shellscript
@@ -130,9 +127,6 @@
 [submodule "vendor/grammars/Sublime-Text-2-OpenEdge-ABL"]
 	path = vendor/grammars/Sublime-Text-2-OpenEdge-ABL
 	url = https://github.com/jfairbank/Sublime-Text-2-OpenEdge-ABL
-[submodule "vendor/grammars/sublime-rust"]
-	path = vendor/grammars/sublime-rust
-	url = https://github.com/jhasse/sublime-rust
 [submodule "vendor/grammars/sublime-befunge"]
 	path = vendor/grammars/sublime-befunge
 	url = https://github.com/johanasplund/sublime-befunge
@@ -180,7 +174,7 @@
 	url = https://github.com/mokus0/Agda.tmbundle
 [submodule "vendor/grammars/Julia.tmbundle"]
 	path = vendor/grammars/Julia.tmbundle
-	url = https://github.com/nanoant/Julia.tmbundle
+	url = https://github.com/JuliaEditorSupport/Julia.tmbundle
 [submodule "vendor/grammars/ooc.tmbundle"]
 	path = vendor/grammars/ooc.tmbundle
 	url = https://github.com/nilium/ooc.tmbundle
@@ -202,9 +196,6 @@
 [submodule "vendor/grammars/sublime-robot-plugin"]
 	path = vendor/grammars/sublime-robot-plugin
 	url = https://github.com/shellderp/sublime-robot-plugin
-[submodule "vendor/grammars/actionscript3-tmbundle"]
-	path = vendor/grammars/actionscript3-tmbundle
-	url = https://github.com/honzabrecka/actionscript3-tmbundle
 [submodule "vendor/grammars/Sublime-QML"]
 	path = vendor/grammars/Sublime-QML
 	url = https://github.com/skozlovf/Sublime-QML
@@ -250,9 +241,6 @@
 [submodule "vendor/grammars/cpp-qt.tmbundle"]
 	path = vendor/grammars/cpp-qt.tmbundle
 	url = https://github.com/textmate/cpp-qt.tmbundle
-[submodule "vendor/grammars/css.tmbundle"]
-	path = vendor/grammars/css.tmbundle
-	url = https://github.com/textmate/css.tmbundle
 [submodule "vendor/grammars/d.tmbundle"]
 	path = vendor/grammars/d.tmbundle
 	url = https://github.com/textmate/d.tmbundle
@@ -328,9 +316,6 @@
 [submodule "vendor/grammars/nemerle.tmbundle"]
 	path = vendor/grammars/nemerle.tmbundle
 	url = https://github.com/textmate/nemerle.tmbundle
-[submodule "vendor/grammars/ninja.tmbundle"]
-	path = vendor/grammars/ninja.tmbundle
-	url = https://github.com/textmate/ninja.tmbundle
 [submodule "vendor/grammars/objective-c.tmbundle"]
 	path = vendor/grammars/objective-c.tmbundle
 	url = https://github.com/textmate/objective-c.tmbundle
@@ -358,9 +343,6 @@
 [submodule "vendor/grammars/r.tmbundle"]
 	path = vendor/grammars/r.tmbundle
 	url = https://github.com/textmate/r.tmbundle
-[submodule "vendor/grammars/ruby-haml.tmbundle"]
-	path = vendor/grammars/ruby-haml.tmbundle
-	url = https://github.com/textmate/ruby-haml.tmbundle
 [submodule "vendor/grammars/scheme.tmbundle"]
 	path = vendor/grammars/scheme.tmbundle
 	url = https://github.com/textmate/scheme.tmbundle
@@ -452,9 +434,6 @@
 [submodule "vendor/grammars/Sublime-Nit"]
 	path = vendor/grammars/Sublime-Nit
 	url = https://github.com/R4PaSs/Sublime-Nit
-[submodule "vendor/grammars/language-hy"]
-	path = vendor/grammars/language-hy
-	url = https://github.com/rwtolbert/language-hy
 [submodule "vendor/grammars/Racket"]
 	path = vendor/grammars/Racket
 	url = https://github.com/soegaard/racket-highlight-for-github
@@ -632,9 +611,6 @@
 [submodule "vendor/grammars/language-yang"]
 	path = vendor/grammars/language-yang
 	url = https://github.com/DzonyKalafut/language-yang.git
-[submodule "vendor/grammars/perl6fe"]
-	path = vendor/grammars/perl6fe
-	url = https://github.com/MadcapJake/language-perl6fe.git
 [submodule "vendor/grammars/language-less"]
 	path = vendor/grammars/language-less
 	url = https://github.com/atom/language-less.git
@@ -779,9 +755,6 @@
 [submodule "vendor/grammars/vhdl"]
 	path = vendor/grammars/vhdl
 	url = https://github.com/textmate/vhdl.tmbundle
-[submodule "vendor/grammars/xquery"]
-	path = vendor/grammars/xquery
-	url = https://github.com/textmate/xquery.tmbundle
 [submodule "vendor/grammars/language-rpm-spec"]
 	path = vendor/grammars/language-rpm-spec
 	url = https://github.com/waveclaw/language-rpm-spec
@@ -791,3 +764,78 @@
 [submodule "vendor/grammars/language-babel"]
 	path = vendor/grammars/language-babel
 	url = https://github.com/github-linguist/language-babel
+[submodule "vendor/CodeMirror"]
+	path = vendor/CodeMirror
+	url = https://github.com/codemirror/CodeMirror
+[submodule "vendor/grammars/MQL5-sublime"]
+	path = vendor/grammars/MQL5-sublime
+	url = https://github.com/mqsoft/MQL5-sublime
+[submodule "vendor/grammars/actionscript3-tmbundle"]
+	path = vendor/grammars/actionscript3-tmbundle
+	url = https://github.com/simongregory/actionscript3-tmbundle
+[submodule "vendor/grammars/ABNF.tmbundle"]
+	path = vendor/grammars/ABNF.tmbundle
+	url = https://github.com/sanssecours/ABNF.tmbundle
+[submodule "vendor/grammars/EBNF.tmbundle"]
+	path = vendor/grammars/EBNF.tmbundle
+	url = https://github.com/sanssecours/EBNF.tmbundle
+[submodule "vendor/grammars/language-haml"]
+	path = vendor/grammars/language-haml
+	url = https://github.com/ezekg/language-haml
+[submodule "vendor/grammars/language-ninja"]
+	path = vendor/grammars/language-ninja
+	url = https://github.com/khyo/language-ninja
+[submodule "vendor/grammars/language-fontforge"]
+	path = vendor/grammars/language-fontforge
+	url = https://github.com/Alhadis/language-fontforge
+[submodule "vendor/grammars/language-gn"]
+	path = vendor/grammars/language-gn
+	url = https://github.com/devoncarew/language-gn
+[submodule "vendor/grammars/rascal-syntax-highlighting"]
+	path = vendor/grammars/rascal-syntax-highlighting
+	url = https://github.com/usethesource/rascal-syntax-highlighting
+[submodule "vendor/grammars/atom-language-perl6"]
+	path = vendor/grammars/atom-language-perl6
+	url = https://github.com/perl6/atom-language-perl6
+[submodule "vendor/grammars/reason"]
+	path = vendor/grammars/reason
+	url = https://github.com/facebook/reason
+[submodule "vendor/grammars/language-xcompose"]
+	path = vendor/grammars/language-xcompose
+	url = https://github.com/samcv/language-xcompose
+[submodule "vendor/grammars/SublimeEthereum"]
+	path = vendor/grammars/SublimeEthereum
+	url = https://github.com/davidhq/SublimeEthereum.git
+[submodule "vendor/grammars/atom-language-rust"]
+	path = vendor/grammars/atom-language-rust
+	url = https://github.com/zargony/atom-language-rust
+[submodule "vendor/grammars/language-css"]
+	path = vendor/grammars/language-css
+	url = https://github.com/atom/language-css
+[submodule "vendor/grammars/language-regexp"]
+	path = vendor/grammars/language-regexp
+	url = https://github.com/Alhadis/language-regexp
+[submodule "vendor/grammars/Terraform.tmLanguage"]
+	path = vendor/grammars/Terraform.tmLanguage
+	url = https://github.com/alexlouden/Terraform.tmLanguage
+[submodule "vendor/grammars/shaders-tmLanguage"]
+	path = vendor/grammars/shaders-tmLanguage
+	url = https://github.com/tgjones/shaders-tmLanguage
+[submodule "vendor/grammars/language-meson"]
+	path = vendor/grammars/language-meson
+	url = https://github.com/TingPing/language-meson
+[submodule "vendor/grammars/atom-language-p4"]
+	path = vendor/grammars/atom-language-p4
+	url = https://github.com/TakeshiTseng/atom-language-p4
+[submodule "vendor/grammars/language-jison"]
+	path = vendor/grammars/language-jison
+	url = https://github.com/cdibbs/language-jison
+[submodule "vendor/grammars/openscad.tmbundle"]
+	path = vendor/grammars/openscad.tmbundle
+	url = https://github.com/tbuser/openscad.tmbundle
+[submodule "vendor/grammars/marko-tmbundle"]
+	path = vendor/grammars/marko-tmbundle
+	url = https://github.com/marko-js/marko-tmbundle
+[submodule "vendor/grammars/language-jolie"]
+	path = vendor/grammars/language-jolie
+	url = https://github.com/fmontesi/language-jolie

.travis.yml

@@ -1,20 +1,33 @@
language: ruby
sudo: false
addons:
  apt:
    packages:
    - libicu-dev
    - libicu48
before_install: script/travis/before_install
script:
- bundle exec rake
- script/licensed verify
rvm:
- 2.0.0
- 2.1
- 2.2
- 2.3.3
- 2.4.0
matrix:
  allow_failures:
  - rvm: 2.4.0
notifications:
  disabled: true
git:
  submodules: false
  depth: 3
cache: bundler

CONTRIBUTING.md

@@ -10,15 +10,15 @@ We try only to add new extensions once they have some usage on GitHub. In most c
 To add support for a new extension:
-0. Add your extension to the language entry in [`languages.yml`][languages], keeping the extensions in alphabetical order.
-0. Add at least one sample for your extension to the [samples directory][samples] in the correct subdirectory.
-0. Open a pull request, linking to a [GitHub search result](https://github.com/search?utf8=%E2%9C%93&q=extension%3Aboot+NOT+nothack&type=Code&ref=searchresults) showing in-the-wild usage.
+1. Add your extension to the language entry in [`languages.yml`][languages], keeping the extensions in alphabetical order.
+1. Add at least one sample for your extension to the [samples directory][samples] in the correct subdirectory.
+1. Open a pull request, linking to a [GitHub search result](https://github.com/search?utf8=%E2%9C%93&q=extension%3Aboot+NOT+nothack&type=Code&ref=searchresults) showing in-the-wild usage.
 In addition, if this extension is already listed in [`languages.yml`][languages] then sometimes a few more steps will need to be taken:
-0. Make sure that example `.yourextension` files are present in the [samples directory][samples] for each language that uses `.yourextension`.
-0. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.yourextension` files. (ping @arfon or @bkeepers to help with this) to ensure we're not misclassifying files.
-0. If the Bayesian classifier does a bad job with the sample `.yourextension` files then a [heuristic](https://github.com/github/linguist/blob/master/lib/linguist/heuristics.rb) may need to be written to help.
+1. Make sure that example `.yourextension` files are present in the [samples directory][samples] for each language that uses `.yourextension`.
+1. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.yourextension` files. (ping **@bkeepers** to help with this) to ensure we're not misclassifying files.
+1. If the Bayesian classifier does a bad job with the sample `.yourextension` files then a [heuristic](https://github.com/github/linguist/blob/master/lib/linguist/heuristics.rb) may need to be written to help.
 ## Adding a language
@@ -27,20 +27,17 @@ We try only to add languages once they have some usage on GitHub. In most cases
To add support for a new language:
0. Add an entry for your language to [`languages.yml`][languages].
0. Add a grammar for your language. Please only add grammars that have [one of these licenses](https://github.com/github/linguist/blob/257425141d4e2a5232786bf0b13c901ada075f93/vendor/licenses/config.yml#L2-L11).
0. Add your grammar as a submodule: `git submodule add https://github.com/JaneSmith/MyGrammar vendor/grammars/MyGrammar`.
0. Add your grammar to [`grammars.yml`][grammars] by running `script/convert-grammars --add vendor/grammars/MyGrammar`.
0. Download the license for the grammar: `script/licensed`. Be careful to only commit the file for the new grammar, as this script may update licenses for other grammars as well.
0. Add samples for your language to the [samples directory][samples] in the correct subdirectory.
0. Add a `language_id` for your language. See `script/set-language-ids` for more information. **You should only ever need to run `script/set-language-ids --update`. Anything other than this risks breaking GitHub search :cry:**
0. Open a pull request, linking to a [GitHub search result](https://github.com/search?utf8=%E2%9C%93&q=extension%3Aboot+NOT+nothack&type=Code&ref=searchresults) showing in-the-wild usage.
1. Add an entry for your language to [`languages.yml`][languages]. Omit the `language_id` field for now.
1. Add a grammar for your language: `script/add-grammar https://github.com/JaneSmith/MyGrammar`. Please only add grammars that have [one of these licenses][licenses].
1. Add samples for your language to the [samples directory][samples] in the correct subdirectory.
1. Add a `language_id` for your language using `script/set-language-ids`. **You should only ever need to run `script/set-language-ids --update`. Anything other than this risks breaking GitHub search :cry:**
1. Open a pull request, linking to a [GitHub search result](https://github.com/search?utf8=%E2%9C%93&q=extension%3Aboot+NOT+nothack&type=Code&ref=searchresults) showing in-the-wild usage.
In addition, if your new language defines an extension that's already listed in [`languages.yml`][languages] (such as `.foo`) then sometimes a few more steps will need to be taken:
0. Make sure that example `.foo` files are present in the [samples directory][samples] for each language that uses `.foo`.
0. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.foo` files (ping @arfon or @bkeepers to help with this) to ensure we're not misclassifying files.
0. If the Bayesian classifier does a bad job with the sample `.foo` files then a [heuristic](https://github.com/github/linguist/blob/master/lib/linguist/heuristics.rb) may need to be written to help.
1. Make sure that example `.foo` files are present in the [samples directory][samples] for each language that uses `.foo`.
1. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.foo` files (ping **@bkeepers** to help with this) to ensure we're not misclassifying files.
1. If the Bayesian classifier does a bad job with the sample `.foo` files then a [heuristic](https://github.com/github/linguist/blob/master/lib/linguist/heuristics.rb) may need to be written to help.
Remember, the goal here is to try and avoid false positives!
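
As a quick local sanity check before opening a pull request, you can ask Linguist what it makes of one of your new samples. A minimal sketch (the sample path here is hypothetical):

```ruby
require 'linguist'

# Classify a single sample file the same way GitHub would.
blob = Linguist::FileBlob.new("samples/Perl/module.pl", Dir.pwd)
lang = blob.language
puts lang ? lang.name : "unknown"
```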
@@ -82,9 +79,15 @@ Here's our current build status: [![Build Status](https://api.travis-ci.org/gith
Linguist is maintained with :heart: by:
- @arfon (GitHub Staff)
- @larsbrinkhoff
- @pchaigno
- **@Alhadis**
- **@BenEddy** (GitHub staff)
- **@Caged** (GitHub staff)
- **@grantr** (GitHub staff)
- **@larsbrinkhoff**
- **@lildude** (GitHub staff)
- **@pchaigno**
- **@rafer** (GitHub staff)
- **@shreyasjoshis** (GitHub staff)
As Linguist is a production dependency for GitHub we have a couple of workflow restrictions:
@@ -95,23 +98,24 @@ As Linguist is a production dependency for GitHub we have a couple of workflow r
If you are the current maintainer of this gem:
0. Create a branch for the release: `git checkout -b cut-release-vxx.xx.xx`
0. Make sure your local dependencies are up to date: `script/bootstrap`
0. If grammar submodules have not been updated recently, update them: `git submodule update --remote && git commit -a`
0. Ensure that samples are updated: `bundle exec rake samples`
0. Ensure that tests are green: `bundle exec rake test`
0. Bump gem version in `lib/linguist/version.rb`, [like this](https://github.com/github/linguist/commit/8d2ea90a5ba3b2fe6e1508b7155aa4632eea2985).
0. Make a PR to github/linguist, [like this](https://github.com/github/linguist/pull/1238).
0. Build a local gem: `bundle exec rake build_gem`
0. Test the gem:
0. Bump the Gemfile and Gemfile.lock versions for an app which relies on this gem
0. Install the new gem locally
0. Test behavior locally, branch deploy, whatever needs to happen
0. Merge github/linguist PR
0. Tag and push: `git tag vx.xx.xx; git push --tags`
0. Push to rubygems.org -- `gem push github-linguist-3.0.0.gem`
1. Create a branch for the release: `git checkout -b cut-release-vxx.xx.xx`
1. Make sure your local dependencies are up to date: `script/bootstrap`
1. If grammar submodules have not been updated recently, update them: `git submodule update --remote && git commit -a`
1. Ensure that samples are updated: `bundle exec rake samples`
1. Ensure that tests are green: `bundle exec rake test`
1. Bump gem version in `lib/linguist/version.rb`, [like this](https://github.com/github/linguist/commit/8d2ea90a5ba3b2fe6e1508b7155aa4632eea2985).
1. Make a PR to github/linguist, [like this](https://github.com/github/linguist/pull/1238).
1. Build a local gem: `bundle exec rake build_gem`
1. Test the gem:
1. Bump the Gemfile and Gemfile.lock versions for an app which relies on this gem
1. Install the new gem locally
1. Test behavior locally, branch deploy, whatever needs to happen
1. Merge github/linguist PR
1. Tag and push: `git tag vx.xx.xx; git push --tags`
1. Push to rubygems.org -- `gem push github-linguist-3.0.0.gem`
[grammars]: /grammars.yml
[languages]: /lib/linguist/languages.yml
[licenses]: https://github.com/github/linguist/blob/257425141d4e2a5232786bf0b13c901ada075f93/vendor/licenses/config.yml#L2-L11
[samples]: /samples
[new-issue]: https://github.com/github/linguist/issues/new


@@ -1,4 +1,4 @@
Copyright (c) 2011-2016 GitHub, Inc.
Copyright (c) 2017 GitHub, Inc.
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation


@@ -15,10 +15,16 @@ See [Troubleshooting](#troubleshooting) and [`CONTRIBUTING.md`](/CONTRIBUTING.md
The Language stats bar displays languages percentages for the files in the repository. The percentages are calculated based on the bytes of code for each language as reported by the [List Languages](https://developer.github.com/v3/repos/#list-languages) API. If the bar is reporting a language that you don't expect:
0. Click on the name of the language in the stats bar to see a list of the files that are identified as that language.
0. If you see files that you didn't write, consider moving the files into one of the [paths for vendored code](/lib/linguist/vendor.yml), or use the [manual overrides](#overrides) feature to ignore them.
0. If the files are being misclassified, search for [open issues][issues] to see if anyone else has already reported the issue. Any information you can add, especially links to public repositories, is helpful.
0. If there are no reported issues of this misclassification, [open an issue][new-issue] and include a link to the repository or a sample of the code that is being misclassified.
1. Click on the name of the language in the stats bar to see a list of the files that are identified as that language.
1. If you see files that you didn't write, consider moving the files into one of the [paths for vendored code](/lib/linguist/vendor.yml), or use the [manual overrides](#overrides) feature to ignore them.
1. If the files are being misclassified, search for [open issues][issues] to see if anyone else has already reported the issue. Any information you can add, especially links to public repositories, is helpful.
1. If there are no reported issues of this misclassification, [open an issue][new-issue] and include a link to the repository or a sample of the code that is being misclassified.
### There's a problem with the syntax highlighting of a file
Linguist detects the language of a file but the actual syntax-highlighting is powered by a set of language grammars which are included in this project as a set of submodules [and may be found here](https://github.com/github/linguist/blob/master/vendor/README.md).
If you experience an issue with the syntax-highlighting on GitHub, **please report the issue to the upstream grammar repository, not here.** Grammars are updated every time we build the Linguist gem and so upstream bug fixes are automatically incorporated as they are fixed.
## Overrides
@@ -26,13 +32,15 @@ Linguist supports a number of different custom overrides strategies for language
### Using gitattributes
Add a `.gitattributes` file to your project and use standard git-style path matchers for the files you want to override to set `linguist-documentation`, `linguist-language`, and `linguist-vendored`. `.gitattributes` will be used to determine language statistics, but will not be used to syntax highlight files. To manually set syntax highlighting, use [Vim or Emacs modelines](#using-emacs-or-vim-modelines).
Add a `.gitattributes` file to your project and use standard git-style path matchers for the files you want to override to set `linguist-documentation`, `linguist-language`, `linguist-vendored`, and `linguist-generated`. `.gitattributes` will be used to determine language statistics and will be used to syntax highlight files. You can also manually set syntax highlighting using [Vim or Emacs modelines](#using-emacs-or-vim-modelines).
```
$ cat .gitattributes
*.rb linguist-language=Java
```
#### Vendored code
Checking code you didn't write, such as JavaScript libraries, into your git repo is a common practice, but this often inflates your project's language stats and may even cause your project to be labeled as another language. By default, Linguist treats all of the paths defined in [lib/linguist/vendor.yml](https://github.com/github/linguist/blob/master/lib/linguist/vendor.yml) as vendored and therefore doesn't include them in the language statistics for a repository.
Use the `linguist-vendored` attribute to vendor or un-vendor paths.
@@ -43,6 +51,8 @@ special-vendored-path/* linguist-vendored
jquery.js linguist-vendored=false
```
#### Documentation
Just like vendored files, Linguist excludes documentation files from your project's language stats. [lib/linguist/documentation.yml](lib/linguist/documentation.yml) lists common documentation paths and excludes them from the language statistics for your repository.
Use the `linguist-documentation` attribute to mark or unmark paths as documentation.
@@ -53,19 +63,18 @@ project-docs/* linguist-documentation
docs/formatter.rb linguist-documentation=false
```
#### Generated file detection
#### Generated code
Not all plain text files are true source files. Generated files like minified js and compiled CoffeeScript can be detected and excluded from language stats. As an added bonus, unlike vendored and documentation files, these files are suppressed in diffs.
```ruby
Linguist::FileBlob.new("underscore.min.js").generated? # => true
```
See [Linguist::Generated#generated?](https://github.com/github/linguist/blob/master/lib/linguist/generated.rb).

```
$ cat .gitattributes
Api.elm linguist-generated=true
```
### Using Emacs or Vim modelines
Alternatively, you can use Vim or Emacs style modelines to set the language for a single file. Modelines can be placed anywhere within a file and are respected when determining how to syntax-highlight a file on GitHub.com
If you do not want to use `.gitattributes` to override the syntax highlighting used on GitHub.com, you can use Vim or Emacs style modelines to set the language for a single file. Modelines can be placed anywhere within a file and are respected when determining how to syntax-highlight a file on GitHub.com
##### Vim
```


@@ -4,6 +4,7 @@ require 'rake/testtask'
require 'yaml'
require 'yajl'
require 'open-uri'
require 'json'
task :default => :test


@@ -1,5 +1,7 @@
#!/usr/bin/env ruby
$LOAD_PATH[0, 0] = File.join(File.dirname(__FILE__), '..', 'lib')
require 'linguist'
require 'rugged'
require 'optparse'
@@ -102,10 +104,16 @@ def git_linguist(args)
commit = nil
parser = OptionParser.new do |opts|
opts.banner = "Usage: git-linguist [OPTIONS] stats|breakdown|dump-cache|clear|disable"
opts.banner = <<-HELP
Linguist v#{Linguist::VERSION}
Detect language type and determine language breakdown for a given Git repository.
Usage:
git-linguist [OPTIONS] stats|breakdown|dump-cache|clear|disable
HELP
opts.on("-f", "--force", "Force a full rescan") { incremental = false }
opts.on("--commit=COMMIT", "Commit to index") { |v| commit = v}
opts.on("-c", "--commit=COMMIT", "Commit to index") { |v| commit = v}
end
parser.parse!(args)


@@ -1,29 +1,37 @@
#!/usr/bin/env ruby
# linguist — detect language type for a file, or, given a directory, determine language breakdown
# usage: linguist <path> [<--breakdown>]
#
$LOAD_PATH[0, 0] = File.join(File.dirname(__FILE__), '..', 'lib')
require 'linguist'
require 'rugged'
require 'json'
require 'optparse'
path = ARGV[0] || Dir.pwd
# special case if not given a directory but still given the --breakdown option
# special case if not given a directory
# but still given the --breakdown or --json options
if path == "--breakdown"
path = Dir.pwd
breakdown = true
elsif path == "--json"
path = Dir.pwd
json_breakdown = true
end
ARGV.shift
breakdown = true if ARGV[0] == "--breakdown"
json_breakdown = true if ARGV[0] == "--json"
if File.directory?(path)
rugged = Rugged::Repository.new(path)
repo = Linguist::Repository.new(rugged, rugged.head.target_id)
repo.languages.sort_by { |_, size| size }.reverse.each do |language, size|
percentage = ((size / repo.size.to_f) * 100)
percentage = sprintf '%.2f' % percentage
puts "%-7s %s" % ["#{percentage}%", language]
if !json_breakdown
repo.languages.sort_by { |_, size| size }.reverse.each do |language, size|
percentage = ((size / repo.size.to_f) * 100)
percentage = sprintf '%.2f' % percentage
puts "%-7s %s" % ["#{percentage}%", language]
end
end
if breakdown
puts
@@ -35,6 +43,8 @@ if File.directory?(path)
end
puts
end
elsif json_breakdown
puts JSON.dump(repo.breakdown_by_file)
end
elsif File.file?(path)
blob = Linguist::FileBlob.new(path, Dir.pwd)
@@ -63,5 +73,12 @@ elsif File.file?(path)
puts " appears to be a vendored file"
end
else
abort "usage: linguist <path>"
abort <<-HELP
Linguist v#{Linguist::VERSION}
Detect language type for a file, or, given a repository, determine language breakdown.
Usage: linguist <path>
linguist <path> [--breakdown] [--json]
linguist [--breakdown] [--json]
HELP
end
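
For reference, the new `--json` path is a thin wrapper over the existing repository API; a minimal sketch of the equivalent calls (run from inside a Git checkout):

```ruby
require 'rugged'
require 'linguist'
require 'json'

# Same objects the CLI builds above, dumped as JSON per file.
rugged = Rugged::Repository.new(".")
repo   = Linguist::Repository.new(rugged, rugged.head.target_id)
puts JSON.dump(repo.breakdown_by_file)
```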


@@ -16,7 +16,7 @@ Gem::Specification.new do |s|
s.add_dependency 'charlock_holmes', '~> 0.7.3'
s.add_dependency 'escape_utils', '~> 1.1.0'
s.add_dependency 'mime-types', '>= 1.19'
s.add_dependency 'rugged', '>= 0.23.0b'
s.add_dependency 'rugged', '>= 0.25.1'
s.add_development_dependency 'minitest', '>= 5.0'
s.add_development_dependency 'mocha'
@@ -26,6 +26,5 @@ Gem::Specification.new do |s|
s.add_development_dependency 'yajl-ruby'
s.add_development_dependency 'color-proximity', '~> 0.2.1'
s.add_development_dependency 'licensed'
s.add_development_dependency 'licensee', '>= 8.3.0'
s.add_development_dependency 'licensee', '~> 8.8.0'
end


@@ -1,9 +1,11 @@
---
http://svn.edgewall.org/repos/genshi/contrib/textmate/Genshi.tmbundle/Syntaxes/Markup%20Template%20%28XML%29.tmLanguage:
- text.xml.genshi
https://bitbucket.org/Clams/sublimesystemverilog/get/default.tar.gz:
- source.systemverilog
- source.ucfconstraints
https://svn.edgewall.org/repos/genshi/contrib/textmate/Genshi.tmbundle/Syntaxes/Markup%20Template%20%28XML%29.tmLanguage:
- text.xml.genshi
vendor/grammars/ABNF.tmbundle:
- source.abnf
vendor/grammars/Agda.tmbundle:
- source.agda
vendor/grammars/Alloy.tmbundle:
@@ -20,6 +22,8 @@ vendor/grammars/ColdFusion:
- text.html.cfm
vendor/grammars/Docker.tmbundle:
- source.dockerfile
vendor/grammars/EBNF.tmbundle:
- source.ebnf
vendor/grammars/Elm/Syntaxes:
- source.elm
- text.html.mediawiki.elm-build-output
@@ -47,9 +51,13 @@ vendor/grammars/Lean.tmbundle:
- source.lean
vendor/grammars/LiveScript.tmbundle:
- source.livescript
vendor/grammars/MQL5-sublime:
- source.mql5
vendor/grammars/MagicPython:
- source.python
- source.regexp.python
- text.python.console
- text.python.traceback
vendor/grammars/Modelica:
- source.modelica
vendor/grammars/NSIS:
@@ -107,7 +115,9 @@ vendor/grammars/SublimeBrainfuck:
- source.bf
vendor/grammars/SublimeClarion:
- source.clarion
vendor/grammars/SublimeGDB:
vendor/grammars/SublimeEthereum:
- source.solidity
vendor/grammars/SublimeGDB/:
- source.disasm
- source.gdb
- source.gdb.session
@@ -122,6 +132,8 @@ vendor/grammars/TLA:
- source.tla
vendor/grammars/TXL:
- source.txl
vendor/grammars/Terraform.tmLanguage:
- source.terraform
vendor/grammars/Textmate-Gosu-Bundle:
- source.gosu.2
vendor/grammars/UrWeb-Language-Definition:
@@ -134,7 +146,7 @@ vendor/grammars/X10:
- source.x10
vendor/grammars/abap.tmbundle:
- source.abap
vendor/grammars/actionscript3-tmbundle:
vendor/grammars/actionscript3-tmbundle/:
- source.actionscript.3
- text.html.asdoc
- text.xml.flex-config
@@ -172,8 +184,18 @@ vendor/grammars/atom-language-1c-bsl:
- source.sdbl
vendor/grammars/atom-language-clean:
- source.clean
- text.restructuredtext.clean
vendor/grammars/atom-language-p4:
- source.p4
vendor/grammars/atom-language-perl6:
- source.meta-info
- source.perl6fe
- source.quoting.perl6fe
- source.regexp.perl6fe
vendor/grammars/atom-language-purescript:
- source.purescript
vendor/grammars/atom-language-rust:
- source.rust
vendor/grammars/atom-language-srt:
- text.srt
vendor/grammars/atom-language-stan:
@@ -205,7 +227,6 @@ vendor/grammars/capnproto.tmbundle:
vendor/grammars/carto-atom:
- source.css.mss
vendor/grammars/ceylon-sublimetext:
- module.ceylon
- source.ceylon
vendor/grammars/chapel-tmbundle:
- source.chapel
@@ -219,8 +240,6 @@ vendor/grammars/cpp-qt.tmbundle:
- source.qmake
vendor/grammars/creole:
- text.html.creole
vendor/grammars/css.tmbundle:
- source.css
vendor/grammars/cucumber-tmbundle:
- source.ruby.rspec.cucumber.steps
- text.gherkin.feature
@@ -354,12 +373,23 @@ vendor/grammars/language-csound:
- source.csound
- source.csound-document
- source.csound-score
vendor/grammars/language-css:
- source.css
vendor/grammars/language-emacs-lisp:
- source.emacs.lisp
vendor/grammars/language-fontforge:
- source.fontforge
- source.opentype
- text.sfd
vendor/grammars/language-gfm:
- source.gfm
vendor/grammars/language-gn:
- source.gn
vendor/grammars/language-graphql:
- source.graphql
vendor/grammars/language-haml:
- text.haml
- text.hamlc
vendor/grammars/language-haskell:
- hint.haskell
- hint.message.haskell
@@ -369,14 +399,19 @@ vendor/grammars/language-haskell:
- source.haskell
- source.hsc2hs
- text.tex.latex.haskell
vendor/grammars/language-hy:
- source.hy
vendor/grammars/language-inform7:
- source.inform7
vendor/grammars/language-javascript:
- source.js
- source.js.regexp
- source.js.regexp.replacement
- source.jsdoc
vendor/grammars/language-jison:
- source.jison
- source.jisonlex
- source.jisonlex-injection
vendor/grammars/language-jolie:
- source.jolie
vendor/grammars/language-jsoniq:
- source.jq
- source.xq
@@ -384,18 +419,24 @@ vendor/grammars/language-less:
- source.css.less
vendor/grammars/language-maxscript:
- source.maxscript
vendor/grammars/language-meson:
- source.meson
vendor/grammars/language-ncl:
- source.ncl
vendor/grammars/language-ninja:
- source.ninja
vendor/grammars/language-povray:
- source.pov-ray sdl
vendor/grammars/language-python:
- text.python.console
- text.python.traceback
vendor/grammars/language-regexp:
- source.regexp
- source.regexp.extended
vendor/grammars/language-renpy:
- source.renpy
vendor/grammars/language-restructuredtext:
- text.restructuredtext
vendor/grammars/language-roff:
- source.ditroff
- source.ditroff.desc
- source.ideal
- source.pic
- text.roff
@@ -419,6 +460,8 @@ vendor/grammars/language-wavefront:
- source.wavefront.obj
vendor/grammars/language-xbase:
- source.harbour
vendor/grammars/language-xcompose:
- config.xcompose
vendor/grammars/language-yaml:
- source.yaml
vendor/grammars/language-yang:
@@ -448,6 +491,8 @@ vendor/grammars/make.tmbundle:
- source.makefile
vendor/grammars/mako-tmbundle:
- text.html.mako
vendor/grammars/marko-tmbundle:
- text.marko
vendor/grammars/mathematica-tmbundle:
- source.mathematica
vendor/grammars/matlab.tmbundle:
@@ -467,8 +512,6 @@ vendor/grammars/nemerle.tmbundle:
- source.nemerle
vendor/grammars/nesC:
- source.nesc
vendor/grammars/ninja.tmbundle:
- source.ninja
vendor/grammars/nix:
- source.nix
vendor/grammars/nu.tmbundle:
@@ -487,6 +530,8 @@ vendor/grammars/ooc.tmbundle:
- source.ooc
vendor/grammars/opa.tmbundle:
- source.opa
vendor/grammars/openscad.tmbundle:
- source.scad
vendor/grammars/oz-tmbundle/Syntaxes/Oz.tmLanguage:
- source.oz
vendor/grammars/parrot:
@@ -498,10 +543,6 @@ vendor/grammars/pawn-sublime-language:
vendor/grammars/perl.tmbundle:
- source.perl
- source.perl.6
vendor/grammars/perl6fe:
- source.meta-info
- source.perl6fe
- source.regexp.perl6fe
vendor/grammars/php-smarty.tmbundle:
- text.html.smarty
vendor/grammars/php.tmbundle:
@@ -524,8 +565,10 @@ vendor/grammars/python-django.tmbundle:
vendor/grammars/r.tmbundle:
- source.r
- text.tex.latex.rd
vendor/grammars/ruby-haml.tmbundle:
- text.haml
vendor/grammars/rascal-syntax-highlighting:
- source.rascal
vendor/grammars/reason:
- source.reason
vendor/grammars/ruby-slim.tmbundle:
- text.slim
vendor/grammars/ruby.tmbundle:
@@ -545,6 +588,9 @@ vendor/grammars/scilab.tmbundle:
- source.scilab
vendor/grammars/secondlife-lsl:
- source.lsl
vendor/grammars/shaders-tmLanguage:
- source.hlsl
- source.shaderlab
vendor/grammars/smali-sublime:
- source.smali
vendor/grammars/smalltalk-tmbundle:
@@ -593,8 +639,6 @@ vendor/grammars/sublime-rexx:
- source.rexx
vendor/grammars/sublime-robot-plugin:
- text.robot
vendor/grammars/sublime-rust:
- source.rust
vendor/grammars/sublime-spintools:
- source.regexp.spin
- source.spin
@@ -646,7 +690,5 @@ vendor/grammars/xc.tmbundle:
vendor/grammars/xml.tmbundle:
- text.xml
- text.xml.xsl
vendor/grammars/xquery:
- source.xquery
vendor/grammars/zephir-sublime:
- source.php.zephir


@@ -15,9 +15,9 @@ class << Linguist
# see Linguist::LazyBlob and Linguist::FileBlob for examples
#
# Returns Language or nil.
def detect(blob)
def detect(blob, allow_empty: false)
# Bail early if the blob is binary or empty.
return nil if blob.likely_binary? || blob.binary? || blob.empty?
return nil if blob.likely_binary? || blob.binary? || (!allow_empty && blob.empty?)
Linguist.instrument("linguist.detection", :blob => blob) do
# Call each strategy until one candidate is returned.
@@ -59,8 +59,9 @@ class << Linguist
# Strategies are called in turn until a single Language is returned.
STRATEGIES = [
Linguist::Strategy::Modeline,
Linguist::Shebang,
Linguist::Strategy::Filename,
Linguist::Shebang,
Linguist::Strategy::Extension,
Linguist::Heuristics,
Linguist::Classifier
]
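
A sketch of what the new keyword argument enables: an empty file that previously returned `nil` outright can now fall through to the filename strategy (the `Gemfile` here is hypothetical and assumed to exist and be empty):

```ruby
require 'linguist'

blob = Linguist::FileBlob.new("Gemfile", Dir.pwd)  # hypothetical empty file
Linguist.detect(blob)                      # => nil (empty blobs bail early)
Linguist.detect(blob, allow_empty: true)   # => #<Language name="Ruby">, via Strategy::Filename
```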
@@ -73,7 +74,7 @@ class << Linguist
# end
# end
#
# Linguist.instrumenter = CustomInstrumenter
# Linguist.instrumenter = CustomInstrumenter.new
#
# The instrumenter must conform to the `ActiveSupport::Notifications`
# interface, which defines `#instrument` and accepts:


@@ -63,7 +63,7 @@ module Linguist
#
# Returns an Array
def extensions
_, *segments = name.downcase.split(".")
_, *segments = name.downcase.split(".", -1)
segments.map.with_index do |segment, index|
"." + segments[index..-1].join(".")


@@ -95,7 +95,7 @@ module Linguist
# Returns sorted Array of result pairs. Each pair contains the
# String language name and a Float score.
def classify(tokens, languages)
return [] if tokens.nil?
return [] if tokens.nil? || languages.empty?
tokens = Tokenizer.tokenize(tokens) if tokens.is_a?(String)
scores = {}


@@ -9,11 +9,12 @@
## Documentation directories ##
- ^docs?/
- ^[Dd]ocs?/
- (^|/)[Dd]ocumentation/
- (^|/)javadoc/
- ^man/
- (^|/)[Jj]avadoc/
- ^[Mm]an/
- ^[Ee]xamples/
- ^[Dd]emos?/
## Documentation files ##
@@ -27,4 +28,4 @@
- (^|/)[Rr]eadme(\.|$)
# Samples folders
- ^[Ss]amples/
- ^[Ss]amples?/
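
The trailing `?` in the new pattern makes the final "s" optional, so singular `Sample/` directories are now excluded too; a quick check:

```ruby
%r{^[Ss]amples?/} =~ "Sample/demo.rb"  # => 0 (matches)
%r{^[Ss]amples/}  =~ "Sample/demo.rb"  # => nil (the old pattern missed it)
```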


@@ -3,7 +3,7 @@ module Linguist
# Public: Is the blob a generated file?
#
# name - String filename
# data - String blob data. A block also maybe passed in for lazy
# data - String blob data. A block also may be passed in for lazy
# loading. This behavior is deprecated and you should always
# pass in a String.
#
@@ -56,6 +56,7 @@ module Linguist
generated_net_specflow_feature_file? ||
composer_lock? ||
node_modules? ||
go_vendor? ||
npm_shrinkwrap? ||
godeps? ||
generated_by_zephir? ||
@@ -69,6 +70,7 @@ module Linguist
compiled_cython_file? ||
generated_go? ||
generated_protocol_buffer? ||
generated_javascript_protocol_buffer? ||
generated_apache_thrift? ||
generated_jni_header? ||
vcr_cassette? ||
@@ -76,7 +78,10 @@ module Linguist
generated_unity3d_meta? ||
generated_racc? ||
generated_jflex? ||
generated_grammarkit?
generated_grammarkit? ||
generated_roxygen2? ||
generated_jison? ||
generated_yarn_lock?
end
# Internal: Is the blob an Xcode file?
@@ -274,16 +279,25 @@ module Linguist
return lines[0].include?("Generated by the protocol buffer compiler. DO NOT EDIT!")
end
APACHE_THRIFT_EXTENSIONS = ['.rb', '.py', '.go', '.js', '.m', '.java', '.h', '.cc', '.cpp']
# Internal: Is the blob a Javascript source file generated by the
# Protocol Buffer compiler?
#
# Returns true or false.
def generated_javascript_protocol_buffer?
return false unless extname == ".js"
return false unless lines.count > 6
return lines[5].include?("GENERATED CODE -- DO NOT EDIT!")
end
APACHE_THRIFT_EXTENSIONS = ['.rb', '.py', '.go', '.js', '.m', '.java', '.h', '.cc', '.cpp', '.php']
# Internal: Is the blob generated by Apache Thrift compiler?
#
# Returns true or false
def generated_apache_thrift?
return false unless APACHE_THRIFT_EXTENSIONS.include?(extname)
return false unless lines.count > 1
return lines[0].include?("Autogenerated by Thrift Compiler") || lines[1].include?("Autogenerated by Thrift Compiler")
return lines.first(6).any? { |l| l.include?("Autogenerated by Thrift Compiler") }
end
# Internal: Is the blob a C/C++ header generated by the Java JNI tool javah?
@@ -304,7 +318,15 @@ module Linguist
!!name.match(/node_modules\//)
end
# Internal: Is the blob a generated npm shrinkwrap file.
# Internal: Is the blob part of the Go vendor/ tree,
# not meant for humans in pull requests.
#
# Returns true or false.
def go_vendor?
!!name.match(/vendor\/((?!-)[-0-9A-Za-z]+(?<!-)\.)+(com|edu|gov|in|me|net|org|fm|io)/)
end
# Internal: Is the blob a generated npm shrinkwrap file?
#
# Returns true or false.
def npm_shrinkwrap?
@@ -326,7 +348,7 @@ module Linguist
!!name.match(/composer\.lock/)
end
# Internal: Is the blob a generated by Zephir
# Internal: Is the blob generated by Zephir?
#
# Returns true or false.
def generated_by_zephir?
@@ -426,5 +448,46 @@ module Linguist
return false unless lines.count > 1
return lines[0].start_with?("// This is a generated file. Not intended for manual editing.")
end
# Internal: Is this a roxygen2-generated file?
#
# A roxygen2-generated file typically contains:
# % Generated by roxygen2: do not edit by hand
# on the first line.
#
# Returns true or false
def generated_roxygen2?
return false unless extname == '.Rd'
return false unless lines.count > 1
return lines[0].include?("% Generated by roxygen2: do not edit by hand")
end
# Internal: Is this a Jison-generated file?
#
# Jison-generated parsers typically contain:
# /* parser generated by jison
# on the first line.
#
# Jison-generated lexers typically contain:
# /* generated by jison-lex
# on the first line.
#
# Returns true or false
def generated_jison?
return false unless extname == '.js'
return false unless lines.count > 1
return lines[0].start_with?("/* parser generated by jison ") ||
lines[0].start_with?("/* generated by jison-lex ")
end
# Internal: Is the blob a generated yarn lockfile?
#
# Returns true or false.
def generated_yarn_lock?
return false unless name.match(/yarn\.lock/)
return false unless lines.count > 0
return lines[0].include?("# THIS IS AN AUTOGENERATED FILE")
end
end
end
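
The new checks slot into the existing `generated?` chain above. A sketch of exercising them directly through the class-level helper documented at the top of the file (filename plus data, so no file on disk is required; the paths and header line are illustrative):

```ruby
require 'linguist'

# The path alone trips the new go_vendor? check.
Linguist::Generated.generated?("vendor/github.com/pkg/errors/errors.go", "package errors\n")
# => true

# The first line trips the new generated_yarn_lock? check.
Linguist::Generated.generated?("yarn.lock", "# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.\n")
# => true
```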


@@ -110,6 +110,12 @@ module Linguist
end
end
disambiguate ".cls" do |data|
if /\\\w+{/.match(data)
Language["TeX"]
end
end
disambiguate ".cs" do |data|
if /![\w\s]+methodsFor: /.match(data)
Language["Smalltalk"]
@@ -119,11 +125,18 @@ module Linguist
end
disambiguate ".d" do |data|
if /^module /.match(data)
# see http://dlang.org/spec/grammar
# ModuleDeclaration | ImportDeclaration | FuncDeclaration | unittest
if /^module\s+[\w.]*\s*;|import\s+[\w\s,.:]*;|\w+\s+\w+\s*\(.*\)(?:\(.*\))?\s*{[^}]*}|unittest\s*(?:\(.*\))?\s*{[^}]*}/.match(data)
Language["D"]
elsif /^((dtrace:::)?BEGIN|provider |#pragma (D (option|attributes)|ident)\s)/.match(data)
# see http://dtrace.org/guide/chp-prog.html, http://dtrace.org/guide/chp-profile.html, http://dtrace.org/guide/chp-opt.html
elsif /^(\w+:\w*:\w*:\w*|BEGIN|END|provider\s+|(tick|profile)-\w+\s+{[^}]*}|#pragma\s+D\s+(option|attributes|depends_on)\s|#pragma\s+ident\s)/.match(data)
Language["DTrace"]
elsif /(\/.*:( .* \\)$| : \\$|^ : |: \\$)/.match(data)
# path/target : dependency \
# target : \
# : dependency
# path/file.ext1 : some/path/../file.ext2
elsif /([\/\\].*:\s+.*\s\\$|: \\$|^ : |^[\w\s\/\\.]+\w+\.\w+\s*:\s+[\w\s\/\\.]+\w+\.\w+)/.match(data)
Language["Makefile"]
end
end
@@ -152,7 +165,7 @@ module Linguist
elsif data.include?("flowop")
Language["Filebench WML"]
elsif fortran_rx.match(data)
Language["FORTRAN"]
Language["Fortran"]
end
end
@@ -160,7 +173,7 @@ module Linguist
if /^: /.match(data)
Language["Forth"]
elsif fortran_rx.match(data)
Language["FORTRAN"]
Language["Fortran"]
end
end
@@ -202,6 +215,8 @@ module Linguist
disambiguate ".inc" do |data|
if /^<\?(?:php)?/.match(data)
Language["PHP"]
elsif /^\s*#(declare|local|macro|while)\s/.match(data)
Language["POV-Ray SDL"]
end
end
@@ -211,7 +226,7 @@ module Linguist
elsif /^(%[%{}]xs|<.*>)/.match(data)
Language["Lex"]
elsif /^\.[a-z][a-z](\s|$)/i.match(data)
Language["Groff"]
Language["Roff"]
elsif /^\((de|class|rel|code|data|must)\s/.match(data)
Language["PicoLisp"]
end
@@ -242,7 +257,7 @@ module Linguist
Language["MUF"]
elsif /^\s*;/.match(data)
Language["M"]
elsif /^\s*\(\*/.match(data)
elsif /\*\)$/.match(data)
Language["Mathematica"]
elsif /^\s*%/.match(data)
Language["Matlab"]
@@ -252,10 +267,12 @@ module Linguist
end
disambiguate ".md" do |data|
if /^[-a-z0-9=#!\*\[|]/i.match(data)
if /(^[-a-z0-9=#!\*\[|>])|<\//i.match(data) || data.empty?
Language["Markdown"]
elsif /^(;;|\(define_)/.match(data)
Language["GCC machine description"]
Language["GCC Machine Description"]
else
Language["Markdown"]
end
end
@@ -270,7 +287,7 @@ module Linguist
disambiguate ".mod" do |data|
if data.include?('<!ENTITY ')
Language["XML"]
elsif /MODULE\s\w+\s*;/i.match(data) || /^\s*END \w+;$/i.match(data)
elsif /^\s*MODULE [\w\.]+;/i.match(data) || /^\s*END [\w\.]+;/i.match(data)
Language["Modula-2"]
else
[Language["Linux Kernel Module"], Language["AMPL"]]
@@ -279,9 +296,9 @@ module Linguist
disambiguate ".ms" do |data|
if /^[.'][a-z][a-z](\s|$)/i.match(data)
Language["Groff"]
Language["Roff"]
elsif /(?<!\S)\.(include|globa?l)\s/.match(data) || /(?<!\/\*)(\A|\n)\s*\.[A-Za-z]/.match(data.gsub(/"([^\\"]|\\.)*"|'([^\\']|\\.)*'|\\\s*(?:--.*)?\n/, ""))
Language["GAS"]
Language["Unix Assembly"]
else
Language["MAXScript"]
end
@@ -289,7 +306,7 @@ module Linguist
disambiguate ".n" do |data|
if /^[.']/.match(data)
Language["Groff"]
Language["Roff"]
elsif /^(module|namespace|using)\s/.match(data)
Language["Nemerle"]
end
@@ -318,7 +335,7 @@ module Linguist
end
disambiguate ".pl" do |data|
if /^[^#]+:-/.match(data)
if /^[^#]*:-/.match(data)
Language["Prolog"]
elsif /use strict|use\s+v?5\./.match(data)
Language["Perl"]
@@ -327,16 +344,16 @@ module Linguist
end
end
disambiguate ".pm", ".t" do |data|
if /use strict|use\s+v?5\./.match(data)
Language["Perl"]
elsif /^(use v6|(my )?class|module)/.match(data)
disambiguate ".pm" do |data|
if /^\s*(?:use\s+v6\s*;|(?:\bmy\s+)?class|module)\b/.match(data)
Language["Perl6"]
elsif /\buse\s+(?:strict\b|v?5\.)/.match(data)
Language["Perl"]
end
end
disambiguate ".pod" do |data|
if /^=\w+$/.match(data)
if /^=\w+\b/.match(data)
Language["Pod"]
else
Language["Perl"]
@@ -375,7 +392,7 @@ module Linguist
if /^\.!|^\.end lit(?:eral)?\b/i.match(data)
Language["RUNOFF"]
elsif /^\.\\" /.match(data)
Language["Groff"]
Language["Roff"]
end
end
@@ -426,10 +443,12 @@ module Linguist
end
disambiguate ".t" do |data|
if /^\s*%|^\s*var\s+\w+\s*:\s*\w+/.match(data)
if /^\s*%[ \t]+|^\s*var\s+\w+\s*:=\s*\w+/.match(data)
Language["Turing"]
elsif /^\s*use\s+v6\s*;/.match(data)
elsif /^\s*(?:use\s+v6\s*;|\bmodule\b|\b(?:my\s+)?class\b)/.match(data)
Language["Perl6"]
elsif /\buse\s+(?:strict\b|v?5\.)/.match(data)
Language["Perl"]
end
end
@@ -457,5 +476,13 @@ module Linguist
Language["Scilab"]
end
end
disambiguate ".tsx" do |data|
if /^\s*(import.+(from\s+|require\()['"]react|\/\/\/\s*<reference\s)/.match(data)
Language["TypeScript"]
elsif /^\s*<\?xml\s+version/i.match(data)
Language["XML"]
end
end
end
end
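
Each `disambiguate` block only sees file content, so the new rules are easy to probe with a plain string; for example, the first alternative of the new `.tsx` pattern fires on a typical React import, while the XML branch catches a prolog:

```ruby
tsx = 'import * as React from "react";'
xml = '<?xml version="1.0" encoding="UTF-8"?>'

!!/^\s*(import.+(from\s+|require\()['"]react|\/\/\/\s*<reference\s)/.match(tsx)  # => true
!!/^\s*<\?xml\s+version/i.match(xml)                                             # => true
```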


@@ -11,6 +11,7 @@ require 'linguist/samples'
require 'linguist/file_blob'
require 'linguist/blob_helper'
require 'linguist/strategy/filename'
require 'linguist/strategy/extension'
require 'linguist/strategy/modeline'
require 'linguist/shebang'
@@ -90,17 +91,6 @@ module Linguist
language
end
# Public: Detects the Language of the blob.
#
# blob - an object that includes the Linguist `BlobHelper` interface;
# see Linguist::LazyBlob and Linguist::FileBlob for examples
#
# Returns Language or nil.
def self.detect(blob)
warn "[DEPRECATED] `Linguist::Language.detect` is deprecated. Use `Linguist.detect`. #{caller[0]}"
Linguist.detect(blob)
end
# Public: Get all Languages
#
# Returns an Array of Languages
@@ -140,46 +130,46 @@ module Linguist
# Public: Look up Languages by filename.
#
# The behaviour of this method recently changed.
# See the second example below.
#
# filename - The path String.
#
# Examples
#
# Language.find_by_filename('Cakefile')
# # => [#<Language name="CoffeeScript">]
# Language.find_by_filename('foo.rb')
# # => [#<Language name="Ruby">]
# # => []
#
# Returns all matching Languages or [] if none were found.
def self.find_by_filename(filename)
basename = File.basename(filename)
# find the first extension with language definitions
extname = FileBlob.new(filename).extensions.detect do |e|
!@extension_index[e].empty?
end
(@filename_index[basename] + @extension_index[extname]).compact.uniq
@filename_index[basename]
end
# Public: Look up Languages by file extension.
#
# extname - The extension String.
# The behaviour of this method recently changed.
# See the second example below.
#
# filename - The path String.
#
# Examples
#
# Language.find_by_extension('.rb')
# Language.find_by_extension('dummy.rb')
# # => [#<Language name="Ruby">]
#
# Language.find_by_extension('rb')
# # => [#<Language name="Ruby">]
# # => []
#
# Returns all matching Languages or [] if none were found.
def self.find_by_extension(extname)
extname = ".#{extname}" unless extname.start_with?(".")
@extension_index[extname.downcase]
end
def self.find_by_extension(filename)
# find the first extension with language definitions
extname = FileBlob.new(filename.downcase).extensions.detect do |e|
!@extension_index[e].empty?
end
# DEPRECATED
def self.find_by_shebang(data)
@interpreter_index[Shebang.interpreter(data)]
@extension_index[extname]
end
# Public: Look up Languages by interpreter.
@@ -225,7 +215,14 @@ module Linguist
# Returns the Language or nil if none was found.
def self.[](name)
return nil if name.to_s.empty?
name && (@index[name.downcase] || @index[name.split(',').first.downcase])
lang = @index[name.downcase]
return lang if lang
name = name.split(',').first
return nil if name.to_s.empty?
@index[name.downcase]
end
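
The rewrite keeps plain lookups identical while guarding the comma-separated fallback against empty results; a sketch:

```ruby
Linguist::Language["Ruby"]     # => #<Language name="Ruby">
Linguist::Language["ruby, c"]  # => #<Language name="Ruby"> (falls back to the first name)
Linguist::Language[""]         # => nil
```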
# Public: A List of popular languages
@@ -259,17 +256,6 @@ module Linguist
@colors ||= all.select(&:color).sort_by { |lang| lang.name.downcase }
end
# Public: A List of languages compatible with Ace.
#
# TODO: Remove this method in a 5.x release. Every language now needs an ace_mode
# key, so this function isn't doing anything unique anymore.
#
# Returns an Array of Languages.
def self.ace_modes
warn "This method will be deprecated in a future 5.x release. Every language now has an `ace_mode` set."
@ace_modes ||= all.select(&:ace_mode).sort_by { |lang| lang.name.downcase }
end
# Internal: Initialize a new Language
#
# attributes - A hash of attributes
@@ -286,7 +272,7 @@ module Linguist
@color = attributes[:color]
# Set aliases
@aliases = [default_alias_name] + (attributes[:aliases] || [])
@aliases = [default_alias] + (attributes[:aliases] || [])
# Load the TextMate scope name or try to guess one
@tm_scope = attributes[:tm_scope] || begin
@@ -301,12 +287,10 @@ module Linguist
@ace_mode = attributes[:ace_mode]
@codemirror_mode = attributes[:codemirror_mode]
@codemirror_mime_type = attributes[:codemirror_mime_type]
@wrap = attributes[:wrap] || false
# Set legacy search term
@search_term = attributes[:search_term] || default_alias_name
# Set the language_id
# Set the language_id
@language_id = attributes[:language_id]
# Set extensions or default to [].
@@ -360,17 +344,6 @@ module Linguist
# Returns an Array of String names
attr_reader :aliases
# Deprecated: Get code search term
#
# Examples
#
# # => "ruby"
# # => "python"
# # => "perl"
#
# Returns the name String
attr_reader :search_term
# Public: Get language_id (used in GitHub search)
#
# Examples
@@ -398,7 +371,10 @@ module Linguist
# Returns a String name or nil
attr_reader :ace_mode
# Public: Get Codemirror mode
# Public: Get CodeMirror mode
#
# Maps to a directory in the `mode/` source code.
# https://github.com/codemirror/CodeMirror/tree/master/mode
#
# Examples
#
@@ -409,6 +385,17 @@ module Linguist
# Returns a String name or nil
attr_reader :codemirror_mode
# Public: Get CodeMirror MIME type mode
#
# Examples
#
# # => "nil"
# # => "text/x-javascript"
# # => "text/x-csrc"
#
# Returns a String name or nil
attr_reader :codemirror_mime_type
# Public: Should language lines be wrapped
#
# Returns true or false
@@ -441,22 +428,6 @@ module Linguist
# Returns the extensions Array
attr_reader :filenames
# Deprecated: Get primary extension
#
# Defaults to the first extension but can be overridden
# in the languages.yml.
#
# The primary extension can not be nil. Tests should verify this.
#
# This method is only used by app/helpers/gists_helper.rb for creating
# the language dropdown. It really should be using `name` instead.
# Would like to drop primary extension.
#
# Returns the extension String.
def primary_extension
extensions.first
end
# Public: Get URL escaped name.
#
# Examples
@@ -470,12 +441,13 @@ module Linguist
EscapeUtils.escape_url(name).gsub('+', '%20')
end
# Internal: Get default alias name
# Public: Get default alias name
#
# Returns the alias name String
def default_alias_name
def default_alias
name.downcase.gsub(/\s/, '-')
end
alias_method :default_alias_name, :default_alias
# Public: Get Language group
#
@@ -586,10 +558,10 @@ module Linguist
:tm_scope => options['tm_scope'],
:ace_mode => options['ace_mode'],
:codemirror_mode => options['codemirror_mode'],
:codemirror_mime_type => options['codemirror_mime_type'],
:wrap => options['wrap'],
:group_name => options['group'],
:searchable => options.fetch('searchable', true),
:search_term => options['search_term'],
:language_id => options['language_id'],
:extensions => Array(options['extensions']),
:interpreters => options['interpreters'].sort,

File diff suppressed because it is too large


@@ -26,4 +26,4 @@
- Shell
- Swift
- TeX
- VimL
- Vim script


@@ -0,0 +1,10 @@
module Linguist
module Strategy
# Detects language based on extension
class Extension
def self.call(blob, _)
Language.find_by_extension(blob.name.to_s)
end
end
end
end


@@ -1,9 +1,10 @@
module Linguist
module Strategy
# Detects language based on filename and/or extension
# Detects language based on filename
class Filename
def self.call(blob, _)
Language.find_by_filename(blob.name.to_s)
name = blob.name.to_s
Language.find_by_filename(name)
end
end
end
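
Both the new `Extension` strategy and the narrowed `Filename` strategy share the `call(blob, _)` interface used by the `STRATEGIES` chain, and neither reads the blob's contents, so they can be probed without a real file on disk; a sketch:

```ruby
require 'linguist'

ext_blob  = Linguist::FileBlob.new("dummy.rb")   # hypothetical paths; never read
name_blob = Linguist::FileBlob.new("Rakefile")

Linguist::Strategy::Extension.call(ext_blob, nil)  # => [#<Language name="Ruby">]
Linguist::Strategy::Filename.call(name_blob, nil)  # => [#<Language name="Ruby">]
```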


@@ -15,6 +15,9 @@
# Dependencies
- ^[Dd]ependencies/
# Distributions
- (^|/)dist/
# C deps
# https://github.com/joyent/node
- ^deps/
@@ -47,6 +50,9 @@
# Go dependencies
- Godeps/_workspace/
# GNU indent profiles
- .indent.pro
# Minified JavaScript and CSS
- (\.|-)min\.(js|css)$
@@ -165,7 +171,7 @@
# Chart.js
- (^|/)Chart\.js$
# Codemirror
# CodeMirror
- (^|/)[Cc]ode[Mm]irror/(\d+\.\d+/)?(lib|mode|theme|addon|keymap|demo)
# SyntaxHighlighter - http://alexgorbatchev.com/
@@ -229,6 +235,15 @@
# Fabric
- Fabric.framework/
# BuddyBuild
- BuddyBuildSDK.framework/
# Realm
- Realm.framework
# RealmSwift
- RealmSwift.framework
# git config files
- gitattributes$
- gitignore$
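
These entries are regular expressions matched against repository paths; the new `dist/` rule, for instance, excludes distribution artifacts wherever they sit in the tree:

```ruby
%r{(^|/)dist/} =~ "dist/bundle.js"       # => 0 (matches at the root)
%r{(^|/)dist/} =~ "packages/app/dist/x"  # => 12 (matches nested)
%r{(^|/)dist/} =~ "distribution/notes"   # => nil (requires the slash)
```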


@@ -1,3 +1,3 @@
module Linguist
VERSION = "4.8.13"
VERSION = "5.0.9"
end


@@ -1,7 +1,7 @@
{
"repository": "https://github.com/github/linguist",
"dependencies": {
"season": "~>5.0"
"season": "~>5.4"
},
"license": "MIT"
}

samples/ABNF/toml.abnf

@@ -0,0 +1,190 @@
; Source: https://github.com/toml-lang/toml
; License: MIT
;; This is an attempt to define TOML in ABNF according to the grammar defined
;; in RFC 4234 (http://www.ietf.org/rfc/rfc4234.txt).
;; TOML
toml = expression *( newline expression )
expression = (
ws /
ws comment /
ws keyval ws [ comment ] /
ws table ws [ comment ]
)
;; Newline
newline = (
%x0A / ; LF
%x0D.0A ; CRLF
)
newlines = 1*newline
;; Whitespace
ws = *(
%x20 / ; Space
%x09 ; Horizontal tab
)
;; Comment
comment-start-symbol = %x23 ; #
non-eol = %x09 / %x20-10FFFF
comment = comment-start-symbol *non-eol
;; Key-Value pairs
keyval-sep = ws %x3D ws ; =
keyval = key keyval-sep val
key = unquoted-key / quoted-key
unquoted-key = 1*( ALPHA / DIGIT / %x2D / %x5F ) ; A-Z / a-z / 0-9 / - / _
quoted-key = quotation-mark 1*basic-char quotation-mark ; See Basic Strings
val = integer / float / string / boolean / date-time / array / inline-table
;; Table
table = std-table / array-table
;; Standard Table
std-table-open = %x5B ws ; [ Left square bracket
std-table-close = ws %x5D ; ] Right square bracket
table-key-sep = ws %x2E ws ; . Period
std-table = std-table-open key *( table-key-sep key) std-table-close
;; Array Table
array-table-open = %x5B.5B ws ; [[ Double left square bracket
array-table-close = ws %x5D.5D ; ]] Double right square bracket
array-table = array-table-open key *( table-key-sep key) array-table-close
;; Integer
integer = [ minus / plus ] int
minus = %x2D ; -
plus = %x2B ; +
digit1-9 = %x31-39 ; 1-9
underscore = %x5F ; _
int = DIGIT / digit1-9 1*( DIGIT / underscore DIGIT )
;; Float
float = integer ( frac / frac exp / exp )
zero-prefixable-int = DIGIT *( DIGIT / underscore DIGIT )
frac = decimal-point zero-prefixable-int
decimal-point = %x2E ; .
exp = e integer
e = %x65 / %x45 ; e E
;; String
string = basic-string / ml-basic-string / literal-string / ml-literal-string
;; Basic String
basic-string = quotation-mark *basic-char quotation-mark
quotation-mark = %x22 ; "
basic-char = basic-unescaped / escaped
escaped = escape ( %x22 / ; " quotation mark U+0022
%x5C / ; \ reverse solidus U+005C
%x2F / ; / solidus U+002F
%x62 / ; b backspace U+0008
%x66 / ; f form feed U+000C
%x6E / ; n line feed U+000A
%x72 / ; r carriage return U+000D
%x74 / ; t tab U+0009
%x75 4HEXDIG / ; uXXXX U+XXXX
%x55 8HEXDIG ) ; UXXXXXXXX U+XXXXXXXX
basic-unescaped = %x20-21 / %x23-5B / %x5D-10FFFF
escape = %x5C ; \
;; Multiline Basic String
ml-basic-string-delim = quotation-mark quotation-mark quotation-mark
ml-basic-string = ml-basic-string-delim ml-basic-body ml-basic-string-delim
ml-basic-body = *( ml-basic-char / newline / ( escape newline ))
ml-basic-char = ml-basic-unescaped / escaped
ml-basic-unescaped = %x20-5B / %x5D-10FFFF
;; Literal String
literal-string = apostraphe *literal-char apostraphe
apostraphe = %x27 ; ' Apostrophe
literal-char = %x09 / %x20-26 / %x28-10FFFF
;; Multiline Literal String
ml-literal-string-delim = apostraphe apostraphe apostraphe
ml-literal-string = ml-literal-string-delim ml-literal-body ml-literal-string-delim
ml-literal-body = *( ml-literal-char / newline )
ml-literal-char = %x09 / %x20-10FFFF
;; Boolean
boolean = true / false
true = %x74.72.75.65 ; true
false = %x66.61.6C.73.65 ; false
;; Datetime (as defined in RFC 3339)
date-fullyear = 4DIGIT
date-month = 2DIGIT ; 01-12
date-mday = 2DIGIT ; 01-28, 01-29, 01-30, 01-31 based on month/year
time-hour = 2DIGIT ; 00-23
time-minute = 2DIGIT ; 00-59
time-second = 2DIGIT ; 00-58, 00-59, 00-60 based on leap second rules
time-secfrac = "." 1*DIGIT
time-numoffset = ( "+" / "-" ) time-hour ":" time-minute
time-offset = "Z" / time-numoffset
partial-time = time-hour ":" time-minute ":" time-second [time-secfrac]
full-date = date-fullyear "-" date-month "-" date-mday
full-time = partial-time time-offset
date-time = full-date "T" full-time
;; Array
array-open = %x5B ws ; [
array-close = ws %x5D ; ]
array = array-open array-values array-close
array-values = [ val [ array-sep ] [ ( comment newlines) / newlines ] /
val array-sep [ ( comment newlines) / newlines ] array-values ]
array-sep = ws %x2C ws ; , Comma
;; Inline Table
inline-table-open = %x7B ws ; {
inline-table-close = ws %x7D ; }
inline-table-sep = ws %x2C ws ; , Comma
inline-table = inline-table-open inline-table-keyvals inline-table-close
inline-table-keyvals = [ inline-table-keyvals-non-empty ]
inline-table-keyvals-non-empty = key keyval-sep val /
key keyval-sep val inline-table-sep inline-table-keyvals-non-empty
;; Built-in ABNF terms, reproduced here for clarity
; ALPHA = %x41-5A / %x61-7A ; A-Z / a-z
; DIGIT = %x30-39 ; 0-9
; HEXDIG = DIGIT / "A" / "B" / "C" / "D" / "E" / "F"


@@ -0,0 +1,46 @@
#include <iostream>
#define YYCTYPE unsigned char
#define YYCURSOR cursor
#define YYLIMIT cursor
#define YYMARKER marker
#define YYFILL(n)
bool scan(const char *text)
{
YYCTYPE *start = (YYCTYPE *)text;
YYCTYPE *cursor = (YYCTYPE *)text;
YYCTYPE *marker = (YYCTYPE *)text;
next:
YYCTYPE *token = cursor;
/*!re2c
'(This file must be converted with BinHex 4.0)'
{
if (token == start || *(token - 1) == '\n')
return true; else goto next;
}
[\001-\377]
{ goto next; }
[\000]
{ return false; }
*/
return false;
}
#define do_scan(str, expect) \
res = scan(str) == expect ? 0 : 1; \
std::cerr << str << "\t-\t" << (res ? "fail" : "ok") << std::endl; \
result += res
/*!max:re2c */
int main(int,void**)
{
int res, result = 0;
do_scan("(This file must be converted with BinHex 4.0)", 1);
do_scan("x(This file must be converted with BinHex 4.0)", 0);
do_scan("(This file must be converted with BinHex 4.0)x", 1);
do_scan("x(This file must be converted with BinHex 4.0)x", 0);
return result;
}

samples/C++/cnokw.re

@@ -0,0 +1,239 @@
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#define ADDEQ 257
#define ANDAND 258
#define ANDEQ 259
#define ARRAY 260
#define ASM 261
#define AUTO 262
#define BREAK 263
#define CASE 264
#define CHAR 265
#define CONST 266
#define CONTINUE 267
#define DECR 268
#define DEFAULT 269
#define DEREF 270
#define DIVEQ 271
#define DO 272
#define DOUBLE 273
#define ELLIPSIS 274
#define ELSE 275
#define ENUM 276
#define EQL 277
#define EXTERN 278
#define FCON 279
#define FLOAT 280
#define FOR 281
#define FUNCTION 282
#define GEQ 283
#define GOTO 284
#define ICON 285
#define ID 286
#define IF 287
#define INCR 288
#define INT 289
#define LEQ 290
#define LONG 291
#define LSHIFT 292
#define LSHIFTEQ 293
#define MODEQ 294
#define MULEQ 295
#define NEQ 296
#define OREQ 297
#define OROR 298
#define POINTER 299
#define REGISTER 300
#define RETURN 301
#define RSHIFT 302
#define RSHIFTEQ 303
#define SCON 304
#define SHORT 305
#define SIGNED 306
#define SIZEOF 307
#define STATIC 308
#define STRUCT 309
#define SUBEQ 310
#define SWITCH 311
#define TYPEDEF 312
#define UNION 313
#define UNSIGNED 314
#define VOID 315
#define VOLATILE 316
#define WHILE 317
#define XOREQ 318
#define EOI 319
typedef unsigned int uint;
typedef unsigned char uchar;
#define BSIZE 8192
#define YYCTYPE uchar
#define YYCURSOR cursor
#define YYLIMIT s->lim
#define YYMARKER s->ptr
#define YYFILL(n) {cursor = fill(s, cursor);}
#define RET(i) {s->cur = cursor; return i;}
typedef struct Scanner {
int fd;
uchar *bot, *tok, *ptr, *cur, *pos, *lim, *top, *eof;
uint line;
} Scanner;
uchar *fill(Scanner *s, uchar *cursor){
if(!s->eof){
uint cnt = s->tok - s->bot;
if(cnt){
memcpy(s->bot, s->tok, s->lim - s->tok);
s->tok = s->bot;
s->ptr -= cnt;
cursor -= cnt;
s->pos -= cnt;
s->lim -= cnt;
}
if((s->top - s->lim) < BSIZE){
uchar *buf = (uchar*) malloc(((s->lim - s->bot) + BSIZE)*sizeof(uchar));
memcpy(buf, s->tok, s->lim - s->tok);
s->tok = buf;
s->ptr = &buf[s->ptr - s->bot];
cursor = &buf[cursor - s->bot];
s->pos = &buf[s->pos - s->bot];
s->lim = &buf[s->lim - s->bot];
s->top = &s->lim[BSIZE];
free(s->bot);
s->bot = buf;
}
if((cnt = read(s->fd, (char*) s->lim, BSIZE)) != BSIZE){
s->eof = &s->lim[cnt]; *(s->eof)++ = '\n';
}
s->lim += cnt;
}
return cursor;
}
int scan(Scanner *s){
uchar *cursor = s->cur;
std:
s->tok = cursor;
/*!re2c
any = [\000-\377];
O = [0-7];
D = [0-9];
L = [a-zA-Z_];
H = [a-fA-F0-9];
E = [Ee] [+-]? D+;
FS = [fFlL];
IS = [uUlL]*;
ESC = [\\] ([abfnrtv?'"\\] | "x" H+ | O+);
*/
/*!re2c
"/*" { goto comment; }
L (L|D)* { RET(ID); }
("0" [xX] H+ IS?) | ("0" D+ IS?) | (D+ IS?) |
(['] (ESC|any\[\n\\'])* ['])
{ RET(ICON); }
(D+ E FS?) | (D* "." D+ E? FS?) | (D+ "." D* E? FS?)
{ RET(FCON); }
(["] (ESC|any\[\n\\"])* ["])
{ RET(SCON); }
"..." { RET(ELLIPSIS); }
">>=" { RET(RSHIFTEQ); }
"<<=" { RET(LSHIFTEQ); }
"+=" { RET(ADDEQ); }
"-=" { RET(SUBEQ); }
"*=" { RET(MULEQ); }
"/=" { RET(DIVEQ); }
"%=" { RET(MODEQ); }
"&=" { RET(ANDEQ); }
"^=" { RET(XOREQ); }
"|=" { RET(OREQ); }
">>" { RET(RSHIFT); }
"<<" { RET(LSHIFT); }
"++" { RET(INCR); }
"--" { RET(DECR); }
"->" { RET(DEREF); }
"&&" { RET(ANDAND); }
"||" { RET(OROR); }
"<=" { RET(LEQ); }
">=" { RET(GEQ); }
"==" { RET(EQL); }
"!=" { RET(NEQ); }
";" { RET(';'); }
"{" { RET('{'); }
"}" { RET('}'); }
"," { RET(','); }
":" { RET(':'); }
"=" { RET('='); }
"(" { RET('('); }
")" { RET(')'); }
"[" { RET('['); }
"]" { RET(']'); }
"." { RET('.'); }
"&" { RET('&'); }
"!" { RET('!'); }
"~" { RET('~'); }
"-" { RET('-'); }
"+" { RET('+'); }
"*" { RET('*'); }
"/" { RET('/'); }
"%" { RET('%'); }
"<" { RET('<'); }
">" { RET('>'); }
"^" { RET('^'); }
"|" { RET('|'); }
"?" { RET('?'); }
[ \t\v\f]+ { goto std; }
"\n"
{
if(cursor == s->eof) RET(EOI);
s->pos = cursor; s->line++;
goto std;
}
any
{
printf("unexpected character: %c\n", *s->tok);
goto std;
}
*/
comment:
/*!re2c
"*/" { goto std; }
"\n"
{
if(cursor == s->eof) RET(EOI);
s->tok = s->pos = cursor; s->line++;
goto comment;
}
any { goto comment; }
*/
}
main(){
Scanner in;
int t;
memset((char*) &in, 0, sizeof(in));
in.fd = 0;
while((t = scan(&in)) != EOI){
/*
printf("%d\t%.*s\n", t, in.cur - in.tok, in.tok);
printf("%d\n", t);
*/
}
close(in.fd);
}

samples/C++/cvsignore.re

@@ -0,0 +1,63 @@
#define YYFILL(n) if (cursor >= limit) break;
#define YYCTYPE char
#define YYCURSOR cursor
#define YYLIMIT limit
#define YYMARKER marker
/*!re2c
any = (.|"\n");
value = (":" (.\"$")+)?;
cvsdat = "Date";
cvsid = "Id";
cvslog = "Log";
cvsrev = "Revision";
cvssrc = "Source";
*/
#define APPEND(text) \
append(output, outsize, text, sizeof(text) - sizeof(YYCTYPE))
inline void append(YYCTYPE *output, size_t & outsize, const YYCTYPE * text, size_t len)
{
memcpy(output + outsize, text, len);
outsize += (len / sizeof(YYCTYPE));
}
void scan(YYCTYPE *pText, size_t *pSize, int *pbChanged)
{
// rule
// scan lines
// find $ in lines
// compact $<keyword>: .. $ to $<keyword>$
YYCTYPE *output;
const YYCTYPE *cursor, *limit, *marker;
cursor = marker = output = *pText;
size_t insize = *pSize;
size_t outsize = 0;
limit = cursor + insize;
while(1) {
loop:
/*!re2c
"$" cvsdat value "$" { APPEND(L"$" L"Date$"); goto loop; }
"$" cvsid value "$" { APPEND(L"$" L"Id$"); goto loop; }
"$" cvslog value "$" { APPEND(L"$" L"Log$"); goto loop; }
"$" cvsrev value "$" { APPEND(L"$" L"Revision$"); goto loop; }
"$" cvssrc value "$" { APPEND(L"$" L"Source$"); goto loop; }
any { output[outsize++] = cursor[-1]; if (cursor >= limit) break; goto loop; }
*/
}
output[outsize] = '\0';
// set the new size
*pSize = outsize;
*pbChanged = (insize == outsize) ? 0 : 1;
}

samples/C++/simple.re

@@ -0,0 +1,13 @@
#define NULL ((char*) 0)
char *scan(char *p){
char *q;
#define YYCTYPE char
#define YYCURSOR p
#define YYLIMIT p
#define YYMARKER q
#define YYFILL(n)
/*!re2c
[0-9]+ {return YYCURSOR;}
[\000-\377] {return NULL;}
*/
}

samples/CSON/base.cson

@@ -0,0 +1,72 @@
'atom-text-editor':
# Platform Bindings
'home': 'editor:move-to-first-character-of-line'
'end': 'editor:move-to-end-of-screen-line'
'shift-home': 'editor:select-to-first-character-of-line'
'shift-end': 'editor:select-to-end-of-line'
'atom-text-editor:not([mini])':
# Atom Specific
'ctrl-C': 'editor:copy-path'
# Sublime Parity
'tab': 'editor:indent'
'enter': 'editor:newline'
'shift-tab': 'editor:outdent-selected-rows'
'ctrl-K': 'editor:delete-line'
'.select-list atom-text-editor[mini]':
'enter': 'core:confirm'
'.tool-panel.panel-left, .tool-panel.panel-right':
'escape': 'tool-panel:unfocus'
'atom-text-editor !important, atom-text-editor[mini] !important':
'escape': 'editor:consolidate-selections'
# allow standard input fields to work correctly
'body .native-key-bindings':
'tab': 'core:focus-next'
'shift-tab': 'core:focus-previous'
'enter': 'native!'
'backspace': 'native!'
'shift-backspace': 'native!'
'delete': 'native!'
'up': 'native!'
'down': 'native!'
'shift-up': 'native!'
'shift-down': 'native!'
'alt-up': 'native!'
'alt-down': 'native!'
'alt-shift-up': 'native!'
'alt-shift-down': 'native!'
'cmd-up': 'native!'
'cmd-down': 'native!'
'cmd-shift-up': 'native!'
'cmd-shift-down': 'native!'
'ctrl-up': 'native!'
'ctrl-down': 'native!'
'ctrl-shift-up': 'native!'
'ctrl-shift-down': 'native!'
'left': 'native!'
'right': 'native!'
'shift-left': 'native!'
'shift-right': 'native!'
'alt-left': 'native!'
'alt-right': 'native!'
'alt-shift-left': 'native!'
'alt-shift-right': 'native!'
'cmd-left': 'native!'
'cmd-right': 'native!'
'cmd-shift-left': 'native!'
'cmd-shift-right': 'native!'
'ctrl-left': 'native!'
'ctrl-right': 'native!'
'ctrl-shift-left': 'native!'
'ctrl-shift-right': 'native!'
'ctrl-b': 'native!'
'ctrl-f': 'native!'
'ctrl-F': 'native!'
'ctrl-B': 'native!'
'ctrl-h': 'native!'
'ctrl-d': 'native!'

samples/CSON/config.cson

@@ -0,0 +1,59 @@
directoryIcons:
Atom:
icon: "atom"
match: /^\.atom$/
colour: "dark-green"
Bower:
icon: "bower"
match: /^bower[-_]components$/
colour: "bower"
Dropbox:
icon: "dropbox"
match: /^(?:Dropbox|\.dropbox\.cache)$/
colour: "medium-blue"
Git:
icon: "git"
match: /^\.git$/
GitHub:
icon: "github"
match: /^\.github$/
Meteor:
icon: "meteor"
match: /^\.meteor$/
NodeJS:
icon: "node"
match: /^node_modules$/
colour: "medium-green"
Package:
icon: "package"
match: /^\.bundle$/i
TextMate:
icon: "textmate"
match: ".tmBundle"
fileIcons:
ABAP:
icon: "abap"
scope: "abp"
match: ".abap"
colour: "medium-orange"
ActionScript: # Or Flash-related
icon: "as"
match: [
[".swf", "medium-blue"]
[".as", "medium-red", scope: /\.(?:flex-config|actionscript(?:\.\d+)?)$/i, alias: /ActionScript\s?3|as3/i]
[".jsfl", "auto-yellow"]
[".swc", "dark-red"]
]

samples/CSON/ff-sfd.cson

@@ -0,0 +1,108 @@
name: "Spline Font Database"
scopeName: "text.sfd"
fileTypes: ["sfd"]
firstLineMatch: "^SplineFontDB: [\\d.]+"
patterns: [include: "#main"]
repository:
main:
patterns: [
{include: "#punctuation"}
{include: "#private"}
{include: "#image"}
{include: "#pickleData"}
{include: "#sections"}
{include: "#copyright"}
{include: "#property"}
{include: "#control"}
{include: "#address"}
{include: "#encoding"}
{include: "source.fontforge#shared"}
{include: "#colour"}
]
punctuation:
patterns: [
{match: "<|>", name: "punctuation.definition.brackets.angle.sfd"}
{match: "[{}]", name: "punctuation.definition.brackets.curly.sfd"}
]
private:
name: "meta.section.private.sfd"
begin: "^BeginPrivate(?=:)"
end: "^EndPrivate\\b"
beginCaptures: 0: name: "keyword.control.begin.private.sfd"
endCaptures: 0: name: "keyword.control.end.private.sfd"
patterns: [
{match: "^\\S+", name: "entity.name.private.property.sfd"}
{include: "$self"}
]
image:
name: "meta.image.sfd"
begin: "^(Image)(?=:)(.+)$"
end: "^(EndImage)\\b"
contentName: "string.unquoted.raw.data.sfd"
beginCaptures:
1: name: "keyword.control.begin.image.sfd"
2: patterns: [include: "$self"]
endCaptures:
1: name: "keyword.control.end.image.sfd"
pickleData:
name: "meta.pickle-data.sfd"
begin: "^(PickledData)(:)\\s*(\")"
end: '"'
beginCaptures:
1: name: "entity.name.property.sfd"
2: name: "punctuation.separator.dictionary.key-value.sfd"
3: name: "punctuation.definition.string.begin.sfd"
endCaptures:
0: name: "punctuation.definition.string.end.sfd"
patterns: [match: "\\\\.", name: "constant.character.escape.sfd"]
sections:
name: "meta.section.${2:/downcase}.sfd"
begin: "^(Start|Begin)([A-Z]\\w+)(?=:)"
end: "^(End\\2)\\b"
beginCaptures: 0: name: "keyword.control.begin.${2:/downcase}.sfd"
endCaptures: 0: name: "keyword.control.end.${2:/downcase}.sfd"
patterns: [include: "$self"]
control:
name: "keyword.control.${1:/downcase}.sfd"
match: "\\b(Fore|Back|SplineSet|^End\\w+)\\b"
colour:
name: "constant.other.hex.colour.sfd"
match: "(#)[A-Fa-f0-9]{3,}|(?<=\\s)[A-Fa-f0-9]{6,8}"
captures:
1: name: "punctuation.definition.colour.sfd"
encoding:
name: "constant.language.encoding.sfd"
match: "(?i)\\b(ISO[-\\w]+)(?<=\\d)(?=\\s|$)"
# Don't highlight numbers in freeform strings (years/version strings)
copyright:
name: "meta.${1:/downcase}-string.sfd"
begin: "^(Copyright|U?Comments?|\\w+Name)(:)"
end: "$"
beginCaptures:
1: name: "entity.name.property.sfd"
2: name: "punctuation.separator.dictionary.key-value.sfd"
patterns: [include: "source.fontforge#stringEscapes"]
# No idea what this is, but it looks distracting without a fix
# Assuming it's referring to a memory register or something.
address:
match: "\\d+[xX][A-Fa-f0-9]+"
name: "constant.numeric.hexadecimal.sfd"
property:
match: "^([^:]+)(:)"
name: "meta.dictionary.key-value.sfd"
captures:
1: name: "entity.name.property.sfd"
2: name: "punctuation.separator.dictionary.key-value.sfd"


@@ -0,0 +1,11 @@
'menu': [
{
'label': 'Packages'
'submenu': [
'label': 'Wercker Status'
'submenu': [
{ 'label': 'Check now!', 'command': 'wercker-status:checknow' }
]
]
}
]


@@ -1,707 +0,0 @@
Inductive day : Type :=
| monday : day
| tuesday : day
| wednesday : day
| thursday : day
| friday : day
| saturday : day
| sunday : day.
Definition next_weekday (d:day) : day :=
match d with
| monday => tuesday
| tuesday => wednesday
| wednesday => thursday
| thursday => friday
| friday => monday
| saturday => monday
| sunday => monday
end.
Example test_next_weekday:
(next_weekday (next_weekday saturday)) = tuesday.
Proof. simpl. reflexivity. Qed.
Inductive bool : Type :=
| true : bool
| false : bool.
Definition negb (b:bool) : bool :=
match b with
| true => false
| false => true
end.
Definition andb (b1:bool) (b2:bool) : bool :=
match b1 with
| true => b2
| false => false
end.
Definition orb (b1:bool) (b2:bool) : bool :=
match b1 with
| true => true
| false => b2
end.
Example test_orb1: (orb true false) = true.
Proof. simpl. reflexivity. Qed.
Example test_orb2: (orb false false) = false.
Proof. simpl. reflexivity. Qed.
Example test_orb3: (orb false true) = true.
Proof. simpl. reflexivity. Qed.
Example test_orb4: (orb true true) = true.
Proof. simpl. reflexivity. Qed.
Definition nandb (b1: bool) (b2:bool) : bool :=
match b1 with
| true => match b2 with
| false => true
| true => false
end
| false => true
end.
Example test_nandb1: (nandb true false) = true.
Proof. simpl. reflexivity. Qed.
Example test_nandb2: (nandb false false) = true.
Proof. simpl. reflexivity. Qed.
Example test_nandb3: (nandb false true) = true.
Proof. simpl. reflexivity. Qed.
Example test_nandb4: (nandb true true) = false.
Proof. simpl. reflexivity. Qed.
Definition andb3 (b1: bool) (b2:bool) (b3:bool) : bool :=
match b1 with
| false => false
| true => match b2 with
| false => false
| true => b3
end
end.
Example test_andb31: (andb3 true true true) = true.
Proof. simpl. reflexivity. Qed.
Example test_andb32: (andb3 false true true) = false.
Proof. simpl. reflexivity. Qed.
Example test_andb33: (andb3 true false true) = false.
Proof. simpl. reflexivity. Qed.
Example test_andb34: (andb3 true true false) = false.
Proof. simpl. reflexivity. Qed.
Module Playground1.
Inductive nat : Type :=
| O : nat
| S : nat -> nat.
Definition pred (n : nat) : nat :=
match n with
| O => O
| S n' => n'
end.
Definition minustwo (n : nat) : nat :=
match n with
| O => O
| S O => O
| S (S n') => n'
end.
Fixpoint evenb (n : nat) : bool :=
match n with
| O => true
| S O => false
| S (S n') => evenb n'
end.
Definition oddb (n : nat) : bool := negb (evenb n).
Example test_oddb1: (oddb (S O)) = true.
Proof. reflexivity. Qed.
Example test_oddb2: (oddb (S (S (S (S O))))) = false.
Proof. reflexivity. Qed.
Fixpoint plus (n : nat) (m : nat) : nat :=
match n with
| O => m
| S n' => S (plus n' m)
end.
Fixpoint mult (n m : nat) : nat :=
match n with
| O => O
| S n' => plus m (mult n' m)
end.
Fixpoint minus (n m : nat) : nat :=
match n, m with
| O, _ => n
| S n', O => S n'
| S n', S m' => minus n' m'
end.
Fixpoint exp (base power : nat) : nat :=
match power with
| O => S O
| S p => mult base (exp base p)
end.
Fixpoint factorial (n : nat) : nat :=
match n with
| O => S O
| S n' => mult n (factorial n')
end.
Example test_factorial1: (factorial (S (S (S O)))) = (S (S (S (S (S (S O)))))).
Proof. simpl. reflexivity. Qed.
Notation "x + y" := (plus x y) (at level 50, left associativity) : nat_scope.
Notation "x - y" := (minus x y) (at level 50, left associativity) : nat_scope.
Notation "x * y" := (mult x y) (at level 40, left associativity) : nat_scope.
Fixpoint beq_nat (n m : nat) : bool :=
match n with
| O => match m with
| O => true
| S m' => false
end
| S n' => match m with
| O => false
| S m' => beq_nat n' m'
end
end.
Fixpoint ble_nat (n m : nat) : bool :=
match n with
| O => true
| S n' =>
match m with
| O => false
| S m' => ble_nat n' m'
end
end.
Example test_ble_nat1: (ble_nat (S (S O)) (S (S O))) = true.
Proof. simpl. reflexivity. Qed.
Example test_ble_nat2: (ble_nat (S (S O)) (S (S (S (S O))))) = true.
Proof. simpl. reflexivity. Qed.
Example test_ble_nat3: (ble_nat (S (S (S (S O)))) (S (S O))) = false.
Proof. simpl. reflexivity. Qed.
Definition blt_nat (n m : nat) : bool :=
(andb (negb (beq_nat n m)) (ble_nat n m)).
Example test_blt_nat1: (blt_nat (S (S O)) (S (S O))) = false.
Proof. simpl. reflexivity. Qed.
Example test_blt_nat3: (blt_nat (S (S (S (S O)))) (S (S O))) = false.
Proof. simpl. reflexivity. Qed.
Example test_blt_nat2 : (blt_nat (S (S O)) (S (S (S (S O))))) = true.
Proof. simpl. reflexivity. Qed.
Theorem plus_O_n : forall n : nat, O + n = n.
Proof.
simpl. reflexivity. Qed.
Theorem plus_O_n' : forall n : nat, O + n = n.
Proof.
reflexivity. Qed.
Theorem plus_O_n'' : forall n : nat, O + n = n.
Proof.
intros n. reflexivity. Qed.
Theorem plus_1_1 : forall n : nat, (S O) + n = S n.
Proof.
intros n. reflexivity. Qed.
Theorem mult_0_1: forall n : nat, O * n = O.
Proof.
intros n. reflexivity. Qed.
Theorem plus_id_example : forall n m:nat,
n = m -> n + n = m + m.
Proof.
intros n m.
intros H.
rewrite -> H.
reflexivity. Qed.
Theorem plus_id_exercise : forall n m o: nat,
n = m -> m = o -> n + m = m + o.
Proof.
intros n m o.
intros H.
intros H'.
rewrite -> H.
rewrite <- H'.
reflexivity.
Qed.
Theorem mult_0_plus : forall n m : nat,
(O + n) * m = n * m.
Proof.
intros n m.
rewrite -> plus_O_n.
reflexivity. Qed.
Theorem mult_1_plus : forall n m: nat,
((S O) + n) * m = m + (n * m).
Proof.
intros n m.
rewrite -> plus_1_1.
reflexivity.
Qed.
Theorem mult_1 : forall n : nat,
n * (S O) = n.
Proof.
intros n.
induction n as [| n'].
reflexivity.
simpl.
rewrite -> IHn'.
reflexivity.
Qed.
Theorem plus_1_neq_0 : forall n : nat,
beq_nat (n + (S O)) O = false.
Proof.
intros n.
destruct n as [| n'].
reflexivity.
reflexivity.
Qed.
Theorem zero_nbeq_plus_1 : forall n : nat,
beq_nat O (n + (S O)) = false.
Proof.
intros n.
destruct n.
reflexivity.
reflexivity.
Qed.
Require String. Open Scope string_scope.
Ltac move_to_top x :=
match reverse goal with
| H : _ |- _ => try move x after H
end.
Tactic Notation "assert_eq" ident(x) constr(v) :=
let H := fresh in
assert (x = v) as H by reflexivity;
clear H.
Tactic Notation "Case_aux" ident(x) constr(name) :=
first [
set (x := name); move_to_top x
| assert_eq x name; move_to_top x
| fail 1 "because we are working on a different case" ].
Ltac Case name := Case_aux Case name.
Ltac SCase name := Case_aux SCase name.
Ltac SSCase name := Case_aux SSCase name.
Ltac SSSCase name := Case_aux SSSCase name.
Ltac SSSSCase name := Case_aux SSSSCase name.
Ltac SSSSSCase name := Case_aux SSSSSCase name.
Ltac SSSSSSCase name := Case_aux SSSSSSCase name.
Ltac SSSSSSSCase name := Case_aux SSSSSSSCase name.
Theorem andb_true_elim1 : forall b c : bool,
andb b c = true -> b = true.
Proof.
intros b c H.
destruct b.
Case "b = true".
reflexivity.
Case "b = false".
rewrite <- H. reflexivity. Qed.
Theorem plus_0_r : forall n : nat, n + O = n.
Proof.
intros n. induction n as [| n'].
Case "n = 0". reflexivity.
Case "n = S n'". simpl. rewrite -> IHn'. reflexivity. Qed.
Theorem minus_diag : forall n,
minus n n = O.
Proof.
intros n. induction n as [| n'].
Case "n = 0".
simpl. reflexivity.
Case "n = S n'".
simpl. rewrite -> IHn'. reflexivity. Qed.
Theorem mult_0_r : forall n:nat,
n * O = O.
Proof.
intros n. induction n as [| n'].
Case "n = 0".
reflexivity.
Case "n = S n'".
simpl. rewrite -> IHn'. reflexivity. Qed.
Theorem plus_n_Sm : forall n m : nat,
S (n + m) = n + (S m).
Proof.
intros n m. induction n as [| n'].
Case "n = 0".
reflexivity.
Case "n = S n'".
simpl. rewrite -> IHn'. reflexivity. Qed.
Theorem plus_assoc : forall n m p : nat,
n + (m + p) = (n + m) + p.
Proof.
intros n m p.
induction n as [| n'].
reflexivity.
simpl.
rewrite -> IHn'.
reflexivity. Qed.
Theorem plus_distr : forall n m: nat, S (n + m) = n + (S m).
Proof.
intros n m. induction n as [| n'].
Case "n = 0".
reflexivity.
Case "n = S n'".
simpl. rewrite -> IHn'. reflexivity. Qed.
Theorem mult_distr : forall n m: nat, n * ((S O) + m) = n * (S m).
Proof.
intros n m.
induction n as [| n'].
reflexivity.
reflexivity.
Qed.
Theorem plus_comm : forall n m : nat,
n + m = m + n.
Proof.
intros n m.
induction n as [| n'].
Case "n = 0".
simpl.
rewrite -> plus_0_r.
reflexivity.
Case "n = S n'".
simpl.
rewrite -> IHn'.
rewrite -> plus_distr.
reflexivity. Qed.
Fixpoint double (n:nat) :=
match n with
| O => O
| S n' => S (S (double n'))
end.
Lemma double_plus : forall n, double n = n + n.
Proof.
intros n. induction n as [| n'].
Case "n = 0".
reflexivity.
Case "n = S n'".
simpl. rewrite -> IHn'.
rewrite -> plus_distr. reflexivity.
Qed.
Theorem beq_nat_refl : forall n : nat,
true = beq_nat n n.
Proof.
intros n. induction n as [| n'].
Case "n = 0".
reflexivity.
Case "n = S n".
simpl. rewrite <- IHn'.
reflexivity. Qed.
Theorem plus_rearrange: forall n m p q : nat,
(n + m) + (p + q) = (m + n) + (p + q).
Proof.
intros n m p q.
assert(H: n + m = m + n).
Case "Proof by assertion".
rewrite -> plus_comm. reflexivity.
rewrite -> H. reflexivity. Qed.
Theorem plus_swap : forall n m p: nat,
n + (m + p) = m + (n + p).
Proof.
intros n m p.
rewrite -> plus_assoc.
assert(H: m + (n + p) = (m + n) + p).
rewrite -> plus_assoc.
reflexivity.
rewrite -> H.
assert(H2: m + n = n + m).
rewrite -> plus_comm.
reflexivity.
rewrite -> H2.
reflexivity.
Qed.
Theorem plus_swap' : forall n m p: nat,
n + (m + p) = m + (n + p).
Proof.
intros n m p.
rewrite -> plus_assoc.
assert(H: m + (n + p) = (m + n) + p).
rewrite -> plus_assoc.
reflexivity.
rewrite -> H.
replace (m + n) with (n + m).
rewrite -> plus_comm.
reflexivity.
rewrite -> plus_comm.
reflexivity.
Qed.
Theorem mult_1_distr: forall m n: nat,
n * ((S O) + m) = n * (S O) + n * m.
Proof.
intros n m.
rewrite -> mult_1.
rewrite -> plus_1_1.
simpl.
induction m as [|m'].
simpl.
reflexivity.
simpl.
rewrite -> plus_swap.
rewrite <- IHm'.
reflexivity.
Qed.
Theorem mult_comm: forall m n : nat,
m * n = n * m.
Proof.
intros m n.
induction n as [| n'].
Case "n = 0".
simpl.
rewrite -> mult_0_r.
reflexivity.
Case "n = S n'".
simpl.
rewrite <- mult_distr.
rewrite -> mult_1_distr.
rewrite -> mult_1.
rewrite -> IHn'.
reflexivity.
Qed.
Theorem evenb_next : forall n : nat,
evenb n = evenb (S (S n)).
Proof.
intros n.
Admitted.
Theorem negb_negb : forall n : bool,
n = negb (negb n).
Proof.
intros n.
destruct n.
reflexivity.
reflexivity.
Qed.
Theorem evenb_n_oddb_Sn : forall n : nat,
evenb n = negb (evenb (S n)).
Proof.
intros n.
induction n as [|n'].
reflexivity.
assert(H: evenb n' = evenb (S (S n'))).
reflexivity.
rewrite <- H.
rewrite -> IHn'.
rewrite <- negb_negb.
reflexivity.
Qed.
(*Fixpoint bad (n : nat) : bool :=
match n with
| O => true
| S O => bad (S n)
| S (S n') => bad n'
end.*)
Theorem ble_nat_refl : forall n:nat,
true = ble_nat n n.
Proof.
intros n.
induction n as [|n'].
Case "n = 0".
reflexivity.
Case "n = S n".
simpl.
rewrite <- IHn'.
reflexivity.
Qed.
Theorem zero_nbeq_S : forall n: nat,
beq_nat O (S n) = false.
Proof.
intros n.
reflexivity.
Qed.
Theorem andb_false_r : forall b : bool,
andb b false = false.
Proof.
intros b.
destruct b.
reflexivity.
reflexivity.
Qed.
Theorem plus_ble_compat_1 : forall n m p : nat,
ble_nat n m = true -> ble_nat (p + n) (p + m) = true.
Proof.
intros n m p.
intros H.
induction p.
Case "p = 0".
simpl.
rewrite -> H.
reflexivity.
Case "p = S p'".
simpl.
rewrite -> IHp.
reflexivity.
Qed.
Theorem S_nbeq_0 : forall n:nat,
beq_nat (S n) O = false.
Proof.
intros n.
reflexivity.
Qed.
Theorem mult_1_1 : forall n:nat, (S O) * n = n.
Proof.
intros n.
simpl.
rewrite -> plus_0_r.
reflexivity. Qed.
Theorem all3_spec : forall b c : bool,
orb (andb b c)
(orb (negb b)
(negb c))
= true.
Proof.
intros b c.
destruct b.
destruct c.
reflexivity.
reflexivity.
reflexivity.
Qed.
Lemma mult_plus_1 : forall n m : nat,
S(m + n) = m + (S n).
Proof.
intros n m.
induction m.
reflexivity.
simpl.
rewrite -> IHm.
reflexivity.
Qed.
Theorem mult_mult : forall n m : nat,
n * (S m) = n * m + n.
Proof.
intros n m.
induction n.
reflexivity.
simpl.
rewrite -> IHn.
rewrite -> plus_assoc.
rewrite -> mult_plus_1.
reflexivity.
Qed.
Theorem mult_plus_distr_r : forall n m p:nat,
(n + m) * p = (n * p) + (m * p).
Proof.
intros n m p.
induction p.
rewrite -> mult_0_r.
rewrite -> mult_0_r.
rewrite -> mult_0_r.
reflexivity.
rewrite -> mult_mult.
rewrite -> mult_mult.
rewrite -> mult_mult.
rewrite -> IHp.
assert(H1: ((n * p) + n) + (m * p + m) = (n * p) + (n + (m * p + m))).
rewrite <- plus_assoc.
reflexivity.
rewrite -> H1.
assert(H2: (n + (m * p + m)) = (m * p + (n + m))).
rewrite -> plus_swap.
reflexivity.
rewrite -> H2.
assert(H3: (n * p) + (m * p + (n + m)) = ((n * p ) + (m * p)) + (n + m)).
rewrite -> plus_assoc.
reflexivity.
rewrite -> H3.
reflexivity.
Qed.
Theorem mult_assoc : forall n m p : nat,
n * (m * p) = (n * m) * p.
Proof.
intros n m p.
induction n.
simpl.
reflexivity.
simpl.
rewrite -> mult_plus_distr_r.
rewrite -> IHn.
reflexivity.
Qed.
Inductive bin : Type :=
| BO : bin
| D : bin -> bin
| M : bin -> bin.
Fixpoint incbin (n : bin) : bin :=
match n with
| BO => M (BO)
| D n' => M n'
| M n' => D (incbin n')
end.
Fixpoint bin2un (n : bin) : nat :=
match n with
| BO => O
| D n' => double (bin2un n')
| M n' => S (double (bin2un n'))
end.
Theorem bin_comm : forall n : bin,
bin2un(incbin n) = S (bin2un n).
Proof.
intros n.
induction n.
reflexivity.
reflexivity.
simpl.
rewrite -> IHn.
reflexivity.
Qed.
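(* Worked illustration (added comment; not in the original sample): reading
   [BO] as zero, [D n] as [2n] and [M n] as [2n+1], the value
   [M (D (M BO))] denotes binary 101, i.e. 5. [incbin] sends it to
   [D (M (M BO))], binary 110, i.e. 6, and [bin_comm] above confirms that
   [bin2un] agrees with this reading. *)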
End Playground1.

85 samples/Coq/Computation.v Normal file

@@ -0,0 +1,85 @@
(** The definition of computations, used to represent interactive programs. *)
Require Import Coq.NArith.NArith.
Require Import ListString.All.
Local Open Scope type.
(** System calls. *)
Module Command.
Inductive t :=
| AskCard
| AskPIN
| CheckPIN (pin : N)
| AskAmount
| CheckAmount (amount : N)
| GiveCard
| GiveAmount (amount : N)
| ShowError (message : LString.t).
(** The type of an answer for a command depends on the value of the command. *)
Definition answer (command : t) : Type :=
match command with
| AskCard => bool (* If the given card seems valid. *)
| AskPIN => option N (* A number or cancellation. *)
| CheckPIN _ => bool (* If the PIN number is valid. *)
| AskAmount => option N (* A number or cancellation. *)
| CheckAmount _ => bool (* If the amount can be withdrawn. *)
| GiveCard => bool (* If the card was given. *)
| GiveAmount _ => bool (* If the money was given. *)
| ShowError _ => unit (* Show an error message. *)
end.
End Command.
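(* Quick sanity checks (added for illustration; not part of the original
   sample): [Command.answer] computes the expected answer type directly
   from the command. *)
Check (true : Command.answer Command.AskCard).
Check (Some 1234%N : Command.answer Command.AskPIN).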
(** Computations with I/Os. *)
Module C.
(** A computation can either do nothing, or perform a system call and wait
for the answer before running another computation. *)
Inductive t : Type :=
| Ret : t
| Call : forall (command : Command.t), (Command.answer command -> t) -> t.
Arguments Ret.
Arguments Call _ _.
(** Some optional notations. *)
Module Notations.
(** A nicer notation for `Ret`. *)
Definition ret : t :=
Ret.
(** We define an explicit apply function so that Coq does not try to expand
the notations everywhere. *)
Definition apply {A B} (f : A -> B) (x : A) := f x.
(** System call. *)
Notation "'call!' answer ':=' command 'in' X" :=
(Call command (fun answer => X))
(at level 200, answer ident, command at level 100, X at level 200).
(** System call with typed answer. *)
Notation "'call!' answer : A ':=' command 'in' X" :=
(Call command (fun (answer : A) => X))
(at level 200, answer ident, command at level 100, A at level 200, X at level 200).
(** System call ignoring the answer. *)
Notation "'do_call!' command 'in' X" :=
(Call command (fun _ => X))
(at level 200, command at level 100, X at level 200).
(** This notation is useful to compose computations which wait for a
continuation. We deliberately omit an explicit bind operator, to keep the
language and the proofs simple. *)
Notation "'let!' x ':=' X 'in' Y" :=
(apply X (fun x => Y))
(at level 200, x ident, X at level 100, Y at level 200).
(** Let with a typed answer. *)
Notation "'let!' x : A ':=' X 'in' Y" :=
(apply X (fun (x : A) => Y))
(at level 200, x ident, X at level 100, A at level 200, Y at level 200).
(** Let ignoring the answer. *)
Notation "'do!' X 'in' Y" :=
(apply X (fun _ => Y))
(at level 200, X at level 100, Y at level 200).
End Notations.
End C.
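(* A minimal usage sketch (added; [beep] is a hypothetical definition, not
   part of the sample): [do_call!] fires a command, discards its answer and
   continues, while [ret] ends the computation. *)
Import C.Notations.
Definition beep : C.t :=
  do_call! Command.ShowError (LString.s "beep") in
  ret.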


@@ -1,290 +0,0 @@
(** A development of Treesort on Heap trees. It has an average
complexity of O(n.log n) but of O(n²) in the worst case (e.g. if
the list is already sorted) *)
(* G. Huet 1-9-95 uses Multiset *)
Require Import List Multiset PermutSetoid Relations Sorting.
Section defs.
(** * Trees and heap trees *)
(** ** Definition of trees over an ordered set *)
Variable A : Type.
Variable leA : relation A.
Variable eqA : relation A.
Let gtA (x y:A) := ~ leA x y.
Hypothesis leA_dec : forall x y:A, {leA x y} + {leA y x}.
Hypothesis eqA_dec : forall x y:A, {eqA x y} + {~ eqA x y}.
Hypothesis leA_refl : forall x y:A, eqA x y -> leA x y.
Hypothesis leA_trans : forall x y z:A, leA x y -> leA y z -> leA x z.
Hypothesis leA_antisym : forall x y:A, leA x y -> leA y x -> eqA x y.
Hint Resolve leA_refl.
Hint Immediate eqA_dec leA_dec leA_antisym.
Let emptyBag := EmptyBag A.
Let singletonBag := SingletonBag _ eqA_dec.
Inductive Tree :=
| Tree_Leaf : Tree
| Tree_Node : A -> Tree -> Tree -> Tree.
(** [a] is lower than a Tree [T] if [T] is a Leaf
or [T] is a Node holding [b>a] *)
Definition leA_Tree (a:A) (t:Tree) :=
match t with
| Tree_Leaf => True
| Tree_Node b T1 T2 => leA a b
end.
Lemma leA_Tree_Leaf : forall a:A, leA_Tree a Tree_Leaf.
Proof.
simpl; auto with datatypes.
Qed.
Lemma leA_Tree_Node :
forall (a b:A) (G D:Tree), leA a b -> leA_Tree a (Tree_Node b G D).
Proof.
simpl; auto with datatypes.
Qed.
(** ** The heap property *)
Inductive is_heap : Tree -> Prop :=
| nil_is_heap : is_heap Tree_Leaf
| node_is_heap :
forall (a:A) (T1 T2:Tree),
leA_Tree a T1 ->
leA_Tree a T2 ->
is_heap T1 -> is_heap T2 -> is_heap (Tree_Node a T1 T2).
Lemma invert_heap :
forall (a:A) (T1 T2:Tree),
is_heap (Tree_Node a T1 T2) ->
leA_Tree a T1 /\ leA_Tree a T2 /\ is_heap T1 /\ is_heap T2.
Proof.
intros; inversion H; auto with datatypes.
Qed.
(* This lemma ought to be generated automatically by the Inversion tools *)
Lemma is_heap_rect :
forall P:Tree -> Type,
P Tree_Leaf ->
(forall (a:A) (T1 T2:Tree),
leA_Tree a T1 ->
leA_Tree a T2 ->
is_heap T1 -> P T1 -> is_heap T2 -> P T2 -> P (Tree_Node a T1 T2)) ->
forall T:Tree, is_heap T -> P T.
Proof.
simple induction T; auto with datatypes.
intros a G PG D PD PN.
elim (invert_heap a G D); auto with datatypes.
intros H1 H2; elim H2; intros H3 H4; elim H4; intros.
apply X0; auto with datatypes.
Qed.
(* This lemma ought to be generated automatically by the Inversion tools *)
Lemma is_heap_rec :
forall P:Tree -> Set,
P Tree_Leaf ->
(forall (a:A) (T1 T2:Tree),
leA_Tree a T1 ->
leA_Tree a T2 ->
is_heap T1 -> P T1 -> is_heap T2 -> P T2 -> P (Tree_Node a T1 T2)) ->
forall T:Tree, is_heap T -> P T.
Proof.
simple induction T; auto with datatypes.
intros a G PG D PD PN.
elim (invert_heap a G D); auto with datatypes.
intros H1 H2; elim H2; intros H3 H4; elim H4; intros.
apply X; auto with datatypes.
Qed.
Lemma low_trans :
forall (T:Tree) (a b:A), leA a b -> leA_Tree b T -> leA_Tree a T.
Proof.
simple induction T; auto with datatypes.
intros; simpl; apply leA_trans with b; auto with datatypes.
Qed.
(** ** Merging two sorted lists *)
Inductive merge_lem (l1 l2:list A) : Type :=
merge_exist :
forall l:list A,
Sorted leA l ->
meq (list_contents _ eqA_dec l)
(munion (list_contents _ eqA_dec l1) (list_contents _ eqA_dec l2)) ->
(forall a, HdRel leA a l1 -> HdRel leA a l2 -> HdRel leA a l) ->
merge_lem l1 l2.
Require Import Morphisms.
Instance: Equivalence (@meq A).
Proof. constructor; auto with datatypes. red. apply meq_trans. Defined.
Instance: Proper (@meq A ++> @meq _ ++> @meq _) (@munion A).
Proof. intros x y H x' y' H'. now apply meq_congr. Qed.
Lemma merge :
forall l1:list A, Sorted leA l1 ->
forall l2:list A, Sorted leA l2 -> merge_lem l1 l2.
Proof.
fix 1; intros; destruct l1.
apply merge_exist with l2; auto with datatypes.
rename l1 into l.
revert l2 H0. fix 1. intros.
destruct l2 as [|a0 l0].
apply merge_exist with (a :: l); simpl; auto with datatypes.
elim (leA_dec a a0); intros.
(* 1 (leA a a0) *)
apply Sorted_inv in H. destruct H.
destruct (merge l H (a0 :: l0) H0).
apply merge_exist with (a :: l1). clear merge merge0.
auto using cons_sort, cons_leA with datatypes.
simpl. rewrite m. now rewrite munion_ass.
intros. apply cons_leA.
apply (@HdRel_inv _ leA) with l; trivial with datatypes.
(* 2 (leA a0 a) *)
apply Sorted_inv in H0. destruct H0.
destruct (merge0 l0 H0). clear merge merge0.
apply merge_exist with (a0 :: l1);
auto using cons_sort, cons_leA with datatypes.
simpl; rewrite m. simpl. setoid_rewrite munion_ass at 1. rewrite munion_comm.
repeat rewrite munion_ass. setoid_rewrite munion_comm at 3. reflexivity.
intros. apply cons_leA.
apply (@HdRel_inv _ leA) with l0; trivial with datatypes.
Qed.
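(* Illustration (added): merging the sorted lists [1; 3] and [2; 4] yields
   [1; 2; 3; 4]; the [meq]/[munion] clause of [merge_lem] states precisely
   that the multiset contents of the result is the union of the inputs'. *)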
(** ** From trees to multisets *)
(** contents of a tree as a multiset *)
(** Nota Bene : In what follows the definition of SingletonBag
is not used. Actually, we could just take as postulate:
[Parameter SingletonBag : A->multiset]. *)
Fixpoint contents (t:Tree) : multiset A :=
match t with
| Tree_Leaf => emptyBag
| Tree_Node a t1 t2 =>
munion (contents t1) (munion (contents t2) (singletonBag a))
end.
(** equivalence of two trees is equality of corresponding multisets *)
Definition equiv_Tree (t1 t2:Tree) := meq (contents t1) (contents t2).
(** * From lists to sorted lists *)
(** ** Specification of heap insertion *)
Inductive insert_spec (a:A) (T:Tree) : Type :=
insert_exist :
forall T1:Tree,
is_heap T1 ->
meq (contents T1) (munion (contents T) (singletonBag a)) ->
(forall b:A, leA b a -> leA_Tree b T -> leA_Tree b T1) ->
insert_spec a T.
Lemma insert : forall T:Tree, is_heap T -> forall a:A, insert_spec a T.
Proof.
simple induction 1; intros.
apply insert_exist with (Tree_Node a Tree_Leaf Tree_Leaf);
auto using node_is_heap, nil_is_heap, leA_Tree_Leaf with datatypes.
simpl; unfold meq, munion; auto using node_is_heap with datatypes.
elim (leA_dec a a0); intros.
elim (X a0); intros.
apply insert_exist with (Tree_Node a T2 T0);
auto using node_is_heap, nil_is_heap, leA_Tree_Leaf with datatypes.
simpl; apply treesort_twist1; trivial with datatypes.
elim (X a); intros T3 HeapT3 ConT3 LeA.
apply insert_exist with (Tree_Node a0 T2 T3);
auto using node_is_heap, nil_is_heap, leA_Tree_Leaf with datatypes.
apply node_is_heap; auto using node_is_heap, nil_is_heap, leA_Tree_Leaf with datatypes.
apply low_trans with a; auto with datatypes.
apply LeA; auto with datatypes.
apply low_trans with a; auto with datatypes.
simpl; apply treesort_twist2; trivial with datatypes.
Qed.
(** ** Building a heap from a list *)
Inductive build_heap (l:list A) : Type :=
heap_exist :
forall T:Tree,
is_heap T ->
meq (list_contents _ eqA_dec l) (contents T) -> build_heap l.
Lemma list_to_heap : forall l:list A, build_heap l.
Proof.
simple induction l.
apply (heap_exist nil Tree_Leaf); auto with datatypes.
simpl; unfold meq; exact nil_is_heap.
simple induction 1.
intros T i m; elim (insert T i a).
intros; apply heap_exist with T1; simpl; auto with datatypes.
apply meq_trans with (munion (contents T) (singletonBag a)).
apply meq_trans with (munion (singletonBag a) (contents T)).
apply meq_right; trivial with datatypes.
apply munion_comm.
apply meq_sym; trivial with datatypes.
Qed.
(** ** Building the sorted list *)
Inductive flat_spec (T:Tree) : Type :=
flat_exist :
forall l:list A,
Sorted leA l ->
(forall a:A, leA_Tree a T -> HdRel leA a l) ->
meq (contents T) (list_contents _ eqA_dec l) -> flat_spec T.
Lemma heap_to_list : forall T:Tree, is_heap T -> flat_spec T.
Proof.
intros T h; elim h; intros.
apply flat_exist with (nil (A:=A)); auto with datatypes.
elim X; intros l1 s1 i1 m1; elim X0; intros l2 s2 i2 m2.
elim (merge _ s1 _ s2); intros.
apply flat_exist with (a :: l); simpl; auto with datatypes.
apply meq_trans with
(munion (list_contents _ eqA_dec l1)
(munion (list_contents _ eqA_dec l2) (singletonBag a))).
apply meq_congr; auto with datatypes.
apply meq_trans with
(munion (singletonBag a)
(munion (list_contents _ eqA_dec l1) (list_contents _ eqA_dec l2))).
apply munion_rotate.
apply meq_right; apply meq_sym; trivial with datatypes.
Qed.
(** * Specification of treesort *)
Theorem treesort :
forall l:list A,
{m : list A | Sorted leA m & permutation _ eqA_dec l m}.
Proof.
intro l; unfold permutation.
elim (list_to_heap l).
intros.
elim (heap_to_list T); auto with datatypes.
intros.
exists l0; auto with datatypes.
apply meq_trans with (contents T); trivial with datatypes.
Qed.
End defs.

5944 samples/Coq/JsCorrectness.v Normal file
File diff suppressed because it is too large.


@@ -0,0 +1,249 @@
Set Implicit Arguments.
Require Import JsSyntax JsInterpreterMonads JsInterpreter JsInit.
Require Import LibFix LibList.
Require Export Shared.
Require Export LibTactics LibLogic LibReflect LibList
LibOperation LibStruct LibNat LibEpsilon LibFunc LibHeap.
Require Flocq.Appli.Fappli_IEEE Flocq.Appli.Fappli_IEEE_bits.
(* Here stand some commands to extract the interpreter relatively correctly to OCaml. *)
Extraction Language Ocaml.
Require Import ExtrOcamlBasic.
Require Import ExtrOcamlNatInt.
Require Import ExtrOcamlString.
(* Optimal fixpoint. *)
Extraction Inline FixFun3 FixFun3Mod FixFun4 FixFun4Mod FixFunMod curry3 uncurry3 curry4 uncurry4.
(* As classical logic statements are now unused, they should not be extracted
(otherwise, spurious errors would be raised). *)
Extraction Inline epsilon epsilon_def classicT arbitrary indefinite_description Inhab_witness Fix isTrue.
(**************************************************************)
(** ** Numerical values *)
(* number *)
Extract Inductive positive => float
[ "(fun p -> 1. +. (2. *. p))"
"(fun p -> 2. *. p)"
"1." ]
"(fun f2p1 f2p f1 p ->
if p <= 1. then f1 () else if mod_float p 2. = 0. then f2p (floor (p /. 2.)) else f2p1 (floor (p /. 2.)))".
Extract Inductive Z => float [ "0." "" "(~-.)" ]
"(fun f0 fp fn z -> if z=0. then f0 () else if z>0. then fp z else fn (~-. z))".
Extract Inductive N => float [ "0." "" ]
"(fun f0 fp n -> if n=0. then f0 () else fp n)".
Extract Constant Z.add => "(+.)".
Extract Constant Z.succ => "(+.) 1.".
Extract Constant Z.pred => "(fun x -> x -. 1.)".
Extract Constant Z.sub => "(-.)".
Extract Constant Z.mul => "( *. )".
Extract Constant Z.opp => "(~-.)".
Extract Constant Z.abs => "abs_float".
Extract Constant Z.min => "min".
Extract Constant Z.max => "max".
Extract Constant Z.compare =>
"fun x y -> if x=y then Eq else if x<y then Lt else Gt".
Extract Constant Pos.add => "(+.)".
Extract Constant Pos.succ => "(+.) 1.".
Extract Constant Pos.pred => "(fun x -> x -. 1.)".
Extract Constant Pos.sub => "(-.)".
Extract Constant Pos.mul => "( *. )".
Extract Constant Pos.min => "min".
Extract Constant Pos.max => "max".
Extract Constant Pos.compare =>
"fun x y -> if x=y then Eq else if x<y then Lt else Gt".
Extract Constant Pos.compare_cont =>
"fun x y c -> if x=y then c else if x<y then Lt else Gt".
Extract Constant N.add => "(+.)".
Extract Constant N.succ => "(+.) 1.".
Extract Constant N.pred => "(fun x -> x -. 1.)".
Extract Constant N.sub => "(-.)".
Extract Constant N.mul => "( *. )".
Extract Constant N.min => "min".
Extract Constant N.max => "max".
Extract Constant N.div => "(fun x y -> if x = 0. then 0. else floor (x /. y))".
Extract Constant N.modulo => "mod_float".
Extract Constant N.compare =>
"fun x y -> if x=y then Eq else if x<y then Lt else Gt".
Extract Inductive Fappli_IEEE.binary_float => float [
"(fun s -> if s then (0.) else (-0.))"
"(fun s -> if s then infinity else neg_infinity)"
"nan"
"(fun (s, m, e) -> failwith ""FIXME: No extraction from binary float allowed yet."")"
].
Extract Constant JsNumber.of_int => "fun x -> x".
Extract Constant JsNumber.nan => "nan".
Extract Constant JsNumber.zero => "0.".
Extract Constant JsNumber.neg_zero => "(-0.)".
Extract Constant JsNumber.one => "1.".
Extract Constant JsNumber.infinity => "infinity".
Extract Constant JsNumber.neg_infinity => "neg_infinity".
Extract Constant JsNumber.max_value => "max_float".
Extract Constant JsNumber.min_value => "(Int64.float_of_bits Int64.one)".
Extract Constant JsNumber.pi => "(4. *. atan 1.)".
Extract Constant JsNumber.e => "(exp 1.)".
Extract Constant JsNumber.ln2 => "(log 2.)".
Extract Constant JsNumber.floor => "floor".
Extract Constant JsNumber.absolute => "abs_float".
Extract Constant JsNumber.from_string =>
"(fun s ->
try
let s = (String.concat """" (List.map (String.make 1) s)) in
if s = """" then 0. else float_of_string s
with Failure ""float_of_string"" -> nan)
(* Note that we're using `float_of_string' there, which does not have the same
behavior as JavaScript. For instance it will read ""022"" as 22 instead of
18, which should be the JavaScript result for it. *)".
Extract Constant JsNumber.to_string =>
"(fun f ->
prerr_string (""Warning: JsNumber.to_string called. This might be responsible for errors. Argument value: "" ^ string_of_float f ^ ""."");
prerr_newline();
let string_of_number n =
let sfn = string_of_float n in
(if (sfn = ""inf"") then ""Infinity"" else
if (sfn = ""-inf"") then ""-Infinity"" else
if (sfn = ""nan"") then ""NaN"" else
let inum = int_of_float n in
if (float_of_int inum = n) then (string_of_int inum) else (string_of_float n)) in
let ret = ref [] in (* Ugly, but the API for OCaml string is not very functional... *)
String.iter (fun c -> ret := c :: !ret) (string_of_number f);
List.rev !ret)
(* Note that this is ugly, we should use the spec of JsNumber.to_string here (9.8.1). *)".
Extract Constant JsNumber.add => "(+.)".
Extract Constant JsNumber.sub => "(-.)".
Extract Constant JsNumber.mult => "( *. )".
Extract Constant JsNumber.div => "(/.)".
Extract Constant JsNumber.fmod => "mod_float".
Extract Constant JsNumber.neg => "(~-.)".
Extract Constant JsNumber.sign => "(fun f -> float_of_int (compare f 0.))".
Extract Constant JsNumber.number_comparable => "(fun n1 n2 -> 0 = compare n1 n2)".
Extract Constant JsNumber.lt_bool => "(<)".
Extract Constant JsNumber.to_int32 =>
"fun n ->
match classify_float n with
| FP_normal | FP_subnormal ->
let i32 = 2. ** 32. in
let i31 = 2. ** 31. in
let posint = (if n < 0. then (-1.) else 1.) *. (floor (abs_float n)) in
let int32bit =
let smod = mod_float posint i32 in
if smod < 0. then smod +. i32 else smod
in
(if int32bit >= i31 then int32bit -. i32 else int32bit)
| _ -> 0.". (* LATER: do in Coq. Spec is 9.5, p. 47.*)
Extract Constant JsNumber.to_uint32 =>
"fun n ->
match classify_float n with
| FP_normal | FP_subnormal ->
let i32 = 2. ** 32. in
let posint = (if n < 0. then (-1.) else 1.) *. (floor (abs_float n)) in
let int32bit =
let smod = mod_float posint i32 in
if smod < 0. then smod +. i32 else smod
in
int32bit
| _ -> 0.". (* LATER: do in Coq. Spec is 9.6, p. 47. *)
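(* Illustration (added): both conversions first wrap modulo 2^32; ToInt32
   then re-centres the result into [-2^31, 2^31), so an input of 2^31 maps
   to -2^31, whereas ToUint32 leaves it at 2^31. *)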
Extract Constant JsNumber.modulo_32 => "(fun x -> let r = mod_float x 32. in if x < 0. then r +. 32. else r)".
Extract Constant JsNumber.int32_bitwise_not => "fun x -> Int32.to_float (Int32.lognot (Int32.of_float x))".
Extract Constant JsNumber.int32_bitwise_and => "fun x y -> Int32.to_float (Int32.logand (Int32.of_float x) (Int32.of_float y))".
Extract Constant JsNumber.int32_bitwise_or => "fun x y -> Int32.to_float (Int32.logor (Int32.of_float x) (Int32.of_float y))".
Extract Constant JsNumber.int32_bitwise_xor => "fun x y -> Int32.to_float (Int32.logxor (Int32.of_float x) (Int32.of_float y))".
Extract Constant JsNumber.int32_left_shift => "(fun x y -> Int32.to_float (Int32.shift_left (Int32.of_float x) (int_of_float y)))".
Extract Constant JsNumber.int32_right_shift => "(fun x y -> Int32.to_float (Int32.shift_right (Int32.of_float x) (int_of_float y)))".
Extract Constant JsNumber.uint32_right_shift =>
"(fun x y ->
let i31 = 2. ** 31. in
let i32 = 2. ** 32. in
let newx = if x >= i31 then x -. i32 else x in
let r = Int32.to_float (Int32.shift_right_logical (Int32.of_float newx) (int_of_float y)) in
if r < 0. then r +. i32 else r)".
Extract Constant int_of_char => "(fun c -> float_of_int (int_of_char c))".
Extract Constant ascii_comparable => "(=)".
Extract Constant lt_int_decidable => "(<)".
Extract Constant le_int_decidable => "(<=)".
Extract Constant ge_nat_decidable => "(>=)".
(* TODO ARTHUR: This TLC lemma does not extract to something computable... whereas it should! *)
Extract Constant prop_eq_decidable => "(=)".
Extract Constant env_loc_global_env_record => "0".
(* The following functions pattern-match on floats and must thus be inlined away. *)
Extraction Inline Fappli_IEEE.Bplus Fappli_IEEE.binary_normalize Fappli_IEEE_bits.b64_plus.
Extraction Inline Fappli_IEEE.Bmult Fappli_IEEE.Bmult_FF Fappli_IEEE_bits.b64_mult.
Extraction Inline Fappli_IEEE.Bdiv Fappli_IEEE_bits.b64_div.
(* New options for the interpreter to work in Coq 8.4 *)
Set Extraction AccessOpaque.
(* These parameters are implementation-dependent according to the spec.
I've chosen some very simple values, but other choices would work just as well. *)
Extract Constant object_prealloc_global_proto => "(Coq_value_prim Coq_prim_null)".
Extract Constant object_prealloc_global_class => "(
let rec aux s = function
| 0 -> []
| n -> let n' = n - 1 in
s.[n'] :: aux s n'
in let aux2 s =
List.rev (aux s (String.length s))
in aux2 ""GlobalClass"")".
(* Parsing *)
Extract Constant parse_pickable => "(fun s strict ->
let str = String.concat """" (List.map (String.make 1) s) in
try
let parserExp = Parser_main.exp_from_string ~force_strict:strict str in
Some (JsSyntaxInfos.add_infos_prog strict
(Translate_syntax.exp_to_prog parserExp))
with
(* | Translate_syntax.CoqSyntaxDoesNotSupport _ -> assert false (* Temporary *) *)
| Parser.ParserFailure _
| Parser.InvalidArgument ->
prerr_string (""Warning: Parser error on eval. Input string: \"""" ^ str ^ ""\""\n"");
None
)".
(* Debugging *)
Extract Inlined Constant not_yet_implemented_because => "(fun s ->
print_endline (__LOC__ ^ "": Not implemented because: "" ^ Prheap.string_of_char_list s) ;
Coq_result_not_yet_implemented)".
Extract Inlined Constant impossible_because => "(fun s ->
print_endline (__LOC__ ^ "": Stuck because: "" ^ Prheap.string_of_char_list s) ;
Coq_result_impossible)".
Extract Inlined Constant impossible_with_heap_because => "(fun s message ->
print_endline (__LOC__ ^ "": Stuck!\nState: "" ^ Prheap.prstate true s
^ ""\nMessage:\t"" ^ Prheap.string_of_char_list message) ;
Coq_result_impossible)".
(* Final Extraction *)
Extraction Blacklist string list bool.
Separate Extraction runs run_javascript.
(* -- LATER: extract inequality_test_string in more efficient way*)

1051 samples/Coq/JsNumber.v Normal file
File diff suppressed because it is too large.

1766 samples/Coq/JsPrettyInterm.v Normal file
File diff suppressed because it is too large.

42 samples/Coq/Main.v Normal file

@@ -0,0 +1,42 @@
Require Import FunctionNinjas.All.
Require Import ListString.All.
Require Import Computation.
Import C.Notations.
Definition error (message : LString.t) : C.t :=
do_call! Command.ShowError message in
ret.
Definition main : C.t :=
call! card_is_valid := Command.AskCard in
if card_is_valid then
call! pin := Command.AskPIN in
match pin with
| None => error @@ LString.s "No PIN given."
| Some pin =>
call! pin_is_valid := Command.CheckPIN pin in
if pin_is_valid then
call! ask_amount := Command.AskAmount in
match ask_amount with
| None => error @@ LString.s "No amount given."
| Some amount =>
call! amount_is_valid := Command.CheckAmount amount in
if amount_is_valid then
call! card_is_given := Command.GiveCard in
if card_is_given then
call! amount_is_given := Command.GiveAmount amount in
if amount_is_given then
ret
else
error @@ LString.s "Cannot give you the amount. Please contact your bank."
else
error @@ LString.s "Cannot give you back the card. Please contact your bank."
else
error @@ LString.s "Invalid amount."
end
else
error @@ LString.s "Invalid PIN."
end
else
error @@ LString.s "Invalid card.".


@@ -1,539 +0,0 @@
Require Import Omega Relations Multiset SetoidList.
(** This file is deprecated, use [Permutation.v] instead.
Indeed, this file defines a notion of permutation based on
multisets (there exists a permutation between two lists iff every
element has the same multiplicity in the two lists) which
requires a more complex apparatus (the equipment of the domain
with a decidable equality) than [Permutation] in [Permutation.v].
The relation between the two notions is stated in lemma
[permutation_Permutation].
File [Permutation] concerns Leibniz equality: it shows in particular
that [List.Permutation] and [permutation] are equivalent in this context.
*)
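(* Concrete illustration (added): [1; 2; 2] and [2; 1; 2] are permutations
   of each other in this multiset sense: both give multiplicity 1 to the
   element 1 and multiplicity 2 to the element 2. *)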
Set Implicit Arguments.
Local Notation "[ ]" := nil.
Local Notation "[ a ; .. ; b ]" := (a :: .. (b :: []) ..).
Section Permut.
(** * From lists to multisets *)
Variable A : Type.
Variable eqA : relation A.
Hypothesis eqA_equiv : Equivalence eqA.
Hypothesis eqA_dec : forall x y:A, {eqA x y} + {~ eqA x y}.
Let emptyBag := EmptyBag A.
Let singletonBag := SingletonBag _ eqA_dec.
(** contents of a list *)
Fixpoint list_contents (l:list A) : multiset A :=
match l with
| [] => emptyBag
| a :: l => munion (singletonBag a) (list_contents l)
end.
Lemma list_contents_app :
forall l m:list A,
meq (list_contents (l ++ m)) (munion (list_contents l) (list_contents m)).
Proof.
simple induction l; simpl; auto with datatypes.
intros.
apply meq_trans with
(munion (singletonBag a) (munion (list_contents l0) (list_contents m)));
auto with datatypes.
Qed.
(** * [permutation]: definition and basic properties *)
Definition permutation (l m:list A) := meq (list_contents l) (list_contents m).
Lemma permut_refl : forall l:list A, permutation l l.
Proof.
unfold permutation; auto with datatypes.
Qed.
Lemma permut_sym :
forall l1 l2 : list A, permutation l1 l2 -> permutation l2 l1.
Proof.
unfold permutation, meq; intros; symmetry; trivial.
Qed.
Lemma permut_trans :
forall l m n:list A, permutation l m -> permutation m n -> permutation l n.
Proof.
unfold permutation; intros.
apply meq_trans with (list_contents m); auto with datatypes.
Qed.
Lemma permut_cons_eq :
forall l m:list A,
permutation l m -> forall a a', eqA a a' -> permutation (a :: l) (a' :: m).
Proof.
unfold permutation; simpl; intros.
apply meq_trans with (munion (singletonBag a') (list_contents l)).
apply meq_left, meq_singleton; auto.
auto with datatypes.
Qed.
Lemma permut_cons :
forall l m:list A,
permutation l m -> forall a:A, permutation (a :: l) (a :: m).
Proof.
unfold permutation; simpl; auto with datatypes.
Qed.
Lemma permut_app :
forall l l' m m':list A,
permutation l l' -> permutation m m' -> permutation (l ++ m) (l' ++ m').
Proof.
unfold permutation; intros.
apply meq_trans with (munion (list_contents l) (list_contents m));
auto using permut_cons, list_contents_app with datatypes.
apply meq_trans with (munion (list_contents l') (list_contents m'));
auto using permut_cons, list_contents_app with datatypes.
apply meq_trans with (munion (list_contents l') (list_contents m));
auto using permut_cons, list_contents_app with datatypes.
Qed.
Lemma permut_add_inside_eq :
forall a a' l1 l2 l3 l4, eqA a a' ->
permutation (l1 ++ l2) (l3 ++ l4) ->
permutation (l1 ++ a :: l2) (l3 ++ a' :: l4).
Proof.
unfold permutation, meq in *; intros.
specialize H0 with a0.
repeat rewrite list_contents_app in *; simpl in *.
destruct (eqA_dec a a0) as [Ha|Ha]; rewrite H in Ha;
decide (eqA_dec a' a0) with Ha; simpl; auto with arith.
do 2 rewrite <- plus_n_Sm; f_equal; auto.
Qed.
Lemma permut_add_inside :
forall a l1 l2 l3 l4,
permutation (l1 ++ l2) (l3 ++ l4) ->
permutation (l1 ++ a :: l2) (l3 ++ a :: l4).
Proof.
unfold permutation, meq in *; intros.
generalize (H a0); clear H.
do 4 rewrite list_contents_app.
simpl.
destruct (eqA_dec a a0); simpl; auto with arith.
do 2 rewrite <- plus_n_Sm; f_equal; auto.
Qed.
Lemma permut_add_cons_inside_eq :
forall a a' l l1 l2, eqA a a' ->
permutation l (l1 ++ l2) ->
permutation (a :: l) (l1 ++ a' :: l2).
Proof.
intros;
replace (a :: l) with ([] ++ a :: l); trivial;
apply permut_add_inside_eq; trivial.
Qed.
Lemma permut_add_cons_inside :
forall a l l1 l2,
permutation l (l1 ++ l2) ->
permutation (a :: l) (l1 ++ a :: l2).
Proof.
intros;
replace (a :: l) with ([] ++ a :: l); trivial;
apply permut_add_inside; trivial.
Qed.
Lemma permut_middle :
forall (l m:list A) (a:A), permutation (a :: l ++ m) (l ++ a :: m).
Proof.
intros; apply permut_add_cons_inside; auto using permut_sym, permut_refl.
Qed.
Lemma permut_sym_app :
forall l1 l2, permutation (l1 ++ l2) (l2 ++ l1).
Proof.
intros l1 l2;
unfold permutation, meq;
intro a; do 2 rewrite list_contents_app; simpl;
auto with arith.
Qed.
Lemma permut_rev :
forall l, permutation l (rev l).
Proof.
induction l.
simpl; trivial using permut_refl.
simpl.
apply permut_add_cons_inside.
rewrite <- app_nil_end. trivial.
Qed.
(** * Some inversion results. *)
Lemma permut_conv_inv :
forall e l1 l2, permutation (e :: l1) (e :: l2) -> permutation l1 l2.
Proof.
intros e l1 l2; unfold permutation, meq; simpl; intros H a;
generalize (H a); apply plus_reg_l.
Qed.
Lemma permut_app_inv1 :
forall l l1 l2, permutation (l1 ++ l) (l2 ++ l) -> permutation l1 l2.
Proof.
intros l l1 l2; unfold permutation, meq; simpl;
intros H a; generalize (H a); clear H.
do 2 rewrite list_contents_app.
simpl.
intros; apply plus_reg_l with (multiplicity (list_contents l) a).
rewrite plus_comm; rewrite H; rewrite plus_comm.
trivial.
Qed.
(** we can use [multiplicity] to define [InA] and [NoDupA]. *)
Fact if_eqA_then : forall a a' (B:Type)(b b':B),
eqA a a' -> (if eqA_dec a a' then b else b') = b.
Proof.
intros. destruct eqA_dec as [_|NEQ]; auto.
contradict NEQ; auto.
Qed.
Lemma permut_app_inv2 :
forall l l1 l2, permutation (l ++ l1) (l ++ l2) -> permutation l1 l2.
Proof.
intros l l1 l2; unfold permutation, meq; simpl;
intros H a; generalize (H a); clear H.
do 2 rewrite list_contents_app.
simpl.
intros; apply plus_reg_l with (multiplicity (list_contents l) a).
trivial.
Qed.
Lemma permut_remove_hd_eq :
forall l l1 l2 a b, eqA a b ->
permutation (a :: l) (l1 ++ b :: l2) -> permutation l (l1 ++ l2).
Proof.
unfold permutation, meq; simpl; intros l l1 l2 a b Heq H a0.
specialize H with a0.
rewrite list_contents_app in *; simpl in *.
apply plus_reg_l with (if eqA_dec a a0 then 1 else 0).
rewrite H; clear H.
symmetry; rewrite plus_comm, <- ! plus_assoc; f_equal.
rewrite plus_comm.
destruct (eqA_dec a a0) as [Ha|Ha]; rewrite Heq in Ha;
decide (eqA_dec b a0) with Ha; reflexivity.
Qed.
Lemma permut_remove_hd :
forall l l1 l2 a,
permutation (a :: l) (l1 ++ a :: l2) -> permutation l (l1 ++ l2).
Proof.
eauto using permut_remove_hd_eq, Equivalence_Reflexive.
Qed.
Fact if_eqA_else : forall a a' (B:Type)(b b':B),
~eqA a a' -> (if eqA_dec a a' then b else b') = b'.
Proof.
intros. decide (eqA_dec a a') with H; auto.
Qed.
Fact if_eqA_refl : forall a (B:Type)(b b':B),
(if eqA_dec a a then b else b') = b.
Proof.
intros; apply (decide_left (eqA_dec a a)); auto with *.
Qed.
(** PL: Unusable in a [rewrite] without a prior [change]. *)
Global Instance if_eqA (B:Type)(b b':B) :
Proper (eqA==>eqA==>@eq _) (fun x y => if eqA_dec x y then b else b').
Proof.
intros x x' Hxx' y y' Hyy'.
intros; destruct (eqA_dec x y) as [H|H];
destruct (eqA_dec x' y') as [H'|H']; auto.
contradict H'; transitivity x; auto with *; transitivity y; auto with *.
contradict H; transitivity x'; auto with *; transitivity y'; auto with *.
Qed.
Fact if_eqA_rewrite_l : forall a1 a1' a2 (B:Type)(b b':B),
eqA a1 a1' -> (if eqA_dec a1 a2 then b else b') =
(if eqA_dec a1' a2 then b else b').
Proof.
intros; destruct (eqA_dec a1 a2) as [A1|A1];
destruct (eqA_dec a1' a2) as [A1'|A1']; auto.
contradict A1'; transitivity a1; eauto with *.
contradict A1; transitivity a1'; eauto with *.
Qed.
Fact if_eqA_rewrite_r : forall a1 a2 a2' (B:Type)(b b':B),
eqA a2 a2' -> (if eqA_dec a1 a2 then b else b') =
(if eqA_dec a1 a2' then b else b').
Proof.
intros; destruct (eqA_dec a1 a2) as [A2|A2];
destruct (eqA_dec a1 a2') as [A2'|A2']; auto.
contradict A2'; transitivity a2; eauto with *.
contradict A2; transitivity a2'; eauto with *.
Qed.
Global Instance multiplicity_eqA (l:list A) :
Proper (eqA==>@eq _) (multiplicity (list_contents l)).
Proof.
intros x x' Hxx'.
induction l as [|y l Hl]; simpl; auto.
rewrite (@if_eqA_rewrite_r y x x'); auto.
Qed.
Lemma multiplicity_InA :
forall l a, InA eqA a l <-> 0 < multiplicity (list_contents l) a.
Proof.
induction l.
simpl.
split; inversion 1.
simpl.
intros a'; split; intros H. inversion_clear H.
apply (decide_left (eqA_dec a a')); auto with *.
destruct (eqA_dec a a'); auto with *. simpl; rewrite <- IHl; auto.
destruct (eqA_dec a a'); auto with *. right. rewrite IHl; auto.
Qed.
Lemma multiplicity_InA_O :
forall l a, ~ InA eqA a l -> multiplicity (list_contents l) a = 0.
Proof.
intros l a; rewrite multiplicity_InA;
destruct (multiplicity (list_contents l) a); auto with arith.
destruct 1; auto with arith.
Qed.
Lemma multiplicity_InA_S :
forall l a, InA eqA a l -> multiplicity (list_contents l) a >= 1.
Proof.
intros l a; rewrite multiplicity_InA; auto with arith.
Qed.
Lemma multiplicity_NoDupA : forall l,
NoDupA eqA l <-> (forall a, multiplicity (list_contents l) a <= 1).
Proof.
induction l.
simpl.
split; auto with arith.
split; simpl.
inversion_clear 1.
rewrite IHl in H1.
intros; destruct (eqA_dec a a0) as [EQ|NEQ]; simpl; auto with *.
rewrite <- EQ.
rewrite multiplicity_InA_O; auto.
intros; constructor.
rewrite multiplicity_InA.
specialize (H a).
rewrite if_eqA_refl in H.
clear IHl; omega.
rewrite IHl; intros.
specialize (H a0). omega.
Qed.
(** Permutation is compatible with InA. *)
Lemma permut_InA_InA :
forall l1 l2 e, permutation l1 l2 -> InA eqA e l1 -> InA eqA e l2.
Proof.
intros l1 l2 e.
do 2 rewrite multiplicity_InA.
unfold permutation, meq.
intros H;rewrite H; auto.
Qed.
Lemma permut_cons_InA :
forall l1 l2 e, permutation (e :: l1) l2 -> InA eqA e l2.
Proof.
intros; apply (permut_InA_InA (e:=e) H); auto with *.
Qed.
(** Permutation of an empty list. *)
Lemma permut_nil :
forall l, permutation l [] -> l = [].
Proof.
intro l; destruct l as [ | e l ]; trivial.
assert (InA eqA e (e::l)) by (auto with *).
intro Abs; generalize (permut_InA_InA Abs H).
inversion 1.
Qed.
(** Permutation for short lists. *)
Lemma permut_length_1:
forall a b, permutation [a] [b] -> eqA a b.
Proof.
intros a b; unfold permutation, meq.
intro P; specialize (P b); simpl in *.
rewrite if_eqA_refl in *.
destruct (eqA_dec a b); simpl; auto; discriminate.
Qed.
Lemma permut_length_2 :
forall a1 b1 a2 b2, permutation [a1; b1] [a2; b2] ->
(eqA a1 a2) /\ (eqA b1 b2) \/ (eqA a1 b2) /\ (eqA a2 b1).
Proof.
intros a1 b1 a2 b2 P.
assert (H:=permut_cons_InA P).
inversion_clear H.
left; split; auto.
apply permut_length_1.
red; red; intros.
specialize (P a). simpl in *.
rewrite (@if_eqA_rewrite_l a1 a2 a) in P by auto. omega.
right.
inversion_clear H0; [|inversion H].
split; auto.
apply permut_length_1.
red; red; intros.
specialize (P a); simpl in *.
rewrite (@if_eqA_rewrite_l a1 b2 a) in P by auto. omega.
Qed.
(** Permutation is compatible with length. *)
Lemma permut_length :
forall l1 l2, permutation l1 l2 -> length l1 = length l2.
Proof.
induction l1; intros l2 H.
rewrite (permut_nil (permut_sym H)); auto.
assert (H0:=permut_cons_InA H).
destruct (InA_split H0) as (h2,(b,(t2,(H1,H2)))).
subst l2.
rewrite app_length.
simpl; rewrite <- plus_n_Sm; f_equal.
rewrite <- app_length.
apply IHl1.
apply permut_remove_hd with b.
apply permut_trans with (a::l1); auto.
revert H1; unfold permutation, meq; simpl.
intros; f_equal; auto.
rewrite (@if_eqA_rewrite_l a b a0); auto.
Qed.
Lemma NoDupA_equivlistA_permut :
forall l l', NoDupA eqA l -> NoDupA eqA l' ->
equivlistA eqA l l' -> permutation l l'.
Proof.
intros.
red; unfold meq; intros.
rewrite multiplicity_NoDupA in H, H0.
generalize (H a) (H0 a) (H1 a); clear H H0 H1.
do 2 rewrite multiplicity_InA.
destruct 3; omega.
Qed.
End Permut.
Section Permut_map.
Variables A B : Type.
Variable eqA : relation A.
Hypothesis eqA_dec : forall x y:A, {eqA x y} + {~ eqA x y}.
Hypothesis eqA_equiv : Equivalence eqA.
Variable eqB : B->B->Prop.
Hypothesis eqB_dec : forall x y:B, { eqB x y }+{ ~eqB x y }.
Hypothesis eqB_trans : Transitive eqB.
(** Permutation is compatible with map. *)
Lemma permut_map :
forall f,
(Proper (eqA==>eqB) f) ->
forall l1 l2, permutation _ eqA_dec l1 l2 ->
permutation _ eqB_dec (map f l1) (map f l2).
Proof.
intros f; induction l1.
intros l2 P; rewrite (permut_nil eqA_equiv (permut_sym P)); apply permut_refl.
intros l2 P.
simpl.
assert (H0:=permut_cons_InA eqA_equiv P).
destruct (InA_split H0) as (h2,(b,(t2,(H1,H2)))).
subst l2.
rewrite map_app.
simpl.
apply permut_trans with (f b :: map f l1).
revert H1; unfold permutation, meq; simpl.
intros; f_equal; auto.
destruct (eqB_dec (f b) a0) as [H2|H2];
destruct (eqB_dec (f a) a0) as [H3|H3]; auto.
destruct H3; transitivity (f b); auto with *.
destruct H2; transitivity (f a); auto with *.
apply permut_add_cons_inside.
rewrite <- map_app.
apply IHl1; auto.
apply permut_remove_hd with b; trivial.
apply permut_trans with (a::l1); auto.
revert H1; unfold permutation, meq; simpl.
intros; f_equal; auto.
rewrite (@if_eqA_rewrite_l _ _ eqA_equiv eqA_dec a b a0); auto.
Qed.
End Permut_map.
Require Import Permutation.
Section Permut_permut.
Variable A : Type.
Variable eqA : relation A.
Hypothesis eqA_dec : forall x y:A, {eqA x y} + {~ eqA x y}.
Hypothesis eqA_equiv : Equivalence eqA.
Lemma Permutation_impl_permutation : forall l l',
Permutation l l' -> permutation _ eqA_dec l l'.
Proof.
induction 1.
apply permut_refl.
apply permut_cons; auto using Equivalence_Reflexive.
change (x :: y :: l) with ([x] ++ y :: l);
apply permut_add_cons_inside; simpl;
apply permut_cons_eq; auto using Equivalence_Reflexive, permut_refl.
apply permut_trans with l'; trivial.
Qed.
Lemma permut_eqA : forall l l', Forall2 eqA l l' -> permutation _ eqA_dec l l'.
Proof.
induction 1.
apply permut_refl.
apply permut_cons_eq; trivial.
Qed.
Lemma permutation_Permutation : forall l l',
permutation _ eqA_dec l l' <->
exists l'', Permutation l l'' /\ Forall2 eqA l'' l'.
Proof.
split; intro H.
(* -> *)
induction l in l', H |- *.
exists []; apply permut_sym, permut_nil in H as ->; auto using Forall2.
pose proof H as H'.
apply permut_cons_InA, InA_split in H
as (l1 & y & l2 & Heq & ->); trivial.
apply permut_remove_hd_eq, IHl in H'
as (l'' & IHP & IHA); clear IHl; trivial.
apply Forall2_app_inv_r in IHA as (l1'' & l2'' & Hl1 & Hl2 & ->).
exists (l1'' ++ a :: l2''); split.
apply Permutation_cons_app; trivial.
apply Forall2_app, Forall2_cons; trivial.
(* <- *)
destruct H as (l'' & H & Heq).
apply permut_trans with l''.
apply Permutation_impl_permutation; trivial.
apply permut_eqA; trivial.
Qed.
End Permut_permut.
(* begin hide *)
(** For compatibilty *)
Notation permut_right := permut_cons (only parsing).
Notation permut_tran := permut_trans (only parsing).
(* end hide *)


@@ -1,632 +0,0 @@
(* Adapted in May 2006 by Jean-Marc Notin from initial contents by
Laurent Thery (Huffmann contribution, October 2003) *)
Require Import List Setoid Compare_dec Morphisms.
Import ListNotations. (* For notations [] and [a;b;c] *)
Set Implicit Arguments.
Section Permutation.
Variable A:Type.
Inductive Permutation : list A -> list A -> Prop :=
| perm_nil: Permutation [] []
| perm_skip x l l' : Permutation l l' -> Permutation (x::l) (x::l')
| perm_swap x y l : Permutation (y::x::l) (x::y::l)
| perm_trans l l' l'' :
Permutation l l' -> Permutation l' l'' -> Permutation l l''.
Local Hint Constructors Permutation.
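(* Illustration (added): [perm_swap] exchanges the two head elements, so
   [Permutation (1 :: 2 :: l) (2 :: 1 :: l)] holds for any [l]; arbitrary
   rearrangements are obtained by chaining [perm_skip] and [perm_trans]. *)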
(** Some facts about [Permutation] *)
Theorem Permutation_nil : forall (l : list A), Permutation [] l -> l = [].
Proof.
intros l HF.
remember (@nil A) as m in HF.
induction HF; discriminate || auto.
Qed.
Theorem Permutation_nil_cons : forall (l : list A) (x : A),
~ Permutation nil (x::l).
Proof.
intros l x HF.
apply Permutation_nil in HF; discriminate.
Qed.
(** Permutation over lists is an equivalence relation *)
Theorem Permutation_refl : forall l : list A, Permutation l l.
Proof.
induction l; constructor. exact IHl.
Qed.
Theorem Permutation_sym : forall l l' : list A,
Permutation l l' -> Permutation l' l.
Proof.
intros l l' Hperm; induction Hperm; auto.
apply perm_trans with (l':=l'); assumption.
Qed.
Theorem Permutation_trans : forall l l' l'' : list A,
Permutation l l' -> Permutation l' l'' -> Permutation l l''.
Proof.
exact perm_trans.
Qed.
End Permutation.
Hint Resolve Permutation_refl perm_nil perm_skip.
(* These hints do not reduce the size of the problem to solve and they
must be used with care to avoid combinatorial explosions *)
Local Hint Resolve perm_swap perm_trans.
Local Hint Resolve Permutation_sym Permutation_trans.
(* This provides reflexivity, symmetry and transitivity and rewriting
on morphims to come *)
Instance Permutation_Equivalence A : Equivalence (@Permutation A) | 10 := {
Equivalence_Reflexive := @Permutation_refl A ;
Equivalence_Symmetric := @Permutation_sym A ;
Equivalence_Transitive := @Permutation_trans A }.
Instance Permutation_cons A :
Proper (Logic.eq ==> @Permutation A ==> @Permutation A) (@cons A) | 10.
Proof.
repeat intro; subst; auto using perm_skip.
Qed.
Section Permutation_properties.
Variable A:Type.
Implicit Types a b : A.
Implicit Types l m : list A.
(** Compatibility with others operations on lists *)
Theorem Permutation_in : forall (l l' : list A) (x : A),
Permutation l l' -> In x l -> In x l'.
Proof.
intros l l' x Hperm; induction Hperm; simpl; tauto.
Qed.
Global Instance Permutation_in' :
Proper (Logic.eq ==> @Permutation A ==> iff) (@In A) | 10.
Proof.
repeat red; intros; subst; eauto using Permutation_in.
Qed.
Lemma Permutation_app_tail : forall (l l' tl : list A),
Permutation l l' -> Permutation (l++tl) (l'++tl).
Proof.
intros l l' tl Hperm; induction Hperm as [|x l l'|x y l|l l' l'']; simpl; auto.
eapply Permutation_trans with (l':=l'++tl); trivial.
Qed.
Lemma Permutation_app_head : forall (l tl tl' : list A),
Permutation tl tl' -> Permutation (l++tl) (l++tl').
Proof.
intros l tl tl' Hperm; induction l;
[trivial | repeat rewrite <- app_comm_cons; constructor; assumption].
Qed.
Theorem Permutation_app : forall (l m l' m' : list A),
Permutation l l' -> Permutation m m' -> Permutation (l++m) (l'++m').
Proof.
intros l m l' m' Hpermll' Hpermmm';
induction Hpermll' as [|x l l'|x y l|l l' l''];
repeat rewrite <- app_comm_cons; auto.
apply Permutation_trans with (l' := (x :: y :: l ++ m));
[idtac | repeat rewrite app_comm_cons; apply Permutation_app_head]; trivial.
apply Permutation_trans with (l' := (l' ++ m')); try assumption.
apply Permutation_app_tail; assumption.
Qed.
Global Instance Permutation_app' :
Proper (@Permutation A ==> @Permutation A ==> @Permutation A) (@app A) | 10.
Proof.
repeat intro; now apply Permutation_app.
Qed.
Lemma Permutation_add_inside : forall a (l l' tl tl' : list A),
Permutation l l' -> Permutation tl tl' ->
Permutation (l ++ a :: tl) (l' ++ a :: tl').
Proof.
intros; apply Permutation_app; auto.
Qed.
Lemma Permutation_cons_append : forall (l : list A) x,
Permutation (x :: l) (l ++ x :: nil).
Proof. induction l; intros; auto. simpl. rewrite <- IHl; auto. Qed.
Local Hint Resolve Permutation_cons_append.
Theorem Permutation_app_comm : forall (l l' : list A),
Permutation (l ++ l') (l' ++ l).
Proof.
induction l as [|x l]; simpl; intro l'.
rewrite app_nil_r; trivial. rewrite IHl.
rewrite app_comm_cons, Permutation_cons_append.
now rewrite <- app_assoc.
Qed.
Local Hint Resolve Permutation_app_comm.
Theorem Permutation_cons_app : forall (l l1 l2:list A) a,
Permutation l (l1 ++ l2) -> Permutation (a :: l) (l1 ++ a :: l2).
Proof.
intros l l1 l2 a H. rewrite H.
rewrite app_comm_cons, Permutation_cons_append.
now rewrite <- app_assoc.
Qed.
Local Hint Resolve Permutation_cons_app.
Theorem Permutation_middle : forall (l1 l2:list A) a,
Permutation (a :: l1 ++ l2) (l1 ++ a :: l2).
Proof.
auto.
Qed.
Local Hint Resolve Permutation_middle.
Theorem Permutation_rev : forall (l : list A), Permutation l (rev l).
Proof.
induction l as [| x l]; simpl; trivial. now rewrite IHl at 1.
Qed.
Global Instance Permutation_rev' :
Proper (@Permutation A ==> @Permutation A) (@rev A) | 10.
Proof.
repeat intro; now rewrite <- 2 Permutation_rev.
Qed.
Theorem Permutation_length : forall (l l' : list A),
Permutation l l' -> length l = length l'.
Proof.
intros l l' Hperm; induction Hperm; simpl; auto. now transitivity (length l').
Qed.
Global Instance Permutation_length' :
Proper (@Permutation A ==> Logic.eq) (@length A) | 10.
Proof.
exact Permutation_length.
Qed.
Theorem Permutation_ind_bis :
forall P : list A -> list A -> Prop,
P [] [] ->
(forall x l l', Permutation l l' -> P l l' -> P (x :: l) (x :: l')) ->
(forall x y l l', Permutation l l' -> P l l' -> P (y :: x :: l) (x :: y :: l')) ->
(forall l l' l'', Permutation l l' -> P l l' -> Permutation l' l'' -> P l' l'' -> P l l'') ->
forall l l', Permutation l l' -> P l l'.
Proof.
intros P Hnil Hskip Hswap Htrans.
induction 1; auto.
apply Htrans with (x::y::l); auto.
apply Hswap; auto.
induction l; auto.
apply Hskip; auto.
apply Hskip; auto.
induction l; auto.
eauto.
Qed.
Ltac break_list l x l' H :=
destruct l as [|x l']; simpl in *;
injection H; intros; subst; clear H.
Theorem Permutation_nil_app_cons : forall (l l' : list A) (x : A),
~ Permutation nil (l++x::l').
Proof.
intros l l' x HF.
apply Permutation_nil in HF. destruct l; discriminate.
Qed.
Theorem Permutation_app_inv : forall (l1 l2 l3 l4:list A) a,
Permutation (l1++a::l2) (l3++a::l4) -> Permutation (l1++l2) (l3 ++ l4).
Proof.
intros l1 l2 l3 l4 a; revert l1 l2 l3 l4.
set (P l l' :=
forall l1 l2 l3 l4, l=l1++a::l2 -> l'=l3++a::l4 ->
Permutation (l1++l2) (l3++l4)).
cut (forall l l', Permutation l l' -> P l l').
intros H; intros; eapply H; eauto.
apply (Permutation_ind_bis P); unfold P; clear P.
- (* nil *)
intros; now destruct l1.
- (* skip *)
intros x l l' H IH; intros.
break_list l1 b l1' H0; break_list l3 c l3' H1.
auto.
now rewrite H.
now rewrite <- H.
now rewrite (IH _ _ _ _ eq_refl eq_refl).
- (* swap *)
intros x y l l' Hp IH; intros.
break_list l1 b l1' H; break_list l3 c l3' H0.
auto.
break_list l3' b l3'' H.
auto.
constructor. now rewrite Permutation_middle.
break_list l1' c l1'' H1.
auto.
constructor. now rewrite Permutation_middle.
break_list l3' d l3'' H; break_list l1' e l1'' H1.
auto.
rewrite perm_swap. constructor. now rewrite Permutation_middle.
rewrite perm_swap. constructor. now rewrite Permutation_middle.
now rewrite perm_swap, (IH _ _ _ _ eq_refl eq_refl).
- (*trans*)
intros.
destruct (In_split a l') as (l'1,(l'2,H6)).
rewrite <- H.
subst l.
apply in_or_app; right; red; auto.
apply perm_trans with (l'1++l'2).
apply (H0 _ _ _ _ H3 H6).
apply (H2 _ _ _ _ H6 H4).
Qed.
Theorem Permutation_cons_inv l l' a :
Permutation (a::l) (a::l') -> Permutation l l'.
Proof.
intro H; exact (Permutation_app_inv [] l [] l' a H).
Qed.
Theorem Permutation_cons_app_inv l l1 l2 a :
Permutation (a :: l) (l1 ++ a :: l2) -> Permutation l (l1 ++ l2).
Proof.
intro H; exact (Permutation_app_inv [] l l1 l2 a H).
Qed.
Theorem Permutation_app_inv_l : forall l l1 l2,
Permutation (l ++ l1) (l ++ l2) -> Permutation l1 l2.
Proof.
induction l; simpl; auto.
intros.
apply IHl.
apply Permutation_cons_inv with a; auto.
Qed.
Theorem Permutation_app_inv_r : forall l l1 l2,
Permutation (l1 ++ l) (l2 ++ l) -> Permutation l1 l2.
Proof.
induction l.
intros l1 l2; do 2 rewrite app_nil_r; auto.
intros.
apply IHl.
apply Permutation_app_inv with a; auto.
Qed.
Lemma Permutation_length_1_inv: forall a l, Permutation [a] l -> l = [a].
Proof.
intros a l H; remember [a] as m in H.
induction H; try (injection Heqm as -> ->; clear Heqm);
discriminate || auto.
apply Permutation_nil in H as ->; trivial.
Qed.
Lemma Permutation_length_1: forall a b, Permutation [a] [b] -> a = b.
Proof.
intros a b H.
apply Permutation_length_1_inv in H; injection H as ->; trivial.
Qed.
Lemma Permutation_length_2_inv :
forall a1 a2 l, Permutation [a1;a2] l -> l = [a1;a2] \/ l = [a2;a1].
Proof.
intros a1 a2 l H; remember [a1;a2] as m in H.
revert a1 a2 Heqm.
induction H; intros; try (injection Heqm; intros; subst; clear Heqm);
discriminate || (try tauto).
apply Permutation_length_1_inv in H as ->; left; auto.
apply IHPermutation1 in Heqm as [H1|H1]; apply IHPermutation2 in H1 as [];
auto.
Qed.
Lemma Permutation_length_2 :
forall a1 a2 b1 b2, Permutation [a1;a2] [b1;b2] ->
a1 = b1 /\ a2 = b2 \/ a1 = b2 /\ a2 = b1.
Proof.
intros a1 b1 a2 b2 H.
apply Permutation_length_2_inv in H as [H|H]; injection H as -> ->; auto.
Qed.
Let in_middle l l1 l2 (a:A) : l = l1 ++ a :: l2 ->
forall x, In x l <-> a = x \/ In x (l1++l2).
Proof.
intros; subst; rewrite !in_app_iff; simpl. tauto.
Qed.
Lemma NoDup_cardinal_incl (l l' : list A) : NoDup l -> NoDup l' ->
length l = length l' -> incl l l' -> incl l' l.
Proof.
intros N. revert l'. induction N as [|a l Hal Hl IH].
- destruct l'; now auto.
- intros l' Hl' E H x Hx.
assert (Ha : In a l') by (apply H; simpl; auto).
destruct (in_split _ _ Ha) as (l1 & l2 & H12). clear Ha.
rewrite in_middle in Hx; eauto.
destruct Hx as [Hx|Hx]; [left|right]; auto.
apply (IH (l1++l2)); auto.
* apply NoDup_remove_1 with a; rewrite <- H12; auto.
* apply eq_add_S.
simpl in E; rewrite E, H12, !app_length; simpl; auto with arith.
* intros y Hy. assert (Hy' : In y l') by (apply H; simpl; auto).
rewrite in_middle in Hy'; eauto.
destruct Hy'; auto. subst y; intuition.
Qed.
Lemma NoDup_Permutation l l' : NoDup l -> NoDup l' ->
(forall x:A, In x l <-> In x l') -> Permutation l l'.
Proof.
intros N. revert l'. induction N as [|a l Hal Hl IH].
- destruct l'; simpl; auto.
intros Hl' H. exfalso. rewrite (H a); auto.
- intros l' Hl' H.
assert (Ha : In a l') by (apply H; simpl; auto).
destruct (In_split _ _ Ha) as (l1 & l2 & H12).
rewrite H12.
apply Permutation_cons_app.
apply IH; auto.
* apply NoDup_remove_1 with a; rewrite <- H12; auto.
* intro x. split; intros Hx.
+ assert (Hx' : In x l') by (apply H; simpl; auto).
rewrite in_middle in Hx'; eauto.
destruct Hx'; auto. subst; intuition.
+ assert (Hx' : In x l') by (rewrite (in_middle l1 l2 a); eauto).
rewrite <- H in Hx'. destruct Hx'; auto.
subst. destruct (NoDup_remove_2 _ _ _ Hl' Hx).
Qed.
Lemma NoDup_Permutation_bis l l' : NoDup l -> NoDup l' ->
length l = length l' -> incl l l' -> Permutation l l'.
Proof.
intros. apply NoDup_Permutation; auto.
split; auto. apply NoDup_cardinal_incl; auto.
Qed.
Lemma Permutation_NoDup l l' : Permutation l l' -> NoDup l -> NoDup l'.
Proof.
induction 1; auto.
* inversion_clear 1; constructor; eauto using Permutation_in.
* inversion_clear 1 as [|? ? H1 H2]. inversion_clear H2; simpl in *.
constructor. simpl; intuition. constructor; intuition.
Qed.
Global Instance Permutation_NoDup' :
Proper (@Permutation A ==> iff) (@NoDup A) | 10.
Proof.
repeat red; eauto using Permutation_NoDup.
Qed.
End Permutation_properties.
Section Permutation_map.
Variable A B : Type.
Variable f : A -> B.
Lemma Permutation_map l l' :
Permutation l l' -> Permutation (map f l) (map f l').
Proof.
induction 1; simpl; eauto.
Qed.
Global Instance Permutation_map' :
Proper (@Permutation A ==> @Permutation B) (map f) | 10.
Proof.
exact Permutation_map.
Qed.
End Permutation_map.
Section Injection.
Definition injective {A B} (f : A->B) :=
forall x y, f x = f y -> x = y.
Lemma injective_map_NoDup {A B} (f:A->B) (l:list A) :
injective f -> NoDup l -> NoDup (map f l).
Proof.
intros Hf. induction 1 as [|x l Hx Hl IH]; simpl; constructor; trivial.
rewrite in_map_iff. intros (y & Hy & Hy'). apply Hf in Hy. now subst.
Qed.
Lemma injective_bounded_surjective n f :
injective f ->
(forall x, x < n -> f x < n) ->
(forall y, y < n -> exists x, x < n /\ f x = y).
Proof.
intros Hf H.
set (l := seq 0 n).
assert (P : incl (map f l) l).
{ intros x. rewrite in_map_iff. intros (y & <- & Hy').
unfold l in *. rewrite in_seq in *. simpl in *.
destruct Hy' as (_,Hy'). auto with arith. }
assert (P' : incl l (map f l)).
{ unfold l.
apply NoDup_cardinal_incl; auto using injective_map_NoDup, seq_NoDup.
now rewrite map_length. }
intros x Hx.
assert (Hx' : In x l) by (unfold l; rewrite in_seq; auto with arith).
apply P' in Hx'.
rewrite in_map_iff in Hx'. destruct Hx' as (y & Hy & Hy').
exists y; split; auto. unfold l in *; rewrite in_seq in Hy'.
destruct Hy'; auto with arith.
Qed.
Lemma nat_bijection_Permutation n f :
injective f -> (forall x, x < n -> f x < n) ->
let l := seq 0 n in Permutation (map f l) l.
Proof.
intros Hf BD.
apply NoDup_Permutation_bis; auto using injective_map_NoDup, seq_NoDup.
* now rewrite map_length.
* intros x. rewrite in_map_iff. intros (y & <- & Hy').
rewrite in_seq in *. simpl in *.
destruct Hy' as (_,Hy'). auto with arith.
Qed.
End Injection.
Section Permutation_alt.
Variable A:Type.
Implicit Type a : A.
Implicit Type l : list A.
(** Alternative characterization of permutation
via [nth_error] and [nth] *)
Let adapt f n :=
let m := f (S n) in if le_lt_dec m (f 0) then m else pred m.
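(* Informal reading (added note): [adapt f n] is [f (S n)] with the value
   [f 0] deleted from the codomain -- results below [f 0] are kept, results
   above it are shifted down by one, as made precise by [adapt_ok] below. *)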
Let adapt_injective f : injective f -> injective (adapt f).
Proof.
unfold adapt. intros Hf x y EQ.
destruct le_lt_dec as [LE|LT]; destruct le_lt_dec as [LE'|LT'].
- now apply eq_add_S, Hf.
- apply Lt.le_lt_or_eq in LE.
destruct LE as [LT|EQ']; [|now apply Hf in EQ'].
unfold lt in LT. rewrite EQ in LT.
rewrite <- (Lt.S_pred _ _ LT') in LT.
elim (Lt.lt_not_le _ _ LT' LT).
- apply Lt.le_lt_or_eq in LE'.
destruct LE' as [LT'|EQ']; [|now apply Hf in EQ'].
unfold lt in LT'. rewrite <- EQ in LT'.
rewrite <- (Lt.S_pred _ _ LT) in LT'.
elim (Lt.lt_not_le _ _ LT LT').
- apply eq_add_S, Hf.
now rewrite (Lt.S_pred _ _ LT), (Lt.S_pred _ _ LT'), EQ.
Qed.
Let adapt_ok a l1 l2 f : injective f -> length l1 = f 0 ->
forall n, nth_error (l1++a::l2) (f (S n)) = nth_error (l1++l2) (adapt f n).
Proof.
unfold adapt. intros Hf E n.
destruct le_lt_dec as [LE|LT].
- apply Lt.le_lt_or_eq in LE.
destruct LE as [LT|EQ]; [|now apply Hf in EQ].
rewrite <- E in LT.
rewrite 2 nth_error_app1; auto.
- rewrite (Lt.S_pred _ _ LT) at 1.
rewrite <- E, (Lt.S_pred _ _ LT) in LT.
rewrite 2 nth_error_app2; auto with arith.
rewrite <- Minus.minus_Sn_m; auto with arith.
Qed.
Lemma Permutation_nth_error l l' :
Permutation l l' <->
(length l = length l' /\
exists f:nat->nat,
injective f /\ forall n, nth_error l' n = nth_error l (f n)).
Proof.
split.
{ intros P.
split; [now apply Permutation_length|].
induction P.
- exists (fun n => n).
split; try red; auto.
- destruct IHP as (f & Hf & Hf').
exists (fun n => match n with O => O | S n => S (f n) end).
split; try red.
* intros [|y] [|z]; simpl; now auto.
* intros [|n]; simpl; auto.
- exists (fun n => match n with 0 => 1 | 1 => 0 | n => n end).
split; try red.
* intros [|[|z]] [|[|t]]; simpl; now auto.
* intros [|[|n]]; simpl; auto.
- destruct IHP1 as (f & Hf & Hf').
destruct IHP2 as (g & Hg & Hg').
exists (fun n => f (g n)).
split; try red.
* auto.
* intros n. rewrite <- Hf'; auto. }
{ revert l. induction l'.
- intros [|l] (E & _); now auto.
- intros l (E & f & Hf & Hf').
simpl in E.
assert (Ha : nth_error l (f 0) = Some a)
by (symmetry; apply (Hf' 0)).
destruct (nth_error_split l (f 0) Ha) as (l1 & l2 & L12 & L1).
rewrite L12. rewrite <- Permutation_middle. constructor.
apply IHl'; split; [|exists (adapt f); split].
* revert E. rewrite L12, !app_length. simpl.
rewrite <- plus_n_Sm. now injection 1.
* now apply adapt_injective.
* intro n. rewrite <- (adapt_ok a), <- L12; auto.
apply (Hf' (S n)). }
Qed.
Lemma Permutation_nth_error_bis l l' :
Permutation l l' <->
exists f:nat->nat,
injective f /\
(forall n, n < length l -> f n < length l) /\
(forall n, nth_error l' n = nth_error l (f n)).
Proof.
rewrite Permutation_nth_error; split.
- intros (E & f & Hf & Hf').
exists f. do 2 (split; trivial).
intros n Hn.
destruct (Lt.le_or_lt (length l) (f n)) as [LE|LT]; trivial.
rewrite <- nth_error_None, <- Hf', nth_error_None, <- E in LE.
elim (Lt.lt_not_le _ _ Hn LE).
- intros (f & Hf & Hf2 & Hf3); split; [|exists f; auto].
assert (H : length l' <= length l') by auto with arith.
rewrite <- nth_error_None, Hf3, nth_error_None in H.
destruct (Lt.le_or_lt (length l) (length l')) as [LE|LT];
[|apply Hf2 in LT; elim (Lt.lt_not_le _ _ LT H)].
apply Lt.le_lt_or_eq in LE. destruct LE as [LT|EQ]; trivial.
rewrite <- nth_error_Some, Hf3, nth_error_Some in LT.
destruct (injective_bounded_surjective Hf Hf2 LT) as (y & Hy & Hy').
apply Hf in Hy'. subst y. elim (Lt.lt_irrefl _ Hy).
Qed.
Lemma Permutation_nth l l' d :
Permutation l l' <->
(let n := length l in
length l' = n /\
exists f:nat->nat,
(forall x, x < n -> f x < n) /\
(forall x y, x < n -> y < n -> f x = f y -> x = y) /\
(forall x, x < n -> nth x l' d = nth (f x) l d)).
Proof.
split.
- intros H.
assert (E := Permutation_length H).
split; auto.
apply Permutation_nth_error_bis in H.
destruct H as (f & Hf & Hf2 & Hf3).
exists f. split; [|split]; auto.
intros n Hn. rewrite <- 2 nth_default_eq. unfold nth_default.
now rewrite Hf3.
- intros (E & f & Hf1 & Hf2 & Hf3).
rewrite Permutation_nth_error.
split; auto.
exists (fun n => if le_lt_dec (length l) n then n else f n).
split.
* intros x y.
destruct le_lt_dec as [LE|LT];
destruct le_lt_dec as [LE'|LT']; auto.
+ apply Hf1 in LT'. intros ->.
elim (Lt.lt_irrefl (f y)). eapply Lt.lt_le_trans; eauto.
+ apply Hf1 in LT. intros <-.
elim (Lt.lt_irrefl (f x)). eapply Lt.lt_le_trans; eauto.
* intros n.
destruct le_lt_dec as [LE|LT].
+ assert (LE' : length l' <= n) by (now rewrite E).
rewrite <- nth_error_None in LE, LE'. congruence.
+ assert (LT' : n < length l') by (now rewrite E).
specialize (Hf3 n LT). rewrite <- 2 nth_default_eq in Hf3.
unfold nth_default in Hf3.
apply Hf1 in LT.
rewrite <- nth_error_Some in LT, LT'.
do 2 destruct nth_error; congruence.
Qed.
End Permutation_alt.
(* begin hide *)
Notation Permutation_app_swap := Permutation_app_comm (only parsing).
(* end hide *)


@@ -1,2 +0,0 @@
Require Export Sorted.
Require Export Mergesort.

samples/Coq/Spec.v Normal file

@@ -0,0 +1,62 @@
(** Specifications. *)
Require Import Coq.Lists.List.
Require Import Coq.Strings.Ascii.
Require Import FunctionNinjas.All.
Require Import ListString.All.
Require Import Computation.
Import ListNotations.
Local Open Scope char.
(** A run is an execution of the program with explicit answers for the
system calls. *)
Module Run.
(** We define a run by induction on the structure of a computation. *)
Inductive t : C.t -> Type :=
| Ret : t C.Ret
| Call : forall (command : Command.t) (answer : Command.answer command)
{handler : Command.answer command -> C.t}, t (handler answer) ->
t (C.Call command handler).
(** The trace of a run. *)
Fixpoint trace {x : C.t} (run : t x)
: list {command : Command.t & Command.answer command} :=
match run with
| Ret => []
| Call command answer _ run => existT _ command answer :: trace run
end.
End Run.
Module Temporal.
Module All.
Inductive t (P : Command.t -> Prop) : C.t -> Prop :=
| Ret : t P C.Ret
| Call : forall (c : Command.t) (h : Command.answer c -> C.t),
P c -> (forall a, t P (h a)) ->
t P (C.Call c h).
End All.
Module One.
Inductive t (P : Command.t -> Prop) : C.t -> Prop :=
| CallThis : forall (c : Command.t) (h : Command.answer c -> C.t),
P c ->
t P (C.Call c h)
| CallOther : forall (c : Command.t) (h : Command.answer c -> C.t),
(forall a, t P (h a)) ->
t P (C.Call c h).
End One.
Module Then.
Inductive t (P1 P2 : Command.t -> Prop) : C.t -> Prop :=
| Ret : t P1 P2 C.Ret
| Call : forall (c : Command.t) (h : Command.answer c -> C.t),
(forall a, t P1 P2 (h a)) ->
t P1 P2 (C.Call c h)
| CallThen : forall (c : Command.t) (h : Command.answer c -> C.t),
P1 c -> (forall a, One.t P2 (h a)) ->
t P1 P2 (C.Call c h).
End Then.
End Temporal.
Module CardBeforeMoney.
End CardBeforeMoney.


@@ -1,419 +0,0 @@
(** Sketch of the proof of {p:nat|p<=n} = {p:nat|p<=m} -> n=m
- preliminary results on the irrelevance of boundedness proofs
- introduce the notion of finite cardinal |A|
- prove that |{p:nat|p<=n}| = n
- prove that |A| = n /\ |A| = m -> n = m if equality is decidable on A
- prove that equality is decidable on A
- conclude
*)
(** * Preliminary results on [nat] and [le] *)
(** Proving axiom K on [nat] *)
Require Import Eqdep_dec.
Require Import Arith.
Theorem eq_rect_eq_nat :
forall (p:nat) (Q:nat->Type) (x:Q p) (h:p=p), x = eq_rect p Q x p h.
Proof.
intros.
apply K_dec_set with (p := h).
apply eq_nat_dec.
reflexivity.
Qed.
(** Proving unicity of proofs of [(n<=m)%nat] *)
Scheme le_ind' := Induction for le Sort Prop.
Theorem le_uniqueness_proof : forall (n m : nat) (p q : n <= m), p = q.
Proof.
induction p using le_ind'; intro q.
replace (le_n n) with
(eq_rect _ (fun n0 => n <= n0) (le_n n) _ (refl_equal n)).
2:reflexivity.
generalize (refl_equal n).
pattern n at 2 4 6 10, q; case q; [intro | intros m l e].
rewrite <- eq_rect_eq_nat; trivial.
contradiction (le_Sn_n m); rewrite <- e; assumption.
replace (le_S n m p) with
(eq_rect _ (fun n0 => n <= n0) (le_S n m p) _ (refl_equal (S m))).
2:reflexivity.
generalize (refl_equal (S m)).
pattern (S m) at 1 3 4 6, q; case q; [intro Heq | intros m0 l HeqS].
contradiction (le_Sn_n m); rewrite Heq; assumption.
injection HeqS; intro Heq; generalize l HeqS.
rewrite <- Heq; intros; rewrite <- eq_rect_eq_nat.
rewrite (IHp l0); reflexivity.
Qed.
(** Proving irrelevance of boundedness proofs while building
elements of interval *)
Lemma dep_pair_intro :
forall (n x y:nat) (Hx : x<=n) (Hy : y<=n), x=y ->
exist (fun x => x <= n) x Hx = exist (fun x => x <= n) y Hy.
Proof.
intros n x y Hx Hy Heq.
generalize Hy.
rewrite <- Heq.
intros.
rewrite (le_uniqueness_proof x n Hx Hy0).
reflexivity.
Qed.
(** * Proving that {p:nat|p<=n} = {p:nat|p<=m} -> n=m *)
(** Definition of having finite cardinality [n+1] for a set [A] *)
Definition card (A:Set) n :=
exists f,
(forall x:A, f x <= n) /\
(forall x y:A, f x = f y -> x = y) /\
(forall m, m <= n -> exists x:A, f x = m).
Require Import Arith.
(** Showing that the interval [0;n] has cardinality [n+1] *)
Theorem card_interval : forall n, card {x:nat|x<=n} n.
Proof.
intro n.
exists (fun x:{x:nat|x<=n} => proj1_sig x).
split.
(* bounded *)
intro x; apply (proj2_sig x).
split.
(* injectivity *)
intros (p,Hp) (q,Hq).
simpl.
intro Hpq.
apply dep_pair_intro; assumption.
(* surjectivity *)
intros m Hmn.
exists (exist (fun x : nat => x <= n) m Hmn).
reflexivity.
Qed.
(** Showing that equality on the interval [0;n] is decidable *)
Lemma interval_dec :
forall n (x y : {m:nat|m<=n}), {x=y}+{x<>y}.
Proof.
intros n (p,Hp).
induction p; intros ([|q],Hq).
left.
apply dep_pair_intro.
reflexivity.
right.
intro H; discriminate H.
right.
intro H; discriminate H.
assert (Hp' : p <= n).
apply le_Sn_le; assumption.
assert (Hq' : q <= n).
apply le_Sn_le; assumption.
destruct (IHp Hp' (exist (fun m => m <= n) q Hq'))
as [Heq|Hneq].
left.
injection Heq; intro Heq'.
apply dep_pair_intro.
apply eq_S.
assumption.
right.
intro HeqS.
injection HeqS; intro Heq.
apply Hneq.
apply dep_pair_intro.
assumption.
Qed.
(** Showing that the cardinality relation is functional on decidable sets *)
Lemma card_inj_aux :
forall (A:Type) f g n,
(forall x:A, f x <= 0) ->
(forall x y:A, f x = f y -> x = y) ->
(forall m, m <= S n -> exists x:A, g x = m)
-> False.
Proof.
intros A f g n Hfbound Hfinj Hgsurj.
destruct (Hgsurj (S n) (le_n _)) as (x,Hx).
destruct (Hgsurj n (le_S _ _ (le_n _))) as (x',Hx').
assert (Hfx : 0 = f x).
apply le_n_O_eq.
apply Hfbound.
assert (Hfx' : 0 = f x').
apply le_n_O_eq.
apply Hfbound.
assert (x=x').
apply Hfinj.
rewrite <- Hfx.
rewrite <- Hfx'.
reflexivity.
rewrite H in Hx.
rewrite Hx' in Hx.
apply (n_Sn _ Hx).
Qed.
(** For [dec_restrict], we use a lemma on the negation of equality
that requires proof-irrelevance. It should be possible to avoid this
lemma by generalizing over a first-order definition of [x<>y], say
[neq] such that [{x=y}+{neq x y}] and [~(x=y /\ neq x y)]; for such
[neq], unicity of proofs could be proven *)
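(* An illustrative instance of such a [neq] on [nat] (added sketch, not
   part of the original development): a boolean test has unique proofs,
   since equality on [bool] is decidable. *)
Definition neq_nat (x y : nat) : Prop := Nat.eqb x y = false.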
Require Import Classical.
Lemma neq_dep_intro :
forall (A:Set) (z x y:A) (p:x<>z) (q:y<>z), x=y ->
exist (fun x => x <> z) x p = exist (fun x => x <> z) y q.
Proof.
intros A z x y p q Heq.
generalize q; clear q; rewrite <- Heq; intro q.
rewrite (proof_irrelevance _ p q); reflexivity.
Qed.
Lemma dec_restrict :
forall (A:Set),
(forall x y :A, {x=y}+{x<>y}) ->
forall z (x y :{a:A|a<>z}), {x=y}+{x<>y}.
Proof.
intros A Hdec z (x,Hx) (y,Hy).
destruct (Hdec x y) as [Heq|Hneq].
left; apply neq_dep_intro; assumption.
right; intro Heq; injection Heq; exact Hneq.
Qed.
Lemma pred_inj : forall n m,
0 <> n -> 0 <> m -> pred m = pred n -> m = n.
Proof.
destruct n.
intros m H; destruct H; reflexivity.
destruct m.
intros _ H; destruct H; reflexivity.
simpl; intros _ _ H.
rewrite H.
reflexivity.
Qed.
Lemma le_neq_lt : forall n m, n <= m -> n<>m -> n < m.
Proof.
intros n m Hle Hneq.
destruct (le_lt_eq_dec n m Hle).
assumption.
contradiction.
Qed.
Lemma inj_restrict :
forall (A:Set) (f:A->nat) x y z,
(forall x y : A, f x = f y -> x = y)
-> x <> z -> f y < f z -> f z <= f x
-> pred (f x) = f y
-> False.
(* Search error without the type of f !! *)
Proof.
intros A f x y z Hfinj Hneqx Hfy Hfx Heq.
assert (f z <> f x).
apply sym_not_eq.
intro Heqf.
apply Hneqx.
apply Hfinj.
assumption.
assert (f x = S (f y)).
assert (0 < f x).
apply le_lt_trans with (f z).
apply le_O_n.
apply le_neq_lt; assumption.
apply pred_inj.
apply O_S.
apply lt_O_neq; assumption.
exact Heq.
assert (f z <= f y).
destruct (le_lt_or_eq _ _ Hfx).
apply lt_n_Sm_le.
rewrite <- H0.
assumption.
contradiction Hneqx.
symmetry.
apply Hfinj.
assumption.
contradiction (lt_not_le (f y) (f z)).
Qed.
Theorem card_inj : forall m n (A:Set),
(forall x y :A, {x=y}+{x<>y}) ->
card A m -> card A n -> m = n.
Proof.
induction m; destruct n;
intros A Hdec
(f,(Hfbound,(Hfinj,Hfsurj)))
(g,(Hgbound,(Hginj,Hgsurj))).
(* 0/0 *)
reflexivity.
(* 0/Sm *)
destruct (card_inj_aux _ _ _ _ Hfbound Hfinj Hgsurj).
(* Sn/0 *)
destruct (card_inj_aux _ _ _ _ Hgbound Hginj Hfsurj).
(* Sn/Sm *)
destruct (Hgsurj (S n) (le_n _)) as (xSn,HSnx).
rewrite IHm with (n:=n) (A := {x:A|x<>xSn}).
reflexivity.
(* decidability of eq on {x:A|x<>xSm} *)
apply dec_restrict.
assumption.
(* cardinality of {x:A|x<>xSn} is m *)
pose (f' := fun x' : {x:A|x<>xSn} =>
let (x,Hneq) := x' in
if le_lt_dec (f xSn) (f x)
then pred (f x)
else f x).
exists f'.
split.
(* f' is bounded *)
unfold f'.
intros (x,_).
destruct (le_lt_dec (f xSn) (f x)) as [Hle|Hge].
change m with (pred (S m)).
apply le_pred.
apply Hfbound.
apply le_S_n.
apply le_trans with (f xSn).
exact Hge.
apply Hfbound.
split.
(* f' is injective *)
unfold f'.
intros (x,Hneqx) (y,Hneqy) Heqf'.
destruct (le_lt_dec (f xSn) (f x)) as [Hlefx|Hgefx];
destruct (le_lt_dec (f xSn) (f y)) as [Hlefy|Hgefy].
(* f xSn <= f x et f xSn <= f y *)
assert (Heq : x = y).
apply Hfinj.
assert (f xSn <> f y).
apply sym_not_eq.
intro Heqf.
apply Hneqy.
apply Hfinj.
assumption.
assert (0 < f y).
apply le_lt_trans with (f xSn).
apply le_O_n.
apply le_neq_lt; assumption.
assert (f xSn <> f x).
apply sym_not_eq.
intro Heqf.
apply Hneqx.
apply Hfinj.
assumption.
assert (0 < f x).
apply le_lt_trans with (f xSn).
apply le_O_n.
apply le_neq_lt; assumption.
apply pred_inj.
apply lt_O_neq; assumption.
apply lt_O_neq; assumption.
assumption.
apply neq_dep_intro; assumption.
(* f y < f xSn <= f x *)
destruct (inj_restrict A f x y xSn); assumption.
(* f x < f xSn <= f y *)
symmetry in Heqf'.
destruct (inj_restrict A f y x xSn); assumption.
(* f x < f xSn et f y < f xSn *)
assert (Heq : x=y).
apply Hfinj; assumption.
apply neq_dep_intro; assumption.
(* f' is surjective *)
intros p Hlep.
destruct (le_lt_dec (f xSn) p) as [Hle|Hlt].
(* case f xSn <= p *)
destruct (Hfsurj (S p) (le_n_S _ _ Hlep)) as (x,Hx).
assert (Hneq : x <> xSn).
intro Heqx.
rewrite Heqx in Hx.
rewrite Hx in Hle.
apply le_Sn_n with p; assumption.
exists (exist (fun a => a<>xSn) x Hneq).
unfold f'.
destruct (le_lt_dec (f xSn) (f x)) as [Hle'|Hlt'].
rewrite Hx; reflexivity.
rewrite Hx in Hlt'.
contradiction (le_not_lt (f xSn) p).
apply lt_trans with (S p).
apply lt_n_Sn.
assumption.
(* case p < f xSn *)
destruct (Hfsurj p (le_S _ _ Hlep)) as (x,Hx).
assert (Hneq : x <> xSn).
intro Heqx.
rewrite Heqx in Hx.
rewrite Hx in Hlt.
apply (lt_irrefl p).
assumption.
exists (exist (fun a => a<>xSn) x Hneq).
unfold f'.
destruct (le_lt_dec (f xSn) (f x)) as [Hle'|Hlt'].
rewrite Hx in Hle'.
contradiction (lt_irrefl p).
apply lt_le_trans with (f xSn); assumption.
assumption.
(* cardinality of {x:A|x<>xSn} is n *)
pose (g' := fun x' : {x:A|x<>xSn} =>
let (x,Hneq) := x' in
if Hdec x xSn then 0 else g x).
exists g'.
split.
(* g' is bounded *)
unfold g'.
intros (x,_).
destruct (Hdec x xSn) as [_|Hneq].
apply le_O_n.
assert (Hle_gx:=Hgbound x).
destruct (le_lt_or_eq _ _ Hle_gx).
apply lt_n_Sm_le.
assumption.
contradiction Hneq.
apply Hginj.
rewrite HSnx.
assumption.
split.
(* g' is injective *)
unfold g'.
intros (x,Hneqx) (y,Hneqy) Heqg'.
destruct (Hdec x xSn) as [Heqx|_].
contradiction Hneqx.
destruct (Hdec y xSn) as [Heqy|_].
contradiction Hneqy.
assert (Heq : x=y).
apply Hginj; assumption.
apply neq_dep_intro; assumption.
(* g' is surjective *)
intros p Hlep.
destruct (Hgsurj p (le_S _ _ Hlep)) as (x,Hx).
assert (Hneq : x<>xSn).
intro Heq.
rewrite Heq in Hx.
rewrite Hx in HSnx.
rewrite HSnx in Hlep.
contradiction (le_Sn_n _ Hlep).
exists (exist (fun a => a<>xSn) x Hneq).
simpl.
destruct (Hdec x xSn) as [Heqx|_].
contradiction Hneq.
assumption.
Qed.
(** Conclusion *)
Theorem interval_discr :
forall n m, {p:nat|p<=n} = {p:nat|p<=m} -> n=m.
Proof.
intros n m Heq.
apply card_inj with (A := {p:nat|p<=n}).
apply interval_dec.
apply card_interval.
rewrite Heq.
apply card_interval.
Qed.

samples/D/aa.d Normal file

@@ -0,0 +1,440 @@
/**
* Implementation of associative arrays.
*
* Copyright: Martin Nowak 2015 -.
* License: $(LINK2 http://www.boost.org/LICENSE_1_0.txt, Boost License 1.0)
* Authors: Martin Nowak
*/
module core.aa;
import core.memory : GC;
private
{
// grow threshold
enum GROW_NUM = 4;
enum GROW_DEN = 5;
// shrink threshold
enum SHRINK_NUM = 1;
enum SHRINK_DEN = 8;
// grow factor
enum GROW_FAC = 4;
// growing the AA multiplies its size by GROW_FAC, so the shrink threshold
// must be less than the grow threshold divided by GROW_FAC to get hysteresis
static assert(GROW_FAC * SHRINK_NUM * GROW_DEN < GROW_NUM * SHRINK_DEN);
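// worked instance with the constants above (illustrative): grow at load
// > 4/5 = 80%, shrink at load < 1/8 = 12.5%; growing by GROW_FAC = 4
// drops an 80% load to 20%, still above 12.5%, so a grow never
// immediately triggers a shrink: 4 * 1 * 5 = 20 < 4 * 8 = 32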
// initial load factor (for literals), mean of both thresholds
enum INIT_NUM = (GROW_DEN * SHRINK_NUM + GROW_NUM * SHRINK_DEN) / 2;
enum INIT_DEN = SHRINK_DEN * GROW_DEN;
// magic hash constants to distinguish empty, deleted, and filled buckets
enum HASH_EMPTY = 0;
enum HASH_DELETED = 0x1;
enum HASH_FILLED_MARK = size_t(1) << 8 * size_t.sizeof - 1;
}
enum INIT_NUM_BUCKETS = 8;
struct AA(Key, Val)
{
this(size_t sz)
{
impl = new Impl(nextpow2(sz));
}
@property bool empty() const pure nothrow @safe @nogc
{
return !length;
}
@property size_t length() const pure nothrow @safe @nogc
{
return impl is null ? 0 : impl.length;
}
void opIndexAssign(Val val, in Key key)
{
// lazily alloc implementation
if (impl is null)
impl = new Impl(INIT_NUM_BUCKETS);
// get hash and bucket for key
immutable hash = calcHash(key);
// found a value => assignment
if (auto p = impl.findSlotLookup(hash, key))
{
p.entry.val = val;
return;
}
auto p = findSlotInsert(hash);
if (p.deleted)
--deleted;
// check load factor and possibly grow
else if (++used * GROW_DEN > dim * GROW_NUM)
{
grow();
p = findSlotInsert(hash);
assert(p.empty);
}
// update search cache and allocate entry
firstUsed = min(firstUsed, cast(uint)(p - buckets.ptr));
p.hash = hash;
p.entry = new Impl.Entry(key, val); // TODO: move
return;
}
ref inout(Val) opIndex(in Key key) inout @trusted
{
auto p = opIn_r(key);
assert(p !is null);
return *p;
}
inout(Val)* opIn_r(in Key key) inout @trusted
{
if (empty)
return null;
immutable hash = calcHash(key);
if (auto p = findSlotLookup(hash, key))
return &p.entry.val;
return null;
}
bool remove(in Key key)
{
if (empty)
return false;
immutable hash = calcHash(key);
if (auto p = findSlotLookup(hash, key))
{
// clear entry
p.hash = HASH_DELETED;
p.entry = null;
++deleted;
if (length * SHRINK_DEN < dim * SHRINK_NUM)
shrink();
return true;
}
return false;
}
Val get(in Key key, lazy Val val)
{
auto p = opIn_r(key);
return p is null ? val : *p;
}
ref Val getOrSet(in Key key, lazy Val val)
{
// lazily alloc implementation
if (impl is null)
impl = new Impl(INIT_NUM_BUCKETS);
// get hash and bucket for key
immutable hash = calcHash(key);
// found a value => assignment
if (auto p = impl.findSlotLookup(hash, key))
return p.entry.val;
auto p = findSlotInsert(hash);
if (p.deleted)
--deleted;
// check load factor and possibly grow
else if (++used * GROW_DEN > dim * GROW_NUM)
{
grow();
p = findSlotInsert(hash);
assert(p.empty);
}
// update search cache and allocate entry
firstUsed = min(firstUsed, cast(uint)(p - buckets.ptr));
p.hash = hash;
p.entry = new Impl.Entry(key, val);
return p.entry.val;
}
/**
Convert the AA to the type of the builtin language AA.
*/
Val[Key] toBuiltinAA() pure nothrow
{
return cast(Val[Key]) _aaFromCoreAA(impl, rtInterface);
}
private:
private this(inout(Impl)* impl) inout
{
this.impl = impl;
}
ref Val getLValue(in Key key)
{
// lazily alloc implementation
if (impl is null)
impl = new Impl(INIT_NUM_BUCKETS);
// get hash and bucket for key
immutable hash = calcHash(key);
// found a value => assignment
if (auto p = impl.findSlotLookup(hash, key))
return p.entry.val;
auto p = findSlotInsert(hash);
if (p.deleted)
--deleted;
// check load factor and possibly grow
else if (++used * GROW_DEN > dim * GROW_NUM)
{
grow();
p = findSlotInsert(hash);
assert(p.empty);
}
// update search cache and allocate entry
firstUsed = min(firstUsed, cast(uint)(p - buckets.ptr));
p.hash = hash;
p.entry = new Impl.Entry(key); // TODO: move
return p.entry.val;
}
static struct Impl
{
this(size_t sz)
{
buckets = allocBuckets(sz);
}
@property size_t length() const pure nothrow @nogc
{
assert(used >= deleted);
return used - deleted;
}
@property size_t dim() const pure nothrow @nogc
{
return buckets.length;
}
@property size_t mask() const pure nothrow @nogc
{
return dim - 1;
}
// find the first slot to insert a value with hash
inout(Bucket)* findSlotInsert(size_t hash) inout pure nothrow @nogc
{
for (size_t i = hash & mask, j = 1;; ++j)
{
if (!buckets[i].filled)
return &buckets[i];
i = (i + j) & mask;
}
}
// lookup a key
inout(Bucket)* findSlotLookup(size_t hash, in Key key) inout
{
for (size_t i = hash & mask, j = 1;; ++j)
{
if (buckets[i].hash == hash && key == buckets[i].entry.key)
return &buckets[i];
else if (buckets[i].empty)
return null;
i = (i + j) & mask;
}
}
void grow()
{
// If there are so many deleted entries that growing would push us below
// the shrink threshold, just purge the deleted entries instead.
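// illustrative numbers (assuming the constants above): with dim = 1024,
// growing would give dim' = 4096, and any length below 4096 / 8 = 512
// would then sit under the 1/8 shrink threshold, so for length < 512 we
// rehash at the current size to drop the tombstones instead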
if (length * SHRINK_DEN < GROW_FAC * dim * SHRINK_NUM)
resize(dim);
else
resize(GROW_FAC * dim);
}
void shrink()
{
if (dim > INIT_NUM_BUCKETS)
resize(dim / GROW_FAC);
}
void resize(size_t ndim) pure nothrow
{
auto obuckets = buckets;
buckets = allocBuckets(ndim);
foreach (ref b; obuckets)
if (b.filled)
*findSlotInsert(b.hash) = b;
firstUsed = 0;
used -= deleted;
deleted = 0;
GC.free(obuckets.ptr); // safe to free b/c impossible to reference
}
static struct Entry
{
Key key;
Val val;
}
static struct Bucket
{
size_t hash;
Entry* entry;
@property bool empty() const
{
return hash == HASH_EMPTY;
}
@property bool deleted() const
{
return hash == HASH_DELETED;
}
@property bool filled() const
{
return cast(ptrdiff_t) hash < 0;
}
}
Bucket[] allocBuckets(size_t dim) @trusted pure nothrow
{
enum attr = GC.BlkAttr.NO_INTERIOR;
immutable sz = dim * Bucket.sizeof;
return (cast(Bucket*) GC.calloc(sz, attr))[0 .. dim];
}
Bucket[] buckets;
uint used;
uint deleted;
uint firstUsed;
}
RTInterface* rtInterface()() pure nothrow @nogc
{
static size_t aaLen(in void* pimpl) pure nothrow @nogc
{
auto aa = const(AA)(cast(const(Impl)*) pimpl);
return aa.length;
}
static void* aaGetY(void** pimpl, in void* pkey)
{
auto aa = AA(cast(Impl*)*pimpl);
auto res = &aa.getLValue(*cast(const(Key*)) pkey);
*pimpl = aa.impl; // might have changed
return res;
}
static inout(void)* aaInX(inout void* pimpl, in void* pkey)
{
auto aa = inout(AA)(cast(inout(Impl)*) pimpl);
return aa.opIn_r(*cast(const(Key*)) pkey);
}
static bool aaDelX(void* pimpl, in void* pkey)
{
auto aa = AA(cast(Impl*) pimpl);
return aa.remove(*cast(const(Key*)) pkey);
}
static immutable vtbl = RTInterface(&aaLen, &aaGetY, &aaInX, &aaDelX);
return cast(RTInterface*)&vtbl;
}
static size_t calcHash(in ref Key key)
{
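// OR-ing in the top bit ensures a stored hash never equals the
// HASH_EMPTY or HASH_DELETED sentinels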
return hashOf(key) | HASH_FILLED_MARK;
}
Impl* impl;
alias impl this;
}
package extern (C) void* _aaFromCoreAA(void* impl, RTInterface* rtIntf) pure nothrow;
private:
struct RTInterface
{
alias AA = void*;
size_t function(in AA aa) pure nothrow @nogc len;
void* function(AA* aa, in void* pkey) getY;
inout(void)* function(inout AA aa, in void* pkey) inX;
bool function(AA aa, in void* pkey) delX;
}
unittest
{
AA!(int, int) aa;
assert(aa.length == 0);
aa[0] = 1;
assert(aa.length == 1 && aa[0] == 1);
aa[1] = 2;
assert(aa.length == 2 && aa[1] == 2);
import core.stdc.stdio;
int[int] rtaa = aa.toBuiltinAA();
assert(rtaa.length == 2);
puts("length");
assert(rtaa[0] == 1);
assert(rtaa[1] == 2);
rtaa[2] = 3;
assert(aa[2] == 3);
}
unittest
{
auto aa = AA!(int, int)(3);
aa[0] = 0;
aa[1] = 1;
aa[2] = 2;
assert(aa.length == 3);
}
//==============================================================================
// Helper functions
//------------------------------------------------------------------------------
size_t nextpow2(in size_t n) pure nothrow @nogc
{
import core.bitop : bsr;
if (n < 2)
return 1;
return size_t(1) << bsr(n - 1) + 1;
}
pure nothrow @nogc unittest
{
// 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
foreach (const n, const pow2; [1, 1, 2, 4, 4, 8, 8, 8, 8, 16])
assert(nextpow2(n) == pow2);
}
T min(T)(T a, T b) pure nothrow @nogc
{
return a < b ? a : b;
}
T max(T)(T a, T b) pure nothrow @nogc
{
return b < a ? a : b;
}

samples/D/arrayops.d Normal file

@@ -0,0 +1,187 @@
/**
* Benchmark for array ops.
*
* Copyright: Copyright Martin Nowak 2016 -.
* License: $(LINK2 http://www.boost.org/LICENSE_1_0.txt, Boost License 1.0)
* Authors: Martin Nowak
*/
import core.cpuid, std.algorithm, std.datetime, std.meta, std.stdio, std.string,
std.range;
float[6] getLatencies(T, string op)()
{
enum N = (64 * (1 << 6) + 64) * T.sizeof;
auto a = Array!T(N), b = Array!T(N), c = Array!T(N);
float[6] latencies = float.max;
foreach (i, ref latency; latencies)
{
auto len = 1 << i;
foreach (_; 1 .. 32)
{
a[] = 24;
b[] = 4;
c[] = 2;
auto sw = StopWatch(AutoStart.yes);
foreach (off; size_t(0) .. size_t(64))
{
off = off * len + off;
enum op = op.replace("const", "2").replace("a",
"a[off .. off + len]").replace("b",
"b[off .. off + len]").replace("c", "c[off .. off + len]");
mixin(op ~ ";");
}
latency = min(latency, sw.peek.nsecs);
}
}
float[6] res = latencies[] / 1024;
return res;
}
float[4] getThroughput(T, string op)()
{
enum N = (40 * 1024 * 1024 + 64 * T.sizeof) / T.sizeof;
auto a = Array!T(N), b = Array!T(N), c = Array!T(N);
float[4] latencies = float.max;
size_t[4] lengths = [
8 * 1024 / T.sizeof, 32 * 1024 / T.sizeof, 512 * 1024 / T.sizeof, 32 * 1024 * 1024 / T
.sizeof
];
foreach (i, ref latency; latencies)
{
auto len = lengths[i] / 64;
foreach (_; 1 .. 4)
{
a[] = 24;
b[] = 4;
c[] = 2;
auto sw = StopWatch(AutoStart.yes);
foreach (off; size_t(0) .. size_t(64))
{
off = off * len + off;
enum op = op.replace("const", "2").replace("a",
"a[off .. off + len]").replace("b",
"b[off .. off + len]").replace("c", "c[off .. off + len]");
mixin(op ~ ";");
}
immutable nsecs = sw.peek.nsecs;
runMasked({latency = min(latency, nsecs);});
}
}
float[4] throughputs = void;
runMasked({throughputs = T.sizeof * lengths[] / latencies[];});
return throughputs;
}
string[] genOps()
{
string[] ops;
foreach (op1; ["+", "-", "*", "/"])
{
ops ~= "a " ~ op1 ~ "= b";
ops ~= "a " ~ op1 ~ "= const";
foreach (op2; ["+", "-", "*", "/"])
{
ops ~= "a " ~ op1 ~ "= b " ~ op2 ~ " c";
ops ~= "a " ~ op1 ~ "= b " ~ op2 ~ " const";
}
}
return ops;
}
void runOp(string op)()
{
foreach (T; AliasSeq!(ubyte, ushort, uint, ulong, byte, short, int, long, float,
double))
writefln("%s, %s, %(%.2f, %), %(%s, %)", T.stringof, op,
getLatencies!(T, op), getThroughput!(T, op));
}
struct Array(T)
{
import core.stdc.stdlib : free, malloc;
this(size_t n)
{
ary = (cast(T*) malloc(T.sizeof * n))[0 .. n];
}
~this()
{
free(ary.ptr);
}
T[] ary;
alias ary this;
}
version (X86)
version = SSE;
else version (X86_64)
version = SSE;
else
static assert(0, "unimplemented");
version (SSE)
{
uint mxcsr()
{
uint ret = void;
asm
{
stmxcsr ret;
}
return ret;
}
void mxcsr(uint val)
{
asm
{
ldmxcsr val;
}
}
// http://softpixel.com/~cwright/programming/simd/sse.php
enum FPU_EXCEPTION_MASKS = 1 << 12 | 1 << 11 | 1 << 10 | 1 << 9 | 1 << 8 | 1 << 7;
enum FPU_EXCEPTION_FLAGS = 1 << 5 | 1 << 4 | 1 << 3 | 1 << 2 | 1 << 1 | 1 << 0;
void maskFPUExceptions()
{
mxcsr = mxcsr | FPU_EXCEPTION_MASKS;
}
void unmaskFPUExceptions()
{
mxcsr = mxcsr & ~FPU_EXCEPTION_MASKS;
}
uint FPUExceptionFlags()
{
return mxcsr & FPU_EXCEPTION_FLAGS;
}
void clearFPUExceptionFlags()
{
mxcsr = mxcsr & ~FPU_EXCEPTION_FLAGS;
}
}
void runMasked(scope void delegate() dg)
{
assert(FPUExceptionFlags == 0);
maskFPUExceptions;
dg();
clearFPUExceptionFlags;
unmaskFPUExceptions;
}
void main()
{
unmaskFPUExceptions;
writefln("type, op, %(latency%s, %), %-(throughput%s, %)", iota(6)
.map!(i => 1 << i), ["8KB", "32KB", "512KB", "32MB"]);
foreach (op; mixin("AliasSeq!(%(%s, %))".format(genOps)))
runOp!op;
maskFPUExceptions;
}

samples/D/function.d Normal file

@@ -0,0 +1,3 @@
void foo()
{
}

samples/D/hello_world.d Normal file

@@ -0,0 +1,6 @@
import std.stdio;
void main()
{
writeln("Hello World");
}

samples/D/template.d Normal file

@@ -0,0 +1,7 @@
template Fib(size_t N)
{
static if (N < 2)
enum Fib = size_t(1);
else
enum Fib = Fib!(N - 2) + Fib!(N - 1);
}
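// illustrative compile-time check (added example; the recursion is
// seeded with Fib!0 == Fib!1 == 1)
static assert(Fib!10 == 89);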


@@ -0,0 +1,3 @@
void bar(T)(T t)
{
}

samples/D/unittest1.d Normal file

@@ -0,0 +1,3 @@
unittest
{
}

samples/D/unittest2.d Normal file

@@ -0,0 +1,3 @@
unittest("optional name")
{
}

samples/EBNF/grammar.ebnf Normal file

@@ -0,0 +1,24 @@
(*
Source: https://github.com/sunjay/lion
License: MIT
*)
Statement = ( NamedFunction | AnonymousFunction | Assignment | Expr ) , "\n" ;
Expr = AnonymousFunction | Term | "(" , Expr , ")" ,
{ AnonymousFunction | Term | "(" , Expr , ")" } ;
Assignment = Symbol , "=" , Expr ;
AnonymousFunction = "\" , FunctionRHS ;
NamedFunction = Symbol , FunctionRHS ;
FunctionRHS = FunctionParams , "=" , FunctionBody ;
FunctionParams = FunctionParam , { FunctionParam } ;
FunctionParam = Term ;
FunctionBody = Expr ;
Term = Symbol | Number | SingleWordString ;
SingleWordString = '"' , Symbol , '"' ;
(* Symbol is a collection of valid symbol characters, not defined here *)
(* Number is a valid numeric literal *)


@@ -0,0 +1,40 @@
(*
Source: https://github.com/io7m/jsom0
License: ISC
*)
name =
"name" , string , ";" ;
diffuse =
"diffuse" , real , real , real , ";" ;
ambient =
"ambient" , real , real , real , ";" ;
specular =
"specular" , real , real , real , real , ";" ;
shininess =
"shininess" , real , ";" ;
alpha =
"alpha" , real , ";" ;
mapping =
"map_chrome" | "map_uv" ;
texture =
"texture" , string , real , mapping , ";" ;
material =
"material" , ";" ,
name ,
diffuse ,
ambient ,
specular ,
shininess ,
alpha ,
[ texture ] ,
"end" , ";" ;

samples/EBNF/object.ebnf Normal file

@@ -0,0 +1,61 @@
(*
Source: https://github.com/io7m/jsom0
License: ISC
*)
vertex_p3n3_name =
"vertex_p3n3" ;
vertex_p3n3t2_name =
"vertex_p3n3t2" ;
vertex_type =
vertex_p3n3_name | vertex_p3n3t2_name ;
vertex_position =
"position" , real , real , real , ";" ;
vertex_normal =
"normal" , real , real , real , ";" ;
vertex_uv =
"uv" , real , real , ";" ;
vertex_p3n3 =
vertex_p3n3_name , vertex_position , vertex_normal , "end" , ";" ;
vertex_p3n3t2 =
vertex_p3n3t2_name , vertex_position , vertex_normal , vertex_uv , "end" , ";" ;
vertex =
vertex_p3n3 | vertex_p3n3t2 ;
vertex_array =
"array" , positive , vertex_type , { vertex } , "end" , ";" ;
vertices =
"vertices" , ";" , vertex_array , "end" , ";" ;
triangle =
"triangle" , natural , natural , natural , ";" ;
triangle_array =
"array" , positive, "triangle" , { triangle } , "end" , ";" ;
triangles =
"triangles" , ";" , triangle_array , "end" , ";" ;
name =
"name" , string , ";" ;
material_name =
"material_name" , string , ";" ;
object =
"object" , ";" ,
name ,
material_name ,
vertices ,
triangles ,
"end" , ";" ;

samples/EBNF/types.ebnf Normal file

@@ -0,0 +1,20 @@
(*
Source: https://github.com/io7m/jsom0
License: ISC
*)
digit_without_zero =
"1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" ;
digit =
"0" | digit_without_zero ;
positive =
digit_without_zero , { digit } ;
natural =
"0" | positive ;
real =
[ "-" ] , digit , [ "." , { digit } ] ;


@@ -0,0 +1,6 @@
(define-abbrev-table 'c-mode-abbrev-table '(
))
(define-abbrev-table 'fundamental-mode-abbrev-table '(
("TM" "™" nil 0)
("(R)" "®" nil 0)
("C=" "€" nil 0)))


@@ -0,0 +1,20 @@
(setq user-full-name "Alhadis")
(setq user-mail-address "fake.account@gmail.com")
(auto-image-file-mode)
(setq mm-inline-large-images t)
(add-to-list 'mm-attachment-override-types "image/*")
(setq gnus-select-method
'(nnimap "gmail"
(nnimap-address "imap.gmail.com")
(nnimap-server-port 777)
(nnimap-stream ssl)))
(setq message-send-mail-function 'smtpmail-send-it
smtpmail-starttls-credentials '(("smtp.gmail.com" 600 nil nil))
smtpmail-auth-credentials '(("smtp.gmail.com" 700 "me@lisp.com" nil))
smtpmail-default-smtp-server "smtp.gmail.com"
smtpmail-smtp-server "smtp.gmail.com"
smtpmail-smtp-service 800
gnus-ignored-from-addresses "^from\\.Telstra[ \t\r\n]+Thanks")


@@ -0,0 +1,10 @@
(setq viper-inhibit-startup-message 't)
(setq viper-expert-level '5)
; Key bindings
(define-key viper-vi-global-user-map "\C-d" 'end-of-line)
; Return to top of window
(defun my-viper-return-to-top ()
(interactive)
(beginning-of-buffer))
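; One possible binding for it (illustrative; any free key would do):
(define-key viper-vi-global-user-map "\C-t" 'my-viper-return-to-top)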


@@ -0,0 +1,9 @@
(package "composer" "0.0.7" "Interface to PHP Composer")
(source "melpa" "https://melpa.org/packages/")
(package-file "composer.el")
(depends-on "f")
(depends-on "s")
(depends-on "request")
(depends-on "seq")


@@ -0,0 +1,34 @@
;; Object EDE
(ede-proj-project "Linguist"
:name "Linguist"
:version "4.9"
:file "Project.ede"
:targets (list
(ede-proj-target-elisp-autoloads "autoloads"
:name "autoloads"
:path "test/samples/Emacs Lisp"
:autoload-file "dude.el"
)
(ede-proj-target-elisp "init"
:name "init"
:path ""
:source '("ede-load.el" "wait-what.el")
:compiler 'ede-emacs-preload-compiler
:pre-load-packages '("sample-names")
)
(ede-proj-target-elisp "what"
:name "the"
:path ""
:source '("h.el" "am-i-writing.el")
:versionsource '("hell.el")
:compiler 'ede-emacs-preload-compiler
:aux-packages '("what" "the" "hell-files" "am-i-writing")
)
)
:web-site-url "https://github.com/github/linguist"
:web-site-directory "../"
:web-site-file "CONTRIBUTING.md"
:ftp-upload-site "/ftp@git.hub.com:/madeup"
:configuration-variables 'nil
:metasubproject 't
)


@@ -0,0 +1,70 @@
;; UTF-8 support
;; (set-language-environment "UTF-8")
(setenv "LANG" "en_AU.UTF-8")
(setenv "LC_ALL" "en_AU.UTF-8")
(setq default-tab-width 4)
;;; Function to load all ".el" files in ~/.emacs.d/config
(defun load-directory (directory)
"Recursively load all Emacs Lisp files in a directory."
(dolist (element (directory-files-and-attributes directory nil nil nil))
(let* ((path (car element))
(fullpath (concat directory "/" path))
(isdir (car (cdr element)))
(ignore-dir (or (string= path ".") (string= path ".."))))
(cond
((and (eq isdir t) (not ignore-dir))
(load-directory fullpath))
((and (eq isdir nil) (string= (substring path -3) ".el"))
(load (file-name-sans-extension fullpath)))))))
;; Tell Emacs we'd like to use Hunspell for spell-checking
(setq ispell-program-name (executable-find "hunspell"))
;; Load Homebrew-installed packages
(let ((default-directory "/usr/local/share/emacs/site-lisp/"))
(normal-top-level-add-subdirs-to-load-path))
(load "aggressive-indent")
(add-hook 'emacs-lisp-mode-hook #'aggressive-indent-mode)
(autoload 'rust-mode "rust-mode" nil t)
(add-to-list 'auto-mode-alist '("\\.rs\\'" . rust-mode))
;; Load Git-related syntax highlighting
(add-to-list 'load-path "~/.emacs.d/lisp/")
(load "git-modes")
(load "git-commit")
;; Keybindings
(global-set-key (kbd "C-u") (lambda ()
(interactive)
(kill-line 0)))
;; Show cursor's current column number
(setq column-number-mode t)
;; Disable autosave
(setq auto-save-default nil)
;; Use a single directory for storing backup files
(setq backup-directory-alist `(("." . "~/.emacs.d/auto-save-list")))
(setq backup-by-copying t)
(setq delete-old-versions t
kept-new-versions 6
kept-old-versions 2
version-control t)
(custom-set-variables
;; custom-set-variables was added by Custom.
;; If you edit it by hand, you could mess it up, so be careful.
;; Your init file should contain only one such instance.
;; If there is more than one, they won't work right.
'(blink-cursor-mode nil)
'(column-number-mode t)
'(show-paren-mode t))
(custom-set-faces
;; custom-set-faces was added by Custom.
;; If you edit it by hand, you could mess it up, so be careful.
;; Your init file should contain only one such instance.
;; If there is more than one, they won't work right.
)


@@ -0,0 +1,8 @@
(define-abbrev-table 'fundamental-mode-abbrev-table '(
("cat" "Concatenate" nil 0)
("WTF" "World Trade Federation " nil 0)
("rtbtm" "Read that back to me" nil 0)))
(define-abbrev-table 'shell-script-mode-abbrev-table '(
("brake" "bundle rake exec" nil 0)
("pls" "warning: setting Encoding.default_external" nil 0)))


@@ -0,0 +1,7 @@
{"src/*", [
report,
verbose,
{i, "include"},
{outdir, "ebin"},
debug_info
]}.

samples/GLSL/SyLens.shader Normal file

@@ -0,0 +1,161 @@
#version 120
/*
Original Lens Distortion Algorithm from SSontech (Syntheyes)
http://www.ssontech.com/content/lensalg.htm
r2 is radius squared.
r2 = image_aspect*image_aspect*u*u + v*v
f = 1 + r2*(k + kcube*sqrt(r2))
u' = f*u
v' = f*v
*/
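// Worked example of the formula above (illustrative): with k = 0.1 and
// kcube = 0, a point at unit radius (r2 = 1) gets f = 1 + 0.1 = 1.1,
// so (u, v) maps outward to (1.1*u, 1.1*v).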
// Controls
uniform float kCoeff, kCube, uShift, vShift;
uniform float chroma_red, chroma_green, chroma_blue;
uniform bool apply_disto;
// Uniform inputs
uniform sampler2D input1;
uniform float adsk_input1_w, adsk_input1_h, adsk_input1_aspect, adsk_input1_frameratio;
uniform float adsk_result_w, adsk_result_h;
float distortion_f(float r) {
float f = 1 + (r*r)*(kCoeff + kCube * r);
return f;
}
float inverse_f(float r)
{
// Build a fixed-size lookup table on the radius.
// We use a vec3 since we also store the multiplied value in the z coordinate.
// To recap: x is the radius, y is the f(x) distortion, and z is x * y.
vec3[48] lut;
// Since our LUT is shader-global, check whether it has already been computed.
// Flame has no overflow bbox so we can safely max out at the image edge, plus some cushion
float max_r = sqrt((adsk_input1_frameratio * adsk_input1_frameratio) + 1) + 0.1;
float incr = max_r / 48;
float lut_r = 0;
float f;
for(int i=0; i < 48; i++) {
f = distortion_f(lut_r);
lut[i] = vec3(lut_r, f, lut_r * f);
lut_r += incr;
}
float t;
// Now find the neighbouring elements;
// only iterate to 46 since we will need
// 47 as i+1
for(int i=0; i < 47; i++) {
if(lut[i].z < r && lut[i+1].z > r) {
// BAM! our value is between these two segments
// get the T interpolant and mix
t = (r - lut[i].z) / (lut[i+1].z - lut[i].z);
return mix(lut[i].y, lut[i+1].y, t);
}
}
// fallback for radii beyond the end of the table
return lut[47].y;
}
float aberrate(float f, float chroma)
{
return f + (f * chroma);
}
vec3 chromaticize_and_invert(float f)
{
vec3 rgb_f = vec3(aberrate(f, chroma_red), aberrate(f, chroma_green), aberrate(f, chroma_blue));
// We need to DIVIDE by F when we redistort, and x / y == x * (1 / y)
if(apply_disto) {
rgb_f = 1 / rgb_f;
}
return rgb_f;
}
void main(void)
{
vec2 px, uv;
float f = 1;
float r = 1;
px = gl_FragCoord.xy;
// Make sure we are still centered
px.x -= (adsk_result_w - adsk_input1_w) / 2;
px.y -= (adsk_result_h - adsk_input1_h) / 2;
// Push the destination coordinates into the [0..1] range
uv.x = px.x / adsk_input1_w;
uv.y = px.y / adsk_input1_h;
// And to Syntheyes UV space, which is [-1..1] on both X and Y
uv.x = (uv.x *2 ) - 1;
uv.y = (uv.y *2 ) - 1;
// Add UV shifts
uv.x += uShift;
uv.y += vShift;
// Make the X value the aspect value, so that the X coordinates go to [-aspect..aspect]
uv.x = uv.x * adsk_input1_frameratio;
// Compute the radius
r = sqrt(uv.x*uv.x + uv.y*uv.y);
// If we are redistorting, account for the oversize plate in the input,
// assuming the input aspect is the same
if(apply_disto) {
r = r / (float(adsk_input1_w) / float(adsk_result_w));
}
// Apply or remove disto, per channel honoring chromatic aberration
if(apply_disto) {
f = inverse_f(r);
} else {
f = distortion_f(r);
}
vec2[3] rgb_uvs = vec2[](uv, uv, uv);
// Compute distortions per component
vec3 rgb_f = chromaticize_and_invert(f);
// Apply the disto coefficients, per component
rgb_uvs[0] = rgb_uvs[0] * rgb_f.rr;
rgb_uvs[1] = rgb_uvs[1] * rgb_f.gg;
rgb_uvs[2] = rgb_uvs[2] * rgb_f.bb;
// Convert all the UVs back to the texture space, per color component
for(int i=0; i < 3; i++) {
uv = rgb_uvs[i];
// Back from [-aspect..aspect] to [-1..1]
uv.x = uv.x / adsk_input1_frameratio;
// Remove UV shifts
uv.x -= uShift;
uv.y -= vShift;
// Back to OGL UV
uv.x = (uv.x + 1) / 2;
uv.y = (uv.y + 1) / 2;
rgb_uvs[i] = uv;
}
// Sample the input plate, per component
vec4 sampled;
sampled.r = texture2D(input1, rgb_uvs[0]).r;
sampled.g = texture2D(input1, rgb_uvs[1]).g;
sampled.b = texture2D(input1, rgb_uvs[2]).b;
// and assign to the output
gl_FragColor.rgba = vec4(sampled.rgb, 1.0 );
}


@@ -0,0 +1,630 @@
//// High quality (Some browsers may freeze or crash)
//#define HIGHQUALITY
//// Medium quality (Should be fine on all systems, works on Intel HD2000 on Win7 but quite slow)
//#define MEDIUMQUALITY
//// Defaults
//#define REFLECTIONS
#define SHADOWS
//#define GRASS
//#define SMALL_WAVES
#define RAGGED_LEAVES
//#define DETAILED_NOISE
//#define LIGHT_AA // 2 sample SSAA
//#define HEAVY_AA // 2x2 RG SSAA
//#define TONEMAP
//// Configurations
#ifdef MEDIUMQUALITY
#define SHADOWS
#define SMALL_WAVES
#define RAGGED_LEAVES
#define TONEMAP
#endif
#ifdef HIGHQUALITY
#define REFLECTIONS
#define SHADOWS
//#define GRASS
#define SMALL_WAVES
#define RAGGED_LEAVES
#define DETAILED_NOISE
#define LIGHT_AA
#define TONEMAP
#endif
// Constants
const float eps = 1e-5;
const float PI = 3.14159265359;
const vec3 sunDir = vec3(0.79057,-0.47434, 0.0);
const vec3 skyCol = vec3(0.3, 0.5, 0.8);
const vec3 sandCol = vec3(0.9, 0.8, 0.5);
const vec3 treeCol = vec3(0.8, 0.65, 0.3);
const vec3 grassCol = vec3(0.4, 0.5, 0.18);
const vec3 leavesCol = vec3(0.3, 0.6, 0.2);
const vec3 leavesPos = vec3(-5.1,13.4, 0.0);
#ifdef TONEMAP
const vec3 sunCol = vec3(1.8, 1.7, 1.6);
#else
const vec3 sunCol = vec3(0.9, 0.85, 0.8);
#endif
const float exposure = 1.1; // Only used when tonemapping
// Description : Array and textureless GLSL 2D/3D/4D simplex
// noise functions.
// Author : Ian McEwan, Ashima Arts.
// License : Copyright (C) 2011 Ashima Arts. All rights reserved.
// Distributed under the MIT License. See LICENSE file.
// https://github.com/ashima/webgl-noise
vec3 mod289(vec3 x) {
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
vec4 mod289(vec4 x) {
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
vec4 permute(vec4 x) {
return mod289(((x*34.0)+1.0)*x);
}
vec4 taylorInvSqrt(vec4 r) {
return 1.79284291400159 - 0.85373472095314 * r;
}
float snoise(vec3 v) {
const vec2 C = vec2(1.0/6.0, 1.0/3.0) ;
const vec4 D = vec4(0.0, 0.5, 1.0, 2.0);
// First corner
vec3 i = floor(v + dot(v, C.yyy) );
vec3 x0 = v - i + dot(i, C.xxx) ;
// Other corners
vec3 g = step(x0.yzx, x0.xyz);
vec3 l = 1.0 - g;
vec3 i1 = min( g.xyz, l.zxy );
vec3 i2 = max( g.xyz, l.zxy );
// x0 = x0 - 0.0 + 0.0 * C.xxx;
// x1 = x0 - i1 + 1.0 * C.xxx;
// x2 = x0 - i2 + 2.0 * C.xxx;
// x3 = x0 - 1.0 + 3.0 * C.xxx;
vec3 x1 = x0 - i1 + C.xxx;
vec3 x2 = x0 - i2 + C.yyy; // 2.0*C.x = 1/3 = C.y
vec3 x3 = x0 - D.yyy; // -1.0+3.0*C.x = -0.5 = -D.y
// Permutations
i = mod289(i);
vec4 p = permute( permute( permute(
i.z + vec4(0.0, i1.z, i2.z, 1.0 ))
+ i.y + vec4(0.0, i1.y, i2.y, 1.0 ))
+ i.x + vec4(0.0, i1.x, i2.x, 1.0 ));
// Gradients: 7x7 points over a square, mapped onto an octahedron.
// The ring size 17*17 = 289 is close to a multiple of 49 (49*6 = 294)
float n_ = 0.142857142857; // 1.0/7.0
vec3 ns = n_ * D.wyz - D.xzx;
vec4 j = p - 49.0 * floor(p * ns.z * ns.z); // mod(p,7*7)
vec4 x_ = floor(j * ns.z);
vec4 y_ = floor(j - 7.0 * x_ ); // mod(j,N)
vec4 x = x_ *ns.x + ns.yyyy;
vec4 y = y_ *ns.x + ns.yyyy;
vec4 h = 1.0 - abs(x) - abs(y);
vec4 b0 = vec4( x.xy, y.xy );
vec4 b1 = vec4( x.zw, y.zw );
//vec4 s0 = vec4(lessThan(b0,0.0))*2.0 - 1.0;
//vec4 s1 = vec4(lessThan(b1,0.0))*2.0 - 1.0;
vec4 s0 = floor(b0)*2.0 + 1.0;
vec4 s1 = floor(b1)*2.0 + 1.0;
vec4 sh = -step(h, vec4(0.0));
vec4 a0 = b0.xzyw + s0.xzyw*sh.xxyy ;
vec4 a1 = b1.xzyw + s1.xzyw*sh.zzww ;
vec3 p0 = vec3(a0.xy,h.x);
vec3 p1 = vec3(a0.zw,h.y);
vec3 p2 = vec3(a1.xy,h.z);
vec3 p3 = vec3(a1.zw,h.w);
//Normalise gradients
vec4 norm = taylorInvSqrt(vec4(dot(p0,p0), dot(p1,p1), dot(p2, p2), dot(p3,p3)));
p0 *= norm.x;
p1 *= norm.y;
p2 *= norm.z;
p3 *= norm.w;
// Mix final noise value
vec4 m = max(0.6 - vec4(dot(x0,x0), dot(x1,x1), dot(x2,x2), dot(x3,x3)), 0.0);
m = m * m;
return 42.0 * dot( m*m, vec4( dot(p0,x0), dot(p1,x1),
dot(p2,x2), dot(p3,x3) ) );
}
// Main
float fbm(vec3 p)
{
float final = snoise(p);
p *= 1.94; final += snoise(p) * 0.5;
#ifdef DETAILED_NOISE
p *= 3.75; final += snoise(p) * 0.25;
return final / 1.75;
#else
return final / 1.5;
#endif
}
float waterHeight(vec3 p)
{
float d = length(p.xz);
float h = sin(d * 1.5 + iGlobalTime * 3.0) * 12.0 / d; // Island waves
#ifdef SMALL_WAVES
h += fbm(p*0.5); // Other waves
#endif
return h;
}
vec3 bump(vec3 pos, vec3 rayDir)
{
float s = 2.0;
// Fade out waves to reduce aliasing
float dist = dot(pos, rayDir);
s *= dist < 2.0 ? 1.0 : 1.4142 / sqrt(dist);
// Calculate normal from heightmap
vec2 e = vec2(1e-2, 0.0);
vec3 p = vec3(pos.x, iGlobalTime*0.5, pos.z)*0.7;
float m = waterHeight(p)*s;
return normalize(vec3(
waterHeight(p+e.xyy)*s-m,
1.0,
waterHeight(p+e.yxy)*s-m
));
}
// Ray intersections
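// For a unit ray p + t*d and a sphere (pos, rad): |p + t*d - pos|^2 = rad^2
// expands to t^2 - 2*b*t + dot(op, op) - rad*rad = 0 with op = pos - p and
// b = dot(op, d), hence t = b +- sqrt(b*b - dot(op, op) + rad*rad) below.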
vec4 intersectSphere(vec3 rpos, vec3 rdir, vec3 pos, float rad)
{
vec3 op = pos - rpos;
float b = dot(op, rdir);
float det = b*b - dot(op, op) + rad*rad;
if (det > 0.0)
{
det = sqrt(det);
float t = b - det;
if (t > eps)
return vec4(-normalize(rpos+rdir*t-pos), t);
}
return vec4(0.0);
}
vec4 intersectCylinder(vec3 rpos, vec3 rdir, vec3 pos, float rad)
{
vec3 op = pos - rpos;
vec2 rdir2 = normalize(rdir.yz);
float b = dot(op.yz, rdir2);
float det = b*b - dot(op.yz, op.yz) + rad*rad;
if (det > 0.0)
{
det = sqrt(det);
float t = b - det;
if (t > eps)
return vec4(-normalize(rpos.yz+rdir2*t-pos.yz), 0.0, t);
t = b + det;
if (t > eps)
return vec4(-normalize(rpos.yz+rdir2*t-pos.yz), 0.0, t);
}
return vec4(0.0);
}
vec4 intersectPlane(vec3 rayPos, vec3 rayDir, vec3 n, float d)
{
float t = -(dot(rayPos, n) + d) / dot(rayDir, n);
return vec4(n * sign(dot(rayDir, n)), t);
}
// Helper functions
vec3 rotate(vec3 p, float theta)
{
float c = cos(theta), s = sin(theta);
return vec3(p.x * c + p.z * s, p.y,
p.z * c - p.x * s);
}
float impulse(float k, float x) // by iq
{
float h = k*x;
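// h * exp(1 - h) peaks at exactly 1.0 when h == 1 (i.e. x == 1/k), then decays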
return h * exp(1.0 - h);
}
// Raymarched parts of scene
float grass(vec3 pos)
{
float h = length(pos - vec3(0.0, -7.0, 0.0)) - 8.0;
if (h > 2.0) return h; // Optimization (Avoid noise if too far away)
return h + snoise(pos * 3.0) * 0.1 + pos.y * 0.9;
}
float tree(vec3 pos)
{
pos.y -= 0.5;
float s = sin(pos.y*0.03);
float c = cos(pos.y*0.03);
mat2 m = mat2(c, -s, s, c);
vec3 p = vec3(m*pos.xy, pos.z);
float width = 1.0 - pos.y * 0.02 - clamp(sin(pos.y * 8.0) * 0.1, 0.05, 0.1);
return max(length(p.xz) - width, pos.y - 12.5);
}
vec2 scene(vec3 pos)
{
float vtree = tree(pos);
#ifdef GRASS
float vgrass = grass(pos);
float v = min(vtree, vgrass);
#else
float v = vtree;
#endif
return vec2(v, v == vtree ? 2.0 : 1.0);
}
vec3 normal(vec3 pos)
{
vec2 eps = vec2(1e-3, 0.0);
float h = scene(pos).x;
return normalize(vec3(
scene(pos-eps.xyy).x-h,
scene(pos-eps.yxy).x-h,
scene(pos-eps.yyx).x-h
));
}
float plantsShadow(vec3 rayPos, vec3 rayDir)
{
// Soft shadow taken from iq
float k = 6.0;
float t = 0.0;
float s = 1.0;
for (int i = 0; i < 30; i++)
{
vec3 pos = rayPos+rayDir*t;
vec2 res = scene(pos);
if (res.x < 0.001) return 0.0;
s = min(s, k*res.x/t);
t += max(res.x, 0.01);
}
return s*s*(3.0 - 2.0*s);
}
// Ray-traced parts of scene
vec4 intersectWater(vec3 rayPos, vec3 rayDir)
{
float h = sin(20.5 + iGlobalTime * 2.0) * 0.03;
float t = -(rayPos.y + 2.5 + h) / rayDir.y;
return vec4(0.0, 1.0, 0.0, t);
}
vec4 intersectSand(vec3 rayPos, vec3 rayDir)
{
return intersectSphere(rayPos, rayDir, vec3(0.0,-24.1,0.0), 24.1);
}
vec4 intersectTreasure(vec3 rayPos, vec3 rayDir)
{
return vec4(0.0);
}
vec4 intersectLeaf(vec3 rayPos, vec3 rayDir, float openAmount)
{
vec3 dir = normalize(vec3(0.0, 1.0, openAmount));
float offset = 0.0;
vec4 res = intersectPlane(rayPos, rayDir, dir, 0.0);
vec3 pos = rayPos+rayDir*res.w;
#ifdef RAGGED_LEAVES
offset = snoise(pos*0.8) * 0.3;
#endif
if (pos.y > 0.0 || length(pos * vec3(0.9, 2.0, 1.0)) > 4.0 - offset) res.w = 0.0;
vec4 res2 = intersectPlane(rayPos, rayDir, vec3(dir.xy, -dir.z), 0.0);
pos = rayPos+rayDir*res2.w;
#ifdef RAGGED_LEAVES
offset = snoise(pos*0.8) * 0.3;
#endif
if (pos.y > 0.0 || length(pos * vec3(0.9, 2.0, 1.0)) > 4.0 - offset) res2.w = 0.0;
if ((res2.w > 0.0 && res2.w < res.w) || res.w <= 0.0)
res = res2;
return res;
}
vec4 leaves(vec3 rayPos, vec3 rayDir)
{
float t = 1e20;
vec3 n = vec3(0.0);
rayPos -= leavesPos;
float sway = impulse(15.0, fract(iGlobalTime / PI * 0.125));
float upDownSway = sway * -sin(iGlobalTime) * 0.06;
float openAmount = sway * max(-cos(iGlobalTime) * 0.4, 0.0);
float angleOffset = -0.1;
for (float k = 0.0; k < 6.2; k += 0.75)
{
// Left-right
float alpha = k + (k - PI) * sway * 0.015;
vec3 p = rotate(rayPos, alpha);
vec3 d = rotate(rayDir, alpha);
// Up-down
angleOffset *= -1.0;
float theta = -0.4 +
angleOffset +
cos(k) * 0.35 +
upDownSway +
sin(iGlobalTime+k*10.0) * 0.03 * (sway + 0.2);
p = rotate(p.xzy, theta).xzy;
d = rotate(d.xzy, theta).xzy;
// Shift
p -= vec3(5.4, 0.0, 0.0);
// Intersect individual leaf
vec4 res = intersectLeaf(p, d, 1.0+openAmount);
if (res.w > 0.0 && res.w < t)
{
t = res.w;
n = res.xyz;
}
}
return vec4(n, t);
}
// Lighting
float shadow(vec3 rayPos, vec3 rayDir)
{
float s = 1.0;
// Intersect sand
//vec4 resSand = intersectSand(rayPos, rayDir);
//if (resSand.w > 0.0) return 0.0;
// Intersect plants
s = min(s, plantsShadow(rayPos, rayDir));
if (s < 0.0001) return 0.0;
// Intersect leaves
vec4 resLeaves = leaves(rayPos, rayDir);
if (resLeaves.w > 0.0 && resLeaves.w < 1e7) return 0.0;
return s;
}
vec3 light(vec3 p, vec3 n)
{
float s = 1.0;
#ifdef SHADOWS
s = shadow(p-sunDir*0.01, -sunDir);
#endif
vec3 col = sunCol * min(max(dot(n, sunDir), 0.0), s);
col += skyCol * (-n.y * 0.5 + 0.5) * 0.3;
return col;
}
vec3 lightLeaves(vec3 p, vec3 n)
{
float s = 1.0;
#ifdef SHADOWS
s = shadow(p-sunDir*0.01, -sunDir);
#endif
float ao = min(length(p - leavesPos) * 0.1, 1.0);
float ns = dot(n, sunDir);
float d = sqrt(max(ns, 0.0));
vec3 col = sunCol * min(d, s);
col += sunCol * max(-ns, 0.0) * vec3(0.3, 0.3, 0.1) * ao;
col += skyCol * (-n.y * 0.5 + 0.5) * 0.3 * ao;
return col;
}
vec3 sky(vec3 n)
{
return skyCol * (1.0 - n.y * 0.8);
}
// Ray-marching
vec4 plants(vec3 rayPos, vec3 rayDir)
{
float t = 0.0;
for (int i = 0; i < 40; i++)
{
vec3 pos = rayPos+rayDir*t;
vec2 res = scene(pos);
float h = res.x;
if (h < 0.001)
{
vec3 col = res.y == 2.0 ? treeCol : grassCol;
float uvFact = res.y == 2.0 ? 1.0 : 10.0;
vec3 n = normal(pos);
vec2 uv = vec2(n.x, pos.y * 0.5) * 0.2 * uvFact;
vec3 tex = texture2D(iChannel0, uv).rgb * 0.6 + 0.4;
float ao = min(length(pos - leavesPos) * 0.1, 1.0);
return vec4(col * light(pos, n) * ao * tex, t);
}
t += h;
}
return vec4(sky(rayDir), 1e8);
}
// Final combination
vec3 traceReflection(vec3 rayPos, vec3 rayDir)
{
vec3 col = vec3(0.0);
float t = 1e20;
// Intersect plants
vec4 resPlants = plants(rayPos, rayDir);
if (resPlants.w > 0.0 && resPlants.w < t)
{
t = resPlants.w;
col = resPlants.xyz;
}
// Intersect leaves
vec4 resLeaves = leaves(rayPos, rayDir);
if (resLeaves.w > 0.0 && resLeaves.w < t)
{
vec3 pos = rayPos + rayDir * resLeaves.w;
vec2 uv = (pos.xz - leavesPos.xz) * 0.3;
float tex = texture2D(iChannel0, uv).r * 0.6 + 0.5;
t = resLeaves.w;
col = leavesCol * lightLeaves(pos, resLeaves.xyz) * tex;
}
if (t > 1e7) return sky(rayDir);
return col;
}
vec3 trace(vec3 rayPos, vec3 rayDir)
{
vec3 col = vec3(0.0);
float t = 1e20;
// Intersect sand
vec4 resSand = intersectSand(rayPos, rayDir);
if (resSand.w > 0.0)
{
vec3 pos = rayPos + rayDir * resSand.w;
t = resSand.w;
col = sandCol * light(pos, resSand.xyz);
}
// Intersect treasure chest
vec4 resTreasure = intersectTreasure(rayPos, rayDir);
if (resTreasure.w > 0.0 && resTreasure.w < t)
{
vec3 pos = rayPos + rayDir * resTreasure.w;
t = resTreasure.w;
col = leavesCol * light(pos, resTreasure.xyz);
}
// Intersect leaves
vec4 resLeaves = leaves(rayPos, rayDir);
if (resLeaves.w > 0.0 && resLeaves.w < t)
{
vec3 pos = rayPos + rayDir * resLeaves.w;
vec2 uv = (pos.xz - leavesPos.xz) * 0.3;
float tex = texture2D(iChannel0, uv).r * 0.6 + 0.5;
t = resLeaves.w;
col = leavesCol * lightLeaves(pos, resLeaves.xyz) * tex;
}
// Intersect plants
vec4 resPlants = plants(rayPos, rayDir);
if (resPlants.w > 0.0 && resPlants.w < t)
{
t = resPlants.w;
col = resPlants.xyz;
}
// Intersect water
vec4 resWater = intersectWater(rayPos, rayDir);
if (resWater.w > 0.0 && resWater.w < t)
{
vec3 pos = rayPos + rayDir * resWater.w;
float dist = t - resWater.w;
vec3 n = bump(pos, rayDir);
float ct = -min(dot(n,rayDir), 0.0);
float fresnel = 0.9 - 0.9 * pow(1.0 - ct, 5.0);
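// Schlick-style Fresnel term used here as the transmission weight:
// it approaches 0.9 head-on and 0.0 at grazing angles, so reflection
// dominates near the horizon.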
vec3 trans = col * exp(-dist * vec3(1.0, 0.7, 0.4) * 3.0);
vec3 reflDir = normalize(reflect(rayDir, n));
vec3 refl = sky(reflDir);
#ifdef REFLECTIONS
if (dot(pos, rayDir) < -2.0)
refl = traceReflection(pos, reflDir).rgb;
#endif
t = resWater.w;
col = mix(refl, trans, fresnel);
}
if (t > 1e7) return sky(rayDir);
return col;
}
// Ray-generation
vec3 camera(vec2 px)
{
vec2 rd = (px / iResolution.yy - vec2(iResolution.x/iResolution.y*0.5-0.5, 0.0)) * 2.0 - 1.0;
float t = sin(iGlobalTime * 0.1) * 0.2;
vec3 rayDir = normalize(vec3(rd.x, rd.y, 1.0));
vec3 rayPos = vec3(0.0, 3.0, -18.0);
return trace(rayPos, rayDir);
}
void main(void)
{
#ifdef HEAVY_AA
vec3 col = camera(gl_FragCoord.xy+vec2(0.0,0.5))*0.25;
col += camera(gl_FragCoord.xy+vec2(0.25,0.0))*0.25;
col += camera(gl_FragCoord.xy+vec2(0.5,0.75))*0.25;
col += camera(gl_FragCoord.xy+vec2(0.75,0.25))*0.25;
#else
vec3 col = camera(gl_FragCoord.xy);
#ifdef LIGHT_AA
col = col * 0.5 + camera(gl_FragCoord.xy+vec2(0.5,0.5))*0.5;
#endif
#endif
#ifdef TONEMAP
// Optimized Haarm-Peter Duikers curve
vec3 x = max(vec3(0.0),col*exposure-0.004);
col = (x*(6.2*x+.5))/(x*(6.2*x+1.7)+0.06);
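// The fitted curve already outputs in (approximately) gamma space,
// which is why this branch skips the explicit gamma pow() applied
// in the #else branch below.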
#else
col = pow(col, vec3(0.4545));
#endif
gl_FragColor = vec4(col, 1.0);
}

samples/GN/BUILD.2.gn Normal file (59 lines)

@@ -0,0 +1,59 @@
# Copyright 2016 the V8 project authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import("../gni/isolate.gni")
group("gn_all") {
testonly = true
if (v8_test_isolation_mode != "noop") {
deps = [
":check-static-initializers_run",
":jsfunfuzz_run",
":run-deopt-fuzzer_run",
":run-gcmole_run",
":run-valgrind_run",
]
}
}
v8_isolate_run("check-static-initializers") {
deps = [
"..:d8_run",
]
isolate = "check-static-initializers.isolate"
}
v8_isolate_run("jsfunfuzz") {
deps = [
"..:d8_run",
]
isolate = "jsfunfuzz/jsfunfuzz.isolate"
}
v8_isolate_run("run-deopt-fuzzer") {
deps = [
"..:d8_run",
]
isolate = "run-deopt-fuzzer.isolate"
}
v8_isolate_run("run-gcmole") {
deps = [
"..:d8_run",
]
isolate = "gcmole/run-gcmole.isolate"
}
v8_isolate_run("run-valgrind") {
deps = [
"..:d8_run",
]
isolate = "run-valgrind.isolate"
}

samples/GN/BUILD.3.gn Normal file (1646 lines)
File diff suppressed because it is too large

samples/GN/BUILD.gn Normal file (2583 lines)
File diff suppressed because it is too large

samples/GN/android-rules.gni Normal file (2781 lines)
File diff suppressed because it is too large

samples/GN/clang.gni Normal file (13 lines)

@@ -0,0 +1,13 @@
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import("//build/toolchain/toolchain.gni")
declare_args() {
# Indicates if the build should use the Chrome-specific plugins for enforcing
# coding guidelines, etc. Only used when compiling with Clang.
clang_use_chrome_plugins = is_clang && !is_nacl && !use_xcode_clang
clang_base_path = "//third_party/llvm-build/Release+Asserts"
}

samples/GN/filenames/.gn Normal file (25 lines)

@@ -0,0 +1,25 @@
# This file is used by the GN meta build system to find the root of the source
# tree and to set startup options. For documentation on the values set in this
# file, run "gn help dotfile" at the command line.
import("//build/dotfile_settings.gni")
# The location of the build configuration file.
buildconfig = "//build/config/BUILDCONFIG.gn"
# The secondary source root is a parallel directory tree where
# GN build files are placed when they can not be placed directly
# in the source tree, e.g. for third party source trees.
secondary_source = "//build/secondary/"
# These are the targets to check headers for by default. The files in targets
# matching these patterns (see "gn help label_pattern" for format) will have
# their includes checked for proper dependencies when you run either
# "gn check" or "gn gen --check".
check_targets = []
# These are the list of GN files that run exec_script. This whitelist exists
# to force additional review for new uses of exec_script, which is strongly
# discouraged except for gypi_to_gn calls.
exec_script_whitelist =
build_dotfile_settings.exec_script_whitelist + [ "//test/test262/BUILD.gn" ]


@@ -0,0 +1,503 @@
# Copyright (c) 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import("//build/config/android/config.gni")
import("//build/config/clang/clang.gni")
import("//build/config/nacl/config.gni")
import("//build/config/sanitizers/sanitizers.gni")
import("//build/config/v8_target_cpu.gni")
import("//build/toolchain/cc_wrapper.gni")
import("//build/toolchain/goma.gni")
import("//build/toolchain/toolchain.gni")
# This template defines a toolchain for something that works like gcc
# (including clang).
#
# It requires the following variables specifying the executables to run:
# - ar
# - cc
# - cxx
# - ld
#
# Optional parameters that control the tools:
#
# - extra_cflags
# Extra flags to be appended when compiling C files (but not C++ files).
# - extra_cppflags
# Extra flags to be appended when compiling both C and C++ files. "CPP"
# stands for "C PreProcessor" in this context, although it can be
# used for non-preprocessor flags as well. Not to be confused with
# "CXX" (which follows).
# - extra_cxxflags
# Extra flags to be appended when compiling C++ files (but not C files).
# - extra_ldflags
# Extra flags to be appended when linking
#
# - libs_section_prefix
# - libs_section_postfix
# The contents of these strings, if specified, will be placed around
# the libs section of the linker line. It allows one to inject libraries
# at the beginning and end for all targets in a toolchain.
# - solink_libs_section_prefix
# - solink_libs_section_postfix
# Same as libs_section_{pre,post}fix except used for solink instead of link.
# - link_outputs
# The content of this array, if specified, will be added to the list of
# outputs from the link command. This can be useful in conjunction with
# the post_link parameter.
# - post_link
# The content of this string, if specified, will be run as a separate
# command following the link command.
# - deps
# Just forwarded to the toolchain definition.
# - executable_extension
# If this string is specified it will be used for the file extension
# for an executable, rather than using no extension; targets will
# still be able to override the extension using the output_extension
# variable.
# - rebuild_define
# The contents of this string, if specified, will be passed as a #define
# to the toolchain. It can be used to force recompiles whenever a
# toolchain is updated.
# - shlib_extension
# If this string is specified it will be used for the file extension
# for a shared library, rather than default value specified in
# toolchain.gni
# - strip
# Location of the strip executable. When specified, strip will be run on
# all shared libraries and executables as they are built. The pre-stripped
# artifacts will be put in lib.unstripped/ and exe.unstripped/.
template("gcc_toolchain") {
toolchain(target_name) {
assert(defined(invoker.ar), "gcc_toolchain() must specify a \"ar\" value")
assert(defined(invoker.cc), "gcc_toolchain() must specify a \"cc\" value")
assert(defined(invoker.cxx), "gcc_toolchain() must specify a \"cxx\" value")
assert(defined(invoker.ld), "gcc_toolchain() must specify a \"ld\" value")
# This define changes when the toolchain changes, forcing a rebuild.
# Nothing should ever use this define.
if (defined(invoker.rebuild_define)) {
rebuild_string = "-D" + invoker.rebuild_define + " "
} else {
rebuild_string = ""
}
# GN's syntax can't handle more than one scope dereference at once, like
# "invoker.toolchain_args.foo", so make a temporary to hold the toolchain
# args so we can do "invoker_toolchain_args.foo".
assert(defined(invoker.toolchain_args),
"Toolchains must specify toolchain_args")
invoker_toolchain_args = invoker.toolchain_args
assert(defined(invoker_toolchain_args.current_cpu),
"toolchain_args must specify a current_cpu")
assert(defined(invoker_toolchain_args.current_os),
"toolchain_args must specify a current_os")
# When invoking this toolchain not as the default one, these args will be
# passed to the build. They are ignored when this is the default toolchain.
toolchain_args = {
# Populate toolchain args from the invoker.
forward_variables_from(invoker_toolchain_args, "*")
# The host toolchain value computed by the default toolchain's setup
# needs to be passed through unchanged to all secondary toolchains to
# ensure that it's always the same, regardless of the values that may be
# set on those toolchains.
host_toolchain = host_toolchain
if (!defined(invoker_toolchain_args.v8_current_cpu)) {
v8_current_cpu = invoker_toolchain_args.current_cpu
}
}
# When the invoker has explicitly overridden use_goma or cc_wrapper in the
# toolchain args, use those values, otherwise default to the global one.
# This works because the only reasonable override that toolchains might
# supply for these values are to force-disable them.
if (defined(toolchain_args.use_goma)) {
toolchain_uses_goma = toolchain_args.use_goma
} else {
toolchain_uses_goma = use_goma
}
if (defined(toolchain_args.cc_wrapper)) {
toolchain_cc_wrapper = toolchain_args.cc_wrapper
} else {
toolchain_cc_wrapper = cc_wrapper
}
# Compute the compiler prefix.
if (toolchain_uses_goma) {
assert(toolchain_cc_wrapper == "",
"Goma and cc_wrapper can't be used together.")
compiler_prefix = "$goma_dir/gomacc "
} else if (toolchain_cc_wrapper != "") {
compiler_prefix = toolchain_cc_wrapper + " "
} else {
compiler_prefix = ""
}
cc = compiler_prefix + invoker.cc
cxx = compiler_prefix + invoker.cxx
ar = invoker.ar
ld = invoker.ld
if (defined(invoker.readelf)) {
readelf = invoker.readelf
} else {
readelf = "readelf"
}
if (defined(invoker.nm)) {
nm = invoker.nm
} else {
nm = "nm"
}
if (defined(invoker.shlib_extension)) {
default_shlib_extension = invoker.shlib_extension
} else {
default_shlib_extension = shlib_extension
}
if (defined(invoker.executable_extension)) {
default_executable_extension = invoker.executable_extension
} else {
default_executable_extension = ""
}
# Bring these into our scope for string interpolation with default values.
if (defined(invoker.libs_section_prefix)) {
libs_section_prefix = invoker.libs_section_prefix
} else {
libs_section_prefix = ""
}
if (defined(invoker.libs_section_postfix)) {
libs_section_postfix = invoker.libs_section_postfix
} else {
libs_section_postfix = ""
}
if (defined(invoker.solink_libs_section_prefix)) {
solink_libs_section_prefix = invoker.solink_libs_section_prefix
} else {
solink_libs_section_prefix = ""
}
if (defined(invoker.solink_libs_section_postfix)) {
solink_libs_section_postfix = invoker.solink_libs_section_postfix
} else {
solink_libs_section_postfix = ""
}
if (defined(invoker.extra_cflags) && invoker.extra_cflags != "") {
extra_cflags = " " + invoker.extra_cflags
} else {
extra_cflags = ""
}
if (defined(invoker.extra_cppflags) && invoker.extra_cppflags != "") {
extra_cppflags = " " + invoker.extra_cppflags
} else {
extra_cppflags = ""
}
if (defined(invoker.extra_cxxflags) && invoker.extra_cxxflags != "") {
extra_cxxflags = " " + invoker.extra_cxxflags
} else {
extra_cxxflags = ""
}
if (defined(invoker.extra_ldflags) && invoker.extra_ldflags != "") {
extra_ldflags = " " + invoker.extra_ldflags
} else {
extra_ldflags = ""
}
# These library switches can apply to all tools below.
lib_switch = "-l"
lib_dir_switch = "-L"
# Object files go in this directory.
object_subdir = "{{target_out_dir}}/{{label_name}}"
tool("cc") {
depfile = "{{output}}.d"
command = "$cc -MMD -MF $depfile ${rebuild_string}{{defines}} {{include_dirs}} {{cflags}} {{cflags_c}}${extra_cppflags}${extra_cflags} -c {{source}} -o {{output}}"
depsformat = "gcc"
description = "CC {{output}}"
outputs = [
# The whitelist file is also an output, but ninja does not
# currently support multiple outputs for tool("cc").
"$object_subdir/{{source_name_part}}.o",
]
if (enable_resource_whitelist_generation) {
compile_wrapper =
rebase_path("//build/toolchain/gcc_compile_wrapper.py",
root_build_dir)
command = "$python_path \"$compile_wrapper\" --resource-whitelist=\"{{output}}.whitelist\" $command"
}
}
tool("cxx") {
depfile = "{{output}}.d"
command = "$cxx -MMD -MF $depfile ${rebuild_string}{{defines}} {{include_dirs}} {{cflags}} {{cflags_cc}}${extra_cppflags}${extra_cxxflags} -c {{source}} -o {{output}}"
depsformat = "gcc"
description = "CXX {{output}}"
outputs = [
# The whitelist file is also an output, but ninja does not
# currently support multiple outputs for tool("cxx").
"$object_subdir/{{source_name_part}}.o",
]
if (enable_resource_whitelist_generation) {
compile_wrapper =
rebase_path("//build/toolchain/gcc_compile_wrapper.py",
root_build_dir)
command = "$python_path \"$compile_wrapper\" --resource-whitelist=\"{{output}}.whitelist\" $command"
}
}
tool("asm") {
# For GCC we can just use the C compiler to compile assembly.
depfile = "{{output}}.d"
command = "$cc -MMD -MF $depfile ${rebuild_string}{{defines}} {{include_dirs}} {{asmflags}} -c {{source}} -o {{output}}"
depsformat = "gcc"
description = "ASM {{output}}"
outputs = [
"$object_subdir/{{source_name_part}}.o",
]
}
tool("alink") {
rspfile = "{{output}}.rsp"
whitelist_flag = " "
if (enable_resource_whitelist_generation) {
whitelist_flag = " --resource-whitelist=\"{{output}}.whitelist\""
}
# This needs a Python script to avoid using simple sh features in this
# command, in case the host does not use a POSIX shell (e.g. compiling
# POSIX-like toolchains such as NaCl on Windows).
ar_wrapper =
rebase_path("//build/toolchain/gcc_ar_wrapper.py", root_build_dir)
command = "$python_path \"$ar_wrapper\"$whitelist_flag --output={{output}} --ar=\"$ar\" {{arflags}} rcsD @\"$rspfile\""
description = "AR {{output}}"
rspfile_content = "{{inputs}}"
outputs = [
"{{output_dir}}/{{target_output_name}}{{output_extension}}",
]
# Static libraries go in the target out directory by default so we can
# generate different targets with the same name and not have them collide.
default_output_dir = "{{target_out_dir}}"
default_output_extension = ".a"
output_prefix = "lib"
}
tool("solink") {
soname = "{{target_output_name}}{{output_extension}}" # e.g. "libfoo.so".
sofile = "{{output_dir}}/$soname" # Possibly including toolchain dir.
rspfile = sofile + ".rsp"
pool = "//build/toolchain:link_pool($default_toolchain)"
whitelist_flag = " "
if (enable_resource_whitelist_generation) {
whitelist_file = "$sofile.whitelist"
whitelist_flag = " --resource-whitelist=\"$whitelist_file\""
}
if (defined(invoker.strip)) {
unstripped_sofile = "{{root_out_dir}}/lib.unstripped/$soname"
} else {
unstripped_sofile = sofile
}
# These variables are not built into GN but are helpers that
# implement (1) linking to produce a .so, (2) extracting the symbols
# from that file, and (3) comparing the extracted list against the
# existing .TOC file: overwrite it if they differ, otherwise leave it alone.
tocfile = sofile + ".TOC"
link_command = "$ld -shared {{ldflags}}${extra_ldflags} -o \"$unstripped_sofile\" -Wl,-soname=\"$soname\" @\"$rspfile\""
assert(defined(readelf), "to solink you must have a readelf")
assert(defined(nm), "to solink you must have an nm")
strip_switch = ""
if (defined(invoker.strip)) {
strip_switch = "--strip=${invoker.strip}"
}
# This needs a Python script to avoid using a complex shell command
# requiring sh control structures, pipelines, and POSIX utilities.
# The host might not have a POSIX shell and utilities (e.g. Windows).
solink_wrapper = rebase_path("//build/toolchain/gcc_solink_wrapper.py")
command = "$python_path \"$solink_wrapper\" --readelf=\"$readelf\" --nm=\"$nm\" $strip_switch --sofile=\"$unstripped_sofile\" --tocfile=\"$tocfile\" --output=\"$sofile\"$whitelist_flag -- $link_command"
rspfile_content = "-Wl,--whole-archive {{inputs}} {{solibs}} -Wl,--no-whole-archive $solink_libs_section_prefix {{libs}} $solink_libs_section_postfix"
description = "SOLINK $sofile"
# Use this for {{output_extension}} expansions unless a target manually
# overrides it (in which case {{output_extension}} will be what the target
# specifies).
default_output_extension = default_shlib_extension
default_output_dir = "{{root_out_dir}}"
if (shlib_subdir != ".") {
default_output_dir += "/$shlib_subdir"
}
output_prefix = "lib"
# Since the above command only updates the .TOC file when it changes, ask
# Ninja to check if the timestamp actually changed to know if downstream
# dependencies should be recompiled.
restat = true
# Tell GN about the output files. It will link to the sofile but use the
# tocfile for dependency management.
outputs = [
sofile,
tocfile,
]
if (enable_resource_whitelist_generation) {
outputs += [ whitelist_file ]
}
if (sofile != unstripped_sofile) {
outputs += [ unstripped_sofile ]
}
link_output = sofile
depend_output = tocfile
}
tool("solink_module") {
soname = "{{target_output_name}}{{output_extension}}" # e.g. "libfoo.so".
sofile = "{{output_dir}}/$soname"
rspfile = sofile + ".rsp"
pool = "//build/toolchain:link_pool($default_toolchain)"
if (defined(invoker.strip)) {
unstripped_sofile = "{{root_out_dir}}/lib.unstripped/$soname"
} else {
unstripped_sofile = sofile
}
command = "$ld -shared {{ldflags}}${extra_ldflags} -o \"$unstripped_sofile\" -Wl,-soname=\"$soname\" @\"$rspfile\""
if (defined(invoker.strip)) {
strip_command = "${invoker.strip} --strip-unneeded -o \"$sofile\" \"$unstripped_sofile\""
command += " && " + strip_command
}
rspfile_content = "-Wl,--whole-archive {{inputs}} {{solibs}} -Wl,--no-whole-archive $solink_libs_section_prefix {{libs}} $solink_libs_section_postfix"
description = "SOLINK_MODULE $sofile"
# Use this for {{output_extension}} expansions unless a target manually
# overrides it (in which case {{output_extension}} will be what the target
# specifies).
if (defined(invoker.loadable_module_extension)) {
default_output_extension = invoker.loadable_module_extension
} else {
default_output_extension = default_shlib_extension
}
default_output_dir = "{{root_out_dir}}"
if (shlib_subdir != ".") {
default_output_dir += "/$shlib_subdir"
}
output_prefix = "lib"
outputs = [
sofile,
]
if (sofile != unstripped_sofile) {
outputs += [ unstripped_sofile ]
}
}
tool("link") {
exename = "{{target_output_name}}{{output_extension}}"
outfile = "{{output_dir}}/$exename"
rspfile = "$outfile.rsp"
unstripped_outfile = outfile
pool = "//build/toolchain:link_pool($default_toolchain)"
# Use this for {{output_extension}} expansions unless a target manually
# overrides it (in which case {{output_extension}} will be what the target
# specifies).
default_output_extension = default_executable_extension
default_output_dir = "{{root_out_dir}}"
if (defined(invoker.strip)) {
unstripped_outfile = "{{root_out_dir}}/exe.unstripped/$exename"
}
command = "$ld {{ldflags}}${extra_ldflags} -o \"$unstripped_outfile\" -Wl,--start-group @\"$rspfile\" {{solibs}} -Wl,--end-group $libs_section_prefix {{libs}} $libs_section_postfix"
if (defined(invoker.strip)) {
link_wrapper =
rebase_path("//build/toolchain/gcc_link_wrapper.py", root_build_dir)
command = "$python_path \"$link_wrapper\" --strip=\"${invoker.strip}\" --unstripped-file=\"$unstripped_outfile\" --output=\"$outfile\" -- $command"
}
description = "LINK $outfile"
rspfile_content = "{{inputs}}"
outputs = [
outfile,
]
if (outfile != unstripped_outfile) {
outputs += [ unstripped_outfile ]
}
if (defined(invoker.link_outputs)) {
outputs += invoker.link_outputs
}
}
# These two are really entirely generic, but have to be repeated in
# each toolchain because GN doesn't allow a template to be used here.
# See //build/toolchain/toolchain.gni for details.
tool("stamp") {
command = stamp_command
description = stamp_description
}
tool("copy") {
command = copy_command
description = copy_description
}
forward_variables_from(invoker, [ "deps" ])
}
}
# This is a shorthand for gcc_toolchain instances based on the Chromium-built
# version of Clang. Only the toolchain_cpu and toolchain_os variables need to
# be specified by the invoker, and optionally toolprefix if it's a
# cross-compile case. Note that for a cross-compile case this toolchain
# requires a config to pass the appropriate -target option, or else it will
# actually just be doing a native compile. The invoker can optionally override
# use_gold too.
template("clang_toolchain") {
if (defined(invoker.toolprefix)) {
toolprefix = invoker.toolprefix
} else {
toolprefix = ""
}
gcc_toolchain(target_name) {
prefix = rebase_path("$clang_base_path/bin", root_build_dir)
cc = "$prefix/clang"
cxx = "$prefix/clang++"
ld = cxx
readelf = "${toolprefix}readelf"
ar = "${toolprefix}ar"
nm = "${toolprefix}nm"
forward_variables_from(invoker, [ "strip" ])
toolchain_args = {
if (defined(invoker.toolchain_args)) {
forward_variables_from(invoker.toolchain_args, "*")
}
is_clang = true
}
}
}
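# A hypothetical instantiation (not part of this file) might look like:
#
#   clang_toolchain("clang_x64") {
#     toolchain_args = {
#       current_cpu = "x64"
#       current_os = "linux"
#     }
#   }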

samples/GN/icu.gn Normal file (235 lines)

@@ -0,0 +1,235 @@
# Copyright 2016 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import("//build/config/linux/pkg_config.gni")
import("//build/shim_headers.gni")
group("icu") {
public_deps = [
":icui18n",
":icuuc",
]
}
config("icu_config") {
defines = [
"USING_SYSTEM_ICU=1",
"ICU_UTIL_DATA_IMPL=ICU_UTIL_DATA_STATIC",
]
}
pkg_config("system_icui18n") {
packages = [ "icu-i18n" ]
}
pkg_config("system_icuuc") {
packages = [ "icu-uc" ]
}
source_set("icui18n") {
deps = [
":icui18n_shim",
]
public_configs = [
":icu_config",
":system_icui18n",
]
}
source_set("icuuc") {
deps = [
":icuuc_shim",
]
public_configs = [
":icu_config",
":system_icuuc",
]
}
shim_headers("icui18n_shim") {
root_path = "source/i18n"
headers = [
# This list can easily be updated using the command below:
# find third_party/icu/source/i18n/unicode \
# -iname '*.h' -printf '"%p",\n' | \
# sed -e 's|third_party/icu/source/i18n/||' | sort -u
"unicode/alphaindex.h",
"unicode/basictz.h",
"unicode/calendar.h",
"unicode/choicfmt.h",
"unicode/coleitr.h",
"unicode/coll.h",
"unicode/compactdecimalformat.h",
"unicode/curramt.h",
"unicode/currpinf.h",
"unicode/currunit.h",
"unicode/datefmt.h",
"unicode/dcfmtsym.h",
"unicode/decimfmt.h",
"unicode/dtfmtsym.h",
"unicode/dtitvfmt.h",
"unicode/dtitvinf.h",
"unicode/dtptngen.h",
"unicode/dtrule.h",
"unicode/fieldpos.h",
"unicode/fmtable.h",
"unicode/format.h",
"unicode/fpositer.h",
"unicode/gender.h",
"unicode/gregocal.h",
"unicode/locdspnm.h",
"unicode/measfmt.h",
"unicode/measunit.h",
"unicode/measure.h",
"unicode/msgfmt.h",
"unicode/numfmt.h",
"unicode/numsys.h",
"unicode/plurfmt.h",
"unicode/plurrule.h",
"unicode/rbnf.h",
"unicode/rbtz.h",
"unicode/regex.h",
"unicode/region.h",
"unicode/reldatefmt.h",
"unicode/scientificnumberformatter.h",
"unicode/search.h",
"unicode/selfmt.h",
"unicode/simpletz.h",
"unicode/smpdtfmt.h",
"unicode/sortkey.h",
"unicode/stsearch.h",
"unicode/tblcoll.h",
"unicode/timezone.h",
"unicode/tmunit.h",
"unicode/tmutamt.h",
"unicode/tmutfmt.h",
"unicode/translit.h",
"unicode/tzfmt.h",
"unicode/tznames.h",
"unicode/tzrule.h",
"unicode/tztrans.h",
"unicode/ucal.h",
"unicode/ucol.h",
"unicode/ucoleitr.h",
"unicode/ucsdet.h",
"unicode/ucurr.h",
"unicode/udat.h",
"unicode/udateintervalformat.h",
"unicode/udatpg.h",
"unicode/udisplaycontext.h",
"unicode/ufieldpositer.h",
"unicode/uformattable.h",
"unicode/ugender.h",
"unicode/uldnames.h",
"unicode/ulocdata.h",
"unicode/umsg.h",
"unicode/unirepl.h",
"unicode/unum.h",
"unicode/unumsys.h",
"unicode/upluralrules.h",
"unicode/uregex.h",
"unicode/uregion.h",
"unicode/usearch.h",
"unicode/uspoof.h",
"unicode/utmscale.h",
"unicode/utrans.h",
"unicode/vtzone.h",
]
}
shim_headers("icuuc_shim") {
root_path = "source/common"
headers = [
# This list can easily be updated using the command below:
# find third_party/icu/source/common/unicode \
# -iname '*.h' -printf '"%p",\n' | \
# sed -e 's|third_party/icu/source/common/||' | sort -u
"unicode/appendable.h",
"unicode/brkiter.h",
"unicode/bytestream.h",
"unicode/bytestrie.h",
"unicode/bytestriebuilder.h",
"unicode/caniter.h",
"unicode/chariter.h",
"unicode/dbbi.h",
"unicode/docmain.h",
"unicode/dtintrv.h",
"unicode/enumset.h",
"unicode/errorcode.h",
"unicode/filteredbrk.h",
"unicode/icudataver.h",
"unicode/icuplug.h",
"unicode/idna.h",
"unicode/listformatter.h",
"unicode/localpointer.h",
"unicode/locid.h",
"unicode/messagepattern.h",
"unicode/normalizer2.h",
"unicode/normlzr.h",
"unicode/parseerr.h",
"unicode/parsepos.h",
"unicode/platform.h",
"unicode/ptypes.h",
"unicode/putil.h",
"unicode/rbbi.h",
"unicode/rep.h",
"unicode/resbund.h",
"unicode/schriter.h",
"unicode/std_string.h",
"unicode/strenum.h",
"unicode/stringpiece.h",
"unicode/stringtriebuilder.h",
"unicode/symtable.h",
"unicode/ubidi.h",
"unicode/ubrk.h",
"unicode/ucasemap.h",
"unicode/ucat.h",
"unicode/uchar.h",
"unicode/ucharstrie.h",
"unicode/ucharstriebuilder.h",
"unicode/uchriter.h",
"unicode/uclean.h",
"unicode/ucnv.h",
"unicode/ucnv_cb.h",
"unicode/ucnv_err.h",
"unicode/ucnvsel.h",
"unicode/uconfig.h",
"unicode/udata.h",
"unicode/uenum.h",
"unicode/uidna.h",
"unicode/uiter.h",
"unicode/ulistformatter.h",
"unicode/uloc.h",
"unicode/umachine.h",
"unicode/umisc.h",
"unicode/unifilt.h",
"unicode/unifunct.h",
"unicode/unimatch.h",
"unicode/uniset.h",
"unicode/unistr.h",
"unicode/unorm.h",
"unicode/unorm2.h",
"unicode/uobject.h",
"unicode/urename.h",
"unicode/urep.h",
"unicode/ures.h",
"unicode/uscript.h",
"unicode/uset.h",
"unicode/usetiter.h",
"unicode/ushape.h",
"unicode/usprep.h",
"unicode/ustring.h",
"unicode/ustringtrie.h",
"unicode/utext.h",
"unicode/utf.h",
"unicode/utf16.h",
"unicode/utf32.h",
"unicode/utf8.h",
"unicode/utf_old.h",
"unicode/utrace.h",
"unicode/utypes.h",
"unicode/uvernum.h",
"unicode/uversion.h",
]
}

File diff suppressed because it is too large

samples/GN/ios-rules.gni Normal file (1422 lines)
File diff suppressed because it is too large

samples/GN/isolate.gni Normal file (193 lines)

@@ -0,0 +1,193 @@
# Copyright 2016 the V8 project authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import("//build/config/sanitizers/sanitizers.gni")
import("//third_party/icu/config.gni")
import("v8.gni")
declare_args() {
# Sets the test isolation mode (noop|prepare|check).
v8_test_isolation_mode = "noop"
}
template("v8_isolate_run") {
forward_variables_from(invoker,
"*",
[
"deps",
"isolate",
])
# Remember target name as within the action scope the target name will be
# different.
name = target_name
assert(defined(invoker.deps))
assert(defined(invoker.isolate))
if (name != "" && v8_test_isolation_mode != "noop") {
action(name + "_run") {
testonly = true
deps = invoker.deps
script = "//tools/isolate_driver.py"
sources = [
invoker.isolate,
]
inputs = [
# Files that are known to be involved in this step.
"//tools/swarming_client/isolate.py",
"//tools/swarming_client/run_isolated.py",
]
if (v8_test_isolation_mode == "prepare") {
outputs = [
"$root_out_dir/$name.isolated.gen.json",
]
} else if (v8_test_isolation_mode == "check") {
outputs = [
"$root_out_dir/$name.isolated",
"$root_out_dir/$name.isolated.state",
]
}
# Translate gn to gyp variables.
if (is_asan) {
asan = "1"
} else {
asan = "0"
}
if (is_msan) {
msan = "1"
} else {
msan = "0"
}
if (is_tsan) {
tsan = "1"
} else {
tsan = "0"
}
if (is_cfi) {
cfi_vptr = "1"
} else {
cfi_vptr = "0"
}
if (target_cpu == "x86") {
target_arch = "ia32"
} else {
target_arch = target_cpu
}
if (is_debug) {
configuration_name = "Debug"
} else {
configuration_name = "Release"
}
if (is_component_build) {
component = "shared_library"
} else {
component = "static_library"
}
if (icu_use_data_file) {
icu_use_data_file_flag = "1"
} else {
icu_use_data_file_flag = "0"
}
if (v8_enable_inspector) {
enable_inspector = "1"
} else {
enable_inspector = "0"
}
if (v8_use_external_startup_data) {
use_external_startup_data = "1"
} else {
use_external_startup_data = "0"
}
if (v8_use_snapshot) {
use_snapshot = "true"
} else {
use_snapshot = "false"
}
if (v8_has_valgrind) {
has_valgrind = "1"
} else {
has_valgrind = "0"
}
if (v8_gcmole) {
gcmole = "1"
} else {
gcmole = "0"
}
# Note, all paths will be rebased in isolate_driver.py to be relative to
# the isolate file.
args = [
v8_test_isolation_mode,
"--isolated",
rebase_path("$root_out_dir/$name.isolated", root_build_dir),
"--isolate",
rebase_path(invoker.isolate, root_build_dir),
# Path variables are used to replace file paths when loading a .isolate
# file
"--path-variable",
"DEPTH",
rebase_path("//", root_build_dir),
"--path-variable",
"PRODUCT_DIR",
rebase_path(root_out_dir, root_build_dir),
# TODO(machenbach): Set variables for remaining features.
"--config-variable",
"CONFIGURATION_NAME=$configuration_name",
"--config-variable",
"OS=$target_os",
"--config-variable",
"asan=$asan",
"--config-variable",
"cfi_vptr=$cfi_vptr",
"--config-variable",
"gcmole=$gcmole",
"--config-variable",
"has_valgrind=$has_valgrind",
"--config-variable",
"icu_use_data_file_flag=$icu_use_data_file_flag",
"--config-variable",
"is_gn=1",
"--config-variable",
"msan=$msan",
"--config-variable",
"tsan=$tsan",
"--config-variable",
"coverage=0",
"--config-variable",
"sanitizer_coverage=0",
"--config-variable",
"component=$component",
"--config-variable",
"target_arch=$target_arch",
"--config-variable",
"v8_enable_inspector=$enable_inspector",
"--config-variable",
"v8_use_external_startup_data=$use_external_startup_data",
"--config-variable",
"v8_use_snapshot=$use_snapshot",
]
if (is_win) {
args += [
"--config-variable",
"msvs_version=2015",
]
} else {
args += [
"--config-variable",
"msvs_version=0",
]
}
}
}
}

samples/Genie/Class.gs Normal file (12 lines)

@@ -0,0 +1,12 @@
init
	new Demo( "Demonstration class" ).run()

class Demo
	_message:string = ""

	construct ( message:string = "Optional argument - no message passed in constructor" )
		_message = message

	def run()
		print( _message )

samples/Genie/Hello.gs Normal file (2 lines)

@@ -0,0 +1,2 @@
init
	print( "Hello, World!" )

samples/HCL/main.tf Normal file (135 lines)

@@ -0,0 +1,135 @@
resource "aws_security_group" "elb_sec_group" {
description = "Allow traffic from the internet to ELB port 80"
vpc_id = "${var.vpc_id}"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["${split(",", var.allowed_cidr_blocks)}"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "dokku_allow_ssh_from_internal" {
description = "Allow git access over ssh from the private subnet"
vpc_id = "${var.vpc_id}"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["${var.private_subnet_cidr}"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "allow_from_elb_to_instance" {
description = "Allow traffic from the ELB to the private instance"
vpc_id = "${var.vpc_id}"
ingress {
security_groups = ["${aws_security_group.elb_sec_group.id}"]
from_port = 80
to_port = 80
protocol = "tcp"
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "dokku" {
ami = "ami-47a23a30"
instance_type = "${var.instance_type}"
associate_public_ip_address = false
key_name = "${var.key_name}"
subnet_id = "${var.private_subnet_id}"
vpc_security_group_ids = [
"${var.bastion_sec_group_id}",
"${aws_security_group.allow_from_elb_to_instance.id}",
"${aws_security_group.dokku_allow_ssh_from_internal.id}"
]
tags {
Name = "${var.name}"
}
connection {
user = "ubuntu"
private_key = "${var.private_key}"
bastion_host = "${var.bastion_host}"
bastion_port = "${var.bastion_port}"
bastion_user = "${var.bastion_user}"
bastion_private_key = "${var.bastion_private_key}"
}
provisioner "file" {
source = "${path.module}/../scripts/install-dokku.sh"
destination = "/home/ubuntu/install-dokku.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /home/ubuntu/install-dokku.sh",
"HOSTNAME=${var.hostname} /home/ubuntu/install-dokku.sh"
]
}
}
resource "aws_elb" "elb_dokku" {
name = "elb-dokku-${var.name}"
subnets = ["${var.public_subnet_id}"]
security_groups = ["${aws_security_group.elb_sec_group.id}"]
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "HTTP:80/"
interval = 30
}
instances = ["${aws_instance.dokku.id}"]
cross_zone_load_balancing = false
idle_timeout = 400
tags {
Name = "elb-dokku-${var.name}"
}
}
resource "aws_route53_record" "dokku-deploy" {
zone_id = "${var.zone_id}"
name = "deploy.${var.hostname}"
type = "A"
ttl = "300"
records = ["${aws_instance.dokku.private_ip}"]
}
resource "aws_route53_record" "dokku-wildcard" {
zone_id = "${var.zone_id}"
name = "*.${var.hostname}"
type = "CNAME"
ttl = "300"
records = ["${aws_elb.elb_dokku.dns_name}"]
}

samples/HLSL/bloom.cginc Normal file (89 lines)

@@ -0,0 +1,89 @@
// From https://github.com/Unity-Technologies/PostProcessing/blob/master/PostProcessing/Resources/Shaders/Bloom.cginc
// Licensed under the MIT license
#ifndef __BLOOM__
#define __BLOOM__
#include "Common.cginc"
// Brightness function
half Brightness(half3 c)
{
return Max3(c);
}
// 3-tap median filter
half3 Median(half3 a, half3 b, half3 c)
{
return a + b + c - min(min(a, b), c) - max(max(a, b), c);
}
// Downsample with a 4x4 box filter
half3 DownsampleFilter(sampler2D tex, float2 uv, float2 texelSize)
{
float4 d = texelSize.xyxy * float4(-1.0, -1.0, 1.0, 1.0);
half3 s;
s = DecodeHDR(tex2D(tex, uv + d.xy));
s += DecodeHDR(tex2D(tex, uv + d.zy));
s += DecodeHDR(tex2D(tex, uv + d.xw));
s += DecodeHDR(tex2D(tex, uv + d.zw));
return s * (1.0 / 4.0);
}
// Downsample with a 4x4 box filter + anti-flicker filter
half3 DownsampleAntiFlickerFilter(sampler2D tex, float2 uv, float2 texelSize)
{
float4 d = texelSize.xyxy * float4(-1.0, -1.0, 1.0, 1.0);
half3 s1 = DecodeHDR(tex2D(tex, uv + d.xy));
half3 s2 = DecodeHDR(tex2D(tex, uv + d.zy));
half3 s3 = DecodeHDR(tex2D(tex, uv + d.xw));
half3 s4 = DecodeHDR(tex2D(tex, uv + d.zw));
// Karis's luma weighted average (using brightness instead of luma)
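// Weighting each tap by 1/(1 + brightness) down-weights isolated bright
// texels, suppressing the "fireflies" that a plain box average would
// let flicker from frame to frame.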
half s1w = 1.0 / (Brightness(s1) + 1.0);
half s2w = 1.0 / (Brightness(s2) + 1.0);
half s3w = 1.0 / (Brightness(s3) + 1.0);
half s4w = 1.0 / (Brightness(s4) + 1.0);
half one_div_wsum = 1.0 / (s1w + s2w + s3w + s4w);
return (s1 * s1w + s2 * s2w + s3 * s3w + s4 * s4w) * one_div_wsum;
}
half3 UpsampleFilter(sampler2D tex, float2 uv, float2 texelSize, float sampleScale)
{
#if MOBILE_OR_CONSOLE
// 4-tap bilinear upsampler
float4 d = texelSize.xyxy * float4(-1.0, -1.0, 1.0, 1.0) * (sampleScale * 0.5);
half3 s;
s = DecodeHDR(tex2D(tex, uv + d.xy));
s += DecodeHDR(tex2D(tex, uv + d.zy));
s += DecodeHDR(tex2D(tex, uv + d.xw));
s += DecodeHDR(tex2D(tex, uv + d.zw));
return s * (1.0 / 4.0);
#else
// 9-tap bilinear upsampler (tent filter)
float4 d = texelSize.xyxy * float4(1.0, 1.0, -1.0, 0.0) * sampleScale;
half3 s;
s = DecodeHDR(tex2D(tex, uv - d.xy));
s += DecodeHDR(tex2D(tex, uv - d.wy)) * 2.0;
s += DecodeHDR(tex2D(tex, uv - d.zy));
s += DecodeHDR(tex2D(tex, uv + d.zw)) * 2.0;
s += DecodeHDR(tex2D(tex, uv)) * 4.0;
s += DecodeHDR(tex2D(tex, uv + d.xw)) * 2.0;
s += DecodeHDR(tex2D(tex, uv + d.zy));
s += DecodeHDR(tex2D(tex, uv + d.wy)) * 2.0;
s += DecodeHDR(tex2D(tex, uv + d.xy));
return s * (1.0 / 16.0);
#endif
}
#endif // __BLOOM__


@@ -0,0 +1,48 @@
{% from "forms.html" import label as description %}
{% macro field(name, value='', type='text') %}
<div class="field">
<input type="{{ type }}" name="{{ name }}"
value="{{ value | escape }}" />
</div>
{% endmacro %}
<html>
<head>
{% extends "head.html" %}
</head>
<body>
{% if horse %}
Chuck Norris once kicked a horse in the chin. Its descendants are known today as Giraffes.
{% elif optimus %}
Chuck Norris once urinated in a semi truck's gas tank as a joke....that truck is now known as Optimus Prime.
{% else %}
Chuck Norris threw a grenade and killed 50 people, then the grenade exploded.
{% endif %}
{% block left %}
This is the left side!
{% endblock %}
{% block right %}
This is the right side!
{% endblock %}
{{ description('Username') }}
{{ field('user') }}
{{ field('pass', type='password') }}
<h1>Posts</h1>
<ul>
{% for item in items %}
<li>{{ item.title }}</li>
{% else %}
<li>This would display if the 'item' collection were empty</li>
{% endfor %}
</ul>
{# Don't escape foo #}
{{ foo | safe }}
</body>
</html>


@@ -0,0 +1,6 @@
{
"presets": [
"es2015",
"es2016"
]
}


@@ -0,0 +1,923 @@
/* generated by jison-lex 0.3.4-159 */
var ccalcLex = (function () {
// See also:
// http://stackoverflow.com/questions/1382107/whats-a-good-way-to-extend-error-in-javascript/#35881508
// but we keep the prototype.constructor and prototype.name assignment lines too for compatibility
// with userland code which might access the derived class in a 'classic' way.
function JisonLexerError(msg, hash) {
Object.defineProperty(this, 'name', {
enumerable: false,
writable: false,
value: 'JisonLexerError'
});
if (msg == null) msg = '???';
Object.defineProperty(this, 'message', {
enumerable: false,
writable: true,
value: msg
});
this.hash = hash;
var stacktrace;
if (hash && hash.exception instanceof Error) {
var ex2 = hash.exception;
this.message = ex2.message || msg;
stacktrace = ex2.stack;
}
if (!stacktrace) {
if (Error.hasOwnProperty('captureStackTrace')) { // V8
Error.captureStackTrace(this, this.constructor);
} else {
stacktrace = (new Error(msg)).stack;
}
}
if (stacktrace) {
Object.defineProperty(this, 'stack', {
enumerable: false,
writable: false,
value: stacktrace
});
}
}
if (typeof Object.setPrototypeOf === 'function') {
Object.setPrototypeOf(JisonLexerError.prototype, Error.prototype);
} else {
JisonLexerError.prototype = Object.create(Error.prototype);
}
JisonLexerError.prototype.constructor = JisonLexerError;
JisonLexerError.prototype.name = 'JisonLexerError';
var lexer = {
EOF: 1,
ERROR: 2,
// JisonLexerError: JisonLexerError, // <-- injected by the code generator
// options: {}, // <-- injected by the code generator
// yy: ..., // <-- injected by setInput()
__currentRuleSet__: null, // <-- internal rule set cache for the current lexer state
__error_infos: [], // INTERNAL USE ONLY: the set of lexErrorInfo objects created since the last cleanup
__decompressed: false, // INTERNAL USE ONLY: mark whether the lexer instance has been 'unfolded' completely and is now ready for use
done: false, // INTERNAL USE ONLY
_backtrack: false, // INTERNAL USE ONLY
_input: '', // INTERNAL USE ONLY
_more: false, // INTERNAL USE ONLY
_signaled_error_token: false, // INTERNAL USE ONLY
conditionStack: [], // INTERNAL USE ONLY; managed via `pushState()`, `popState()`, `topState()` and `stateStackSize()`
match: '', // READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: tracks input which has been matched so far for the lexer token under construction. `match` is identical to `yytext` except that this one still contains the matched input string after `lexer.performAction()` has been invoked, where userland code MAY have changed/replaced the `yytext` value entirely!
matched: '', // READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: tracks entire input which has been matched so far
matches: false, // READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: tracks RE match result for last (successful) match attempt
yytext: '', // ADVANCED USE ONLY: tracks input which has been matched so far for the lexer token under construction; this value is transferred to the parser as the 'token value' when the parser consumes the lexer token produced through a call to the `lex()` API.
offset: 0, // READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: tracks the 'cursor position' in the input string, i.e. the number of characters matched so far
yyleng: 0, // READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: length of matched input for the token under construction (`yytext`)
yylineno: 0, // READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: 'line number' at which the token under construction is located
yylloc: null, // READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: tracks location info (lines + columns) for the token under construction
// INTERNAL USE: construct a suitable error info hash object instance for `parseError`.
constructLexErrorInfo: function lexer_constructLexErrorInfo(msg, recoverable) {
var pei = {
errStr: msg,
recoverable: !!recoverable,
text: this.match, // This one MAY be empty; userland code should use the `upcomingInput` API to obtain more text which follows the 'lexer cursor position'...
token: null,
line: this.yylineno,
loc: this.yylloc,
yy: this.yy,
lexer: this,
// and make sure the error info doesn't stay due to potential
// ref cycle via userland code manipulations.
// These would otherwise all be memory leak opportunities!
//
// Note that only array and object references are nuked as those
// constitute the set of elements which can produce a cyclic ref.
// The rest of the members is kept intact as they are harmless.
destroy: function destructLexErrorInfo() {
// remove cyclic references added to error info:
// info.yy = null;
// info.lexer = null;
// ...
var rec = !!this.recoverable;
for (var key in this) {
if (this.hasOwnProperty(key) && typeof this[key] === 'object') {
this[key] = undefined;
}
}
this.recoverable = rec;
}
};
// track this instance so we can `destroy()` it once we deem it superfluous and ready for garbage collection!
this.__error_infos.push(pei);
return pei;
},
parseError: function lexer_parseError(str, hash) {
if (this.yy.parser && typeof this.yy.parser.parseError === 'function') {
return this.yy.parser.parseError(str, hash) || this.ERROR;
} else if (typeof this.yy.parseError === 'function') {
return this.yy.parseError.call(this, str, hash) || this.ERROR;
} else {
throw new this.JisonLexerError(str);
}
},
// final cleanup function for when we have completed lexing the input;
// make it an API so that external code can use this one once userland
// code has decided it's time to destroy any lingering lexer error
// hash object instances and the like: this function helps to clean
// up these constructs, which *may* carry cyclic references which would
// otherwise prevent the instances from being properly and timely
// garbage-collected, i.e. this function helps prevent memory leaks!
cleanupAfterLex: function lexer_cleanupAfterLex(do_not_nuke_errorinfos) {
var rv;
// prevent lingering circular references from causing memory leaks:
this.setInput('', {});
// nuke the error hash info instances created during this run.
// Userland code must COPY any data/references
// in the error hash instance(s) it is more permanently interested in.
if (!do_not_nuke_errorinfos) {
for (var i = this.__error_infos.length - 1; i >= 0; i--) {
var el = this.__error_infos[i];
if (el && typeof el.destroy === 'function') {
el.destroy();
}
}
this.__error_infos.length = 0;
}
return this;
},
// clear the lexer token context; intended for internal use only
clear: function lexer_clear() {
this.yytext = '';
this.yyleng = 0;
this.match = '';
this.matches = false;
this._more = false;
this._backtrack = false;
},
// resets the lexer, sets new input
setInput: function lexer_setInput(input, yy) {
this.yy = yy || this.yy || {};
// also check if we've fully initialized the lexer instance,
// including expansion work to be done to go from a loaded
// lexer to a usable lexer:
if (!this.__decompressed) {
// step 1: decompress the regex list:
var rules = this.rules;
for (var i = 0, len = rules.length; i < len; i++) {
var rule_re = rules[i];
// compression: is the RE an xref to another RE slot in the rules[] table?
if (typeof rule_re === 'number') {
rules[i] = rules[rule_re];
}
}
// step 2: unfold the conditions[] set to make these ready for use:
var conditions = this.conditions;
for (var k in conditions) {
var spec = conditions[k];
var rule_ids = spec.rules;
var len = rule_ids.length;
var rule_regexes = new Array(len + 1); // slot 0 is unused; we use a 1-based index approach here to keep the hottest code in `lexer_next()` fast and simple!
var rule_new_ids = new Array(len + 1);
if (this.rules_prefix1) {
var rule_prefixes = new Array(65536);
var first_catch_all_index = 0;
for (var i = 0; i < len; i++) {
var idx = rule_ids[i];
var rule_re = rules[idx];
rule_regexes[i + 1] = rule_re;
rule_new_ids[i + 1] = idx;
var prefix = this.rules_prefix1[idx];
// compression: is the PREFIX-STRING an xref to another PREFIX-STRING slot in the rules_prefix1[] table?
if (typeof prefix === 'number') {
prefix = this.rules_prefix1[prefix];
}
// init the prefix lookup table: first come, first serve...
if (!prefix) {
if (!first_catch_all_index) {
first_catch_all_index = i + 1;
}
} else {
for (var j = 0, pfxlen = prefix.length; j < pfxlen; j++) {
var pfxch = prefix.charCodeAt(j);
// first come, first serve:
if (!rule_prefixes[pfxch]) {
rule_prefixes[pfxch] = i + 1;
}
}
}
}
// if no catch-all prefix has been encountered yet, it means all
// rules have limited prefix sets and it MAY be that particular
// input characters won't be recognized by any rule in this
// condition state.
//
// To speed up their discovery at run-time while keeping the
// remainder of the lexer kernel code very simple (and fast),
// we point these to an 'illegal' rule set index *beyond*
// the end of the rule set.
if (!first_catch_all_index) {
first_catch_all_index = len + 1;
}
for (var i = 0; i < 65536; i++) {
if (!rule_prefixes[i]) {
rule_prefixes[i] = first_catch_all_index;
}
}
spec.__dispatch_lut = rule_prefixes;
} else {
for (var i = 0; i < len; i++) {
var idx = rule_ids[i];
var rule_re = rules[idx];
rule_regexes[i + 1] = rule_re;
rule_new_ids[i + 1] = idx;
}
}
spec.rules = rule_new_ids;
spec.__rule_regexes = rule_regexes;
spec.__rule_count = len;
}
this.__decompressed = true;
}
this._input = input || '';
this.clear();
this._signaled_error_token = false;
this.done = false;
this.yylineno = 0;
this.matched = '';
this.conditionStack = ['INITIAL'];
this.__currentRuleSet__ = null;
this.yylloc = {
first_line: 1,
first_column: 0,
last_line: 1,
last_column: 0
};
if (this.options.ranges) {
this.yylloc.range = [0, 0];
}
this.offset = 0;
return this;
},
// consumes and returns one char from the input
input: function lexer_input() {
if (!this._input) {
this.done = true;
return null;
}
var ch = this._input[0];
this.yytext += ch;
this.yyleng++;
this.offset++;
this.match += ch;
this.matched += ch;
// Count the linenumber up when we hit the LF (or a stand-alone CR).
// On CRLF, the linenumber is incremented when you fetch the CR or the CRLF combo
// and we advance immediately past the LF as well, returning both together as if
// it was all a single 'character' only.
var slice_len = 1;
var lines = false;
if (ch === '\n') {
lines = true;
} else if (ch === '\r') {
lines = true;
var ch2 = this._input[1];
if (ch2 === '\n') {
slice_len++;
ch += ch2;
this.yytext += ch2;
this.yyleng++;
this.offset++;
this.match += ch2;
this.matched += ch2;
if (this.options.ranges) {
this.yylloc.range[1]++;
}
}
}
if (lines) {
this.yylineno++;
this.yylloc.last_line++;
} else {
this.yylloc.last_column++;
}
if (this.options.ranges) {
this.yylloc.range[1]++;
}
this._input = this._input.slice(slice_len);
return ch;
},
// unshifts one char (or a string) into the input
unput: function lexer_unput(ch) {
var len = ch.length;
var lines = ch.split(/(?:\r\n?|\n)/g);
this._input = ch + this._input;
this.yytext = this.yytext.substr(0, this.yytext.length - len);
//this.yyleng -= len;
this.offset -= len;
var oldLines = this.match.split(/(?:\r\n?|\n)/g);
this.match = this.match.substr(0, this.match.length - len);
this.matched = this.matched.substr(0, this.matched.length - len);
if (lines.length - 1) {
this.yylineno -= lines.length - 1;
}
this.yylloc.last_line = this.yylineno + 1;
this.yylloc.last_column = (lines ?
(lines.length === oldLines.length ? this.yylloc.first_column : 0)
+ oldLines[oldLines.length - lines.length].length - lines[0].length :
this.yylloc.first_column - len);
if (this.options.ranges) {
this.yylloc.range[1] = this.yylloc.range[0] + this.yyleng - len;
}
this.yyleng = this.yytext.length;
this.done = false;
return this;
},
// When called from action, caches matched text and appends it on next action
more: function lexer_more() {
this._more = true;
return this;
},
// When called from action, signals the lexer that this rule fails to match the input, so the next matching rule (regex) should be tested instead.
reject: function lexer_reject() {
if (this.options.backtrack_lexer) {
this._backtrack = true;
} else {
// when the parseError() call returns, we MUST ensure that the error is registered.
// We accomplish this by signaling an 'error' token to be produced for the current
// .lex() run.
var p = this.constructLexErrorInfo('Lexical error on line ' + (this.yylineno + 1) + '. You can only invoke reject() in the lexer when the lexer is of the backtracking persuasion (options.backtrack_lexer = true).\n' + this.showPosition(), false);
this._signaled_error_token = (this.parseError(p.errStr, p) || this.ERROR);
}
return this;
},
// retain first n characters of the match
less: function lexer_less(n) {
return this.unput(this.match.slice(n));
},
// return (part of the) already matched input, i.e. for error messages.
// Limit the returned string length to `maxSize` (default: 20).
// Limit the returned string to the `maxLines` number of lines of input (default: 1).
// Negative limit values equal *unlimited*.
pastInput: function lexer_pastInput(maxSize, maxLines) {
var past = this.matched.substring(0, this.matched.length - this.match.length);
if (maxSize < 0)
maxSize = past.length;
else if (!maxSize)
maxSize = 20;
if (maxLines < 0)
maxLines = past.length; // can't ever have more input lines than this!
else if (!maxLines)
maxLines = 1;
// `substr` anticipation: treat \r\n as a single character and take a little
// more than necessary so that we can still properly check against maxSize
// after we've transformed and limited the newLines in here:
past = past.substr(-maxSize * 2 - 2);
// now that we have a significantly reduced string to process, transform the newlines
// and chop them, then limit them:
var a = past.replace(/\r\n|\r/g, '\n').split('\n');
a = a.slice(-maxLines);
past = a.join('\n');
// When, after limiting to maxLines, we still have too much to return,
// do add an ellipsis prefix...
if (past.length > maxSize) {
past = '...' + past.substr(-maxSize);
}
return past;
},
// return (part of the) upcoming input, i.e. for error messages.
// Limit the returned string length to `maxSize` (default: 20).
// Limit the returned string to the `maxLines` number of lines of input (default: 1).
// Negative limit values equal *unlimited*.
upcomingInput: function lexer_upcomingInput(maxSize, maxLines) {
var next = this.match;
if (maxSize < 0)
maxSize = next.length + this._input.length;
else if (!maxSize)
maxSize = 20;
if (maxLines < 0)
maxLines = maxSize; // can't ever have more input lines than this!
else if (!maxLines)
maxLines = 1;
// `substring` anticipation: treat \r\n as a single character and take a little
// more than necessary so that we can still properly check against maxSize
// after we've transformed and limited the newLines in here:
if (next.length < maxSize * 2 + 2) {
next += this._input.substring(0, maxSize * 2 + 2); // substring is faster on Chrome/V8
}
// now that we have a significantly reduced string to process, transform the newlines
// and chop them, then limit them:
var a = next.replace(/\r\n|\r/g, '\n').split('\n');
a = a.slice(0, maxLines);
next = a.join('\n');
// When, after limiting to maxLines, we still have too much to return,
// do add an ellipsis postfix...
if (next.length > maxSize) {
next = next.substring(0, maxSize) + '...';
}
return next;
},
// return a string which displays the character position where the lexing error occurred, i.e. for error messages
showPosition: function lexer_showPosition(maxPrefix, maxPostfix) {
var pre = this.pastInput(maxPrefix).replace(/\s/g, ' ');
var c = new Array(pre.length + 1).join('-');
return pre + this.upcomingInput(maxPostfix).replace(/\s/g, ' ') + '\n' + c + '^';
},
// helper function, used to produce a human readable description as a string, given
// the input `yylloc` location object.
// Set `display_range_too` to TRUE to include the string character index position(s)
// in the description if the `yylloc.range` is available.
describeYYLLOC: function lexer_describe_yylloc(yylloc, display_range_too) {
var l1 = yylloc.first_line;
var l2 = yylloc.last_line;
var o1 = yylloc.first_column;
var o2 = yylloc.last_column - 1;
var dl = l2 - l1;
var d_o = (dl === 0 ? o2 - o1 : 1000);
var rv;
if (dl === 0) {
rv = 'line ' + l1 + ', ';
if (d_o === 0) {
rv += 'column ' + o1;
} else {
rv += 'columns ' + o1 + ' .. ' + o2;
}
} else {
rv = 'lines ' + l1 + '(column ' + o1 + ') .. ' + l2 + '(column ' + o2 + ')';
}
if (yylloc.range && display_range_too) {
var r1 = yylloc.range[0];
var r2 = yylloc.range[1] - 1;
if (r2 === r1) {
rv += ' {String Offset: ' + r1 + '}';
} else {
rv += ' {String Offset range: ' + r1 + ' .. ' + r2 + '}';
}
}
return rv;
// return JSON.stringify(yylloc);
},
// test the lexed token: return FALSE when not a match, otherwise return token.
//
// `match` is supposed to be an array coming out of a regex match, i.e. `match[0]`
// contains the actually matched text string.
//
// Also move the input cursor forward and update the match collectors:
// - yytext
// - yyleng
// - match
// - matches
// - yylloc
// - offset
test_match: function lexer_test_match(match, indexed_rule) {
var token,
lines,
backup,
match_str;
if (this.options.backtrack_lexer) {
// save context
backup = {
yylineno: this.yylineno,
yylloc: {
first_line: this.yylloc.first_line,
last_line: this.yylloc.last_line,
first_column: this.yylloc.first_column,
last_column: this.yylloc.last_column
},
yytext: this.yytext,
match: this.match,
matches: this.matches,
matched: this.matched,
yyleng: this.yyleng,
offset: this.offset,
_more: this._more,
_input: this._input,
yy: this.yy,
conditionStack: this.conditionStack.slice(0),
done: this.done
};
if (this.options.ranges) {
backup.yylloc.range = this.yylloc.range.slice(0);
}
}
match_str = match[0];
lines = match_str.match(/(?:\r\n?|\n).*/g);
if (lines) {
this.yylineno += lines.length;
}
this.yylloc = {
first_line: this.yylloc.last_line,
last_line: this.yylineno + 1,
first_column: this.yylloc.last_column,
last_column: lines ?
lines[lines.length - 1].length - lines[lines.length - 1].match(/\r?\n?/)[0].length :
this.yylloc.last_column + match_str.length
};
this.yytext += match_str;
this.match += match_str;
this.matches = match;
this.yyleng = this.yytext.length;
if (this.options.ranges) {
this.yylloc.range = [this.offset, this.offset + this.yyleng];
}
// previous lex rules MAY have invoked the `more()` API rather than producing a token:
// those rules will already have moved this `offset` forward by their match lengths,
// hence we must only add our own match length now:
this.offset += match_str.length;
this._more = false;
this._backtrack = false;
this._input = this._input.slice(match_str.length);
this.matched += match_str;
// calling this method:
//
// function lexer__performAction(yy, yy_, $avoiding_name_collisions, YY_START) {...}
token = this.performAction.call(this, this.yy, this, indexed_rule, this.conditionStack[this.conditionStack.length - 1] /* = YY_START */);
// otherwise, when the action codes are all simple return token statements:
//token = this.simpleCaseActionClusters[indexed_rule];
if (this.done && this._input) {
this.done = false;
}
if (token) {
return token;
} else if (this._backtrack) {
// recover context
for (var k in backup) {
this[k] = backup[k];
}
this.__currentRuleSet__ = null;
return false; // rule action called reject() implying the next rule should be tested instead.
} else if (this._signaled_error_token) {
// produce one 'error' token, as `.parseError()` called from `reject()` did not guarantee a failure signal by throwing an exception; deliver the stashed token now.
token = this._signaled_error_token;
this._signaled_error_token = false;
return token;
}
return false;
},
// return next match in input
next: function lexer_next() {
if (this.done) {
this.clear();
return this.EOF;
}
if (!this._input) {
this.done = true;
}
var token,
match,
tempMatch,
index;
if (!this._more) {
this.clear();
}
var spec = this.__currentRuleSet__;
if (!spec) {
// Update the ruleset cache as we apparently encountered a state change or just started lexing.
// The cache is set up for fast lookup -- we assume a lexer will switch states much less often than it will
// invoke the `lex()` token-producing API and related APIs, hence caching the set for direct access helps
// speed up those activities a tiny bit.
spec = this.__currentRuleSet__ = this._currentRules();
}
var rule_ids = spec.rules;
// var dispatch = spec.__dispatch_lut;
var regexes = spec.__rule_regexes;
var len = spec.__rule_count;
// var c0 = this._input[0];
// Note: the arrays are 1-based, while `len` itself is a valid index,
// hence the non-standard less-or-equal check in the next loop condition!
//
// `dispatch` is a lookup table which lists, for each possible 1-char input prefix, the *first* rule that can match it.
// By using that array as a jumpstart, we can cut down on the otherwise O(N*M) behaviour of this lexer, down to
// O(N) ideally, where:
//
// - N is the number of input particles -- which is not precisely characters,
//   as we progress on a per-regex-match basis rather than on a per-character basis;
//
// - M is the number of rules (regexes) to test in the active condition state.
//
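// An illustrative sketch (not present in this generated code) of what such a
// dispatch LUT could look like for the rule set below, keyed on `c0`:
//
//     spec.__dispatch_lut = { '0': 2, '9': 2, 'a': 6, '+': 7, '-': 8 /* , ... */ };
//
// With it, the loop below could start at `dispatch[c0] || 1` instead of at rule 1,
// skipping regexes that cannot possibly match the first input character.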
for (var i = 1 /* (dispatch[c0] || 1) */ ; i <= len; i++) {
tempMatch = this._input.match(regexes[i]);
if (tempMatch && (!match || tempMatch[0].length > match[0].length)) {
match = tempMatch;
index = i;
if (this.options.backtrack_lexer) {
token = this.test_match(tempMatch, rule_ids[i]);
if (token !== false) {
return token;
} else if (this._backtrack) {
match = undefined;
continue; // rule action called reject() implying a rule MISmatch.
} else {
// else: this is a lexer rule which consumes input without producing a token (e.g. whitespace)
return false;
}
} else if (!this.options.flex) {
break;
}
}
}
if (match) {
token = this.test_match(match, rule_ids[index]);
if (token !== false) {
return token;
}
// else: this is a lexer rule which consumes input without producing a token (e.g. whitespace)
return false;
}
if (this._input === '') {
this.done = true;
return this.EOF;
} else {
var p = this.constructLexErrorInfo('Lexical error on line ' + (this.yylineno + 1) + '. Unrecognized text.\n' + this.showPosition(), this.options.lexer_errors_are_recoverable);
token = (this.parseError(p.errStr, p) || this.ERROR);
if (token === this.ERROR) {
// we can try to recover from a lexer error that parseError() did not 'recover' for us, by moving forward at least one character at a time:
if (!this.match.length) {
this.input();
}
}
return token;
}
},
// return next match that has a token
lex: function lexer_lex() {
var r;
// allow the PRE/POST handlers to set/modify the return token for maximum flexibility of the generated lexer:
if (typeof this.options.pre_lex === 'function') {
r = this.options.pre_lex.call(this);
}
while (!r) {
r = this.next();
}
if (typeof this.options.post_lex === 'function') {
// (also account for a userdef function which does not return any value: keep the token as is)
r = this.options.post_lex.call(this, r) || r;
}
return r;
},
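// (illustrative) the PRE/POST hooks used above are plain options; a sketch of
// installing them, assuming the standard generated-lexer `options` object:
//
//     lexer.options.pre_lex  = function () { /* return a token to short-circuit next() */ };
//     lexer.options.post_lex = function (tok) { return tok; /* or a replacement token */ };
//
// `post_lex` may also return nothing, in which case the original token is kept.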
// backwards-compatible alias for `pushState()`;
// the latter is symmetrical with `popState()` and we advise using
// those APIs in any modern lexer code, rather than `begin()`.
begin: function lexer_begin(condition) {
return this.pushState(condition);
},
// activates a new lexer condition state (pushes the new lexer condition state onto the condition stack)
pushState: function lexer_pushState(condition) {
this.conditionStack.push(condition);
this.__currentRuleSet__ = null;
return this;
},
// pop the previously active lexer condition state off the condition stack
popState: function lexer_popState() {
var n = this.conditionStack.length - 1;
if (n > 0) {
this.__currentRuleSet__ = null;
return this.conditionStack.pop();
} else {
return this.conditionStack[0];
}
},
// return the currently active lexer condition state; when an index argument is provided it produces the N-th previous condition state, if available
topState: function lexer_topState(n) {
n = this.conditionStack.length - 1 - Math.abs(n || 0);
if (n >= 0) {
return this.conditionStack[n];
} else {
return 'INITIAL';
}
},
// (internal) determine the lexer rule set which is active for the currently active lexer condition state
_currentRules: function lexer__currentRules() {
if (this.conditionStack.length && this.conditionStack[this.conditionStack.length - 1]) {
return this.conditions[this.conditionStack[this.conditionStack.length - 1]];
} else {
return this.conditions['INITIAL'];
}
},
// return the number of states currently on the stack
stateStackSize: function lexer_stateStackSize() {
return this.conditionStack.length;
},
options: {},
JisonLexerError: JisonLexerError,
performAction: function lexer__performAction(yy, yy_, $avoiding_name_collisions, YY_START) {
var YYSTATE = YY_START;
switch($avoiding_name_collisions) {
case 0 :
/*! Conditions:: INITIAL */
/*! Rule:: [ \t\r\n]+ */
/* eat up whitespace */
BeginToken(yy_.yytext);
break;
case 1 :
/*! Conditions:: INITIAL */
/*! Rule:: {DIGIT}+ */
BeginToken(yy_.yytext);
yylval.value = atof(yy_.yytext);
return VALUE;
break;
case 2 :
/*! Conditions:: INITIAL */
/*! Rule:: {DIGIT}+\.{DIGIT}* */
BeginToken(yy_.yytext);
yylval.value = atof(yy_.yytext);
return VALUE;
break;
case 3 :
/*! Conditions:: INITIAL */
/*! Rule:: {DIGIT}+[eE]["+""-"]?{DIGIT}* */
BeginToken(yy_.yytext);
yylval.value = atof(yy_.yytext);
return VALUE;
break;
case 4 :
/*! Conditions:: INITIAL */
/*! Rule:: {DIGIT}+\.{DIGIT}*[eE]["+""-"]?{DIGIT}* */
BeginToken(yy_.yytext);
yylval.value = atof(yy_.yytext);
return VALUE;
break;
case 5 :
/*! Conditions:: INITIAL */
/*! Rule:: {ID} */
BeginToken(yy_.yytext);
yylval.string = malloc(strlen(yy_.yytext)+1);
strcpy(yylval.string, yy_.yytext);
return IDENTIFIER;
break;
case 6 :
/*! Conditions:: INITIAL */
/*! Rule:: \+ */
BeginToken(yy_.yytext); return ADD;
break;
case 7 :
/*! Conditions:: INITIAL */
/*! Rule:: - */
BeginToken(yy_.yytext); return SUB;
break;
case 8 :
/*! Conditions:: INITIAL */
/*! Rule:: \* */
BeginToken(yy_.yytext); return MULT;
break;
case 9 :
/*! Conditions:: INITIAL */
/*! Rule:: \/ */
BeginToken(yy_.yytext); return DIV;
break;
case 10 :
/*! Conditions:: INITIAL */
/*! Rule:: \( */
BeginToken(yy_.yytext); return LBRACE;
break;
case 11 :
/*! Conditions:: INITIAL */
/*! Rule:: \) */
BeginToken(yy_.yytext); return RBRACE;
break;
case 12 :
/*! Conditions:: INITIAL */
/*! Rule:: ; */
BeginToken(yy_.yytext); return SEMICOLON;
break;
case 13 :
/*! Conditions:: INITIAL */
/*! Rule:: = */
BeginToken(yy_.yytext); return ASSIGN;
break;
case 14 :
/*! Conditions:: INITIAL */
/*! Rule:: . */
BeginToken(yy_.yytext);
return yy_.yytext[0];
break;
default:
return this.simpleCaseActionClusters[$avoiding_name_collisions];
}
},
simpleCaseActionClusters: {
},
rules: [
/^(?:[ \t\r\n]+)/,
/^(?:(\d)+)/,
/^(?:(\d)+\.(\d)*)/,
/^(?:(\d)+[Ee]["+-]?(\d)*)/,
/^(?:(\d)+\.(\d)*[Ee]["+-]?(\d)*)/,
/^(?:([^\W\d]\w*))/,
/^(?:\+)/,
/^(?:-)/,
/^(?:\*)/,
/^(?:\/)/,
/^(?:\()/,
/^(?:\))/,
/^(?:;)/,
/^(?:=)/,
/^(?:.)/
],
conditions: {
"INITIAL": {
rules: [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14
],
inclusive: true
}
}
};
/*--------------------------------------------------------------------
* lex.l
*------------------------------------------------------------------*/;
return lexer;
})();
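A small, self-contained sketch of the condition-stack API above (illustrative only; it assumes the IIFE's return value has been bound to a variable named `lexer`):

lexer.conditionStack = ['INITIAL'];   // reset to the initial condition state
lexer.pushState('comment');           // begin('comment') is the backwards-compatible alias
console.log(lexer.topState());        // 'comment'
console.log(lexer.stateStackSize());  // 2
console.log(lexer.popState());        // 'comment'
console.log(lexer.topState());        // 'INITIAL'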

File diff suppressed because it is too large

@@ -0,0 +1,31 @@
/**
* @fileoverview
* @enhanceable
* @public
*/
// GENERATED CODE -- DO NOT EDIT!
goog.provide('proto.google.protobuf.Timestamp');
goog.require('jspb.Message');
/**
* Generated by JsPbCodeGenerator.
* @param {Array=} opt_data Optional initial data array, typically from a
* server response, or constructed directly in JavaScript. The array is used
* in place and becomes part of the constructed object. It is not cloned.
* If no data is provided, the constructed object will be empty, but still
* valid.
* @extends {jspb.Message}
* @constructor
*/
proto.google.protobuf.Timestamp = function(opt_data) {
jspb.Message.initialize(this, opt_data, 0, -1, null, null);
};
goog.inherits(proto.google.protobuf.Timestamp, jspb.Message);
if (goog.DEBUG && !COMPILED) {
proto.google.protobuf.Timestamp.displayName = 'proto.google.protobuf.Timestamp';
}
// Remainder elided
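An illustrative construction, based only on the constructor contract documented above (the generated field accessors live in the elided remainder, so none are assumed here):

var ts = new proto.google.protobuf.Timestamp();          // empty but valid message
var ts2 = new proto.google.protobuf.Timestamp([42, 0]);  // the data array is used in place, not cloned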

@@ -0,0 +1,39 @@
digit [0-9]
id [a-zA-Z][a-zA-Z0-9]*
%%
"//".* /* ignore comment */
"main" return 'MAIN';
"class" return 'CLASS';
"extends" return 'EXTENDS';
"nat" return 'NATTYPE';
"if" return 'IF';
"else" return 'ELSE';
"for" return 'FOR';
"printNat" return 'PRINTNAT';
"readNat" return 'READNAT';
"this" return 'THIS';
"new" return 'NEW';
"var" return 'VAR';
"null" return 'NUL';
{digit}+ return 'NATLITERAL';
{id} return 'ID';
"==" return 'EQUALITY';
"=" return 'ASSIGN';
"+" return 'PLUS';
"-" return 'MINUS';
"*" return 'TIMES';
">" return 'GREATER';
"||" return 'OR';
"!" return 'NOT';
"." return 'DOT';
"{" return 'LBRACE';
"}" return 'RBRACE';
"(" return 'LPAREN';
")" return 'RPAREN';
";" return 'SEMICOLON';
\s+ /* skip whitespace */
. throw 'Illegal character';
<<EOF>> return 'ENDOFFILE';

@@ -0,0 +1,29 @@
%%
\n+ {yy.freshLine = true;}
\s+ {yy.freshLine = false;}
"y{"[^}]*"}" {yytext = yytext.substr(2, yyleng - 3); return 'ACTION';}
[a-zA-Z_][a-zA-Z0-9_-]* {return 'NAME';}
'"'([^"]|'\"')*'"' {return 'STRING_LIT';}
"'"([^']|"\'")*"'" {return 'STRING_LIT';}
"|" {return '|';}
"["("\]"|[^\]])*"]" {return 'ANY_GROUP_REGEX';}
"(" {return '(';}
")" {return ')';}
"+" {return '+';}
"*" {return '*';}
"?" {return '?';}
"^" {return '^';}
"/" {return '/';}
"\\"[a-zA-Z0] {return 'ESCAPE_CHAR';}
"$" {return '$';}
"<<EOF>>" {return '$';}
"." {return '.';}
"%%" {return '%%';}
"{"\d+(","\s?\d+|",")?"}" {return 'RANGE_REGEX';}
/"{" %{if (yy.freshLine) { this.input('{'); return '{'; } else { this.unput('y'); }%}
"}" %{return '}';%}
"%{"(.|\n)*?"}%" {yytext = yytext.substr(2, yyleng - 4); return 'ACTION';}
. {/* ignore bad characters */}
<<EOF>> {return 'EOF';}

samples/Jison/ansic.jison Normal file (418 lines)

@@ -0,0 +1,418 @@
%token IDENTIFIER CONSTANT STRING_LITERAL SIZEOF
%token PTR_OP INC_OP DEC_OP LEFT_OP RIGHT_OP LE_OP GE_OP EQ_OP NE_OP
%token AND_OP OR_OP MUL_ASSIGN DIV_ASSIGN MOD_ASSIGN ADD_ASSIGN
%token SUB_ASSIGN LEFT_ASSIGN RIGHT_ASSIGN AND_ASSIGN
%token XOR_ASSIGN OR_ASSIGN TYPE_NAME
%token TYPEDEF EXTERN STATIC AUTO REGISTER
%token CHAR SHORT INT LONG SIGNED UNSIGNED FLOAT DOUBLE CONST VOLATILE VOID
%token STRUCT UNION ENUM ELLIPSIS
%token CASE DEFAULT IF ELSE SWITCH WHILE DO FOR GOTO CONTINUE BREAK RETURN
%nonassoc IF_WITHOUT_ELSE
%nonassoc ELSE
%start translation_unit
%%
primary_expression
: IDENTIFIER
| CONSTANT
| STRING_LITERAL
| '(' expression ')'
;
postfix_expression
: primary_expression
| postfix_expression '[' expression ']'
| postfix_expression '(' ')'
| postfix_expression '(' argument_expression_list ')'
| postfix_expression '.' IDENTIFIER
| postfix_expression PTR_OP IDENTIFIER
| postfix_expression INC_OP
| postfix_expression DEC_OP
;
argument_expression_list
: assignment_expression
| argument_expression_list ',' assignment_expression
;
unary_expression
: postfix_expression
| INC_OP unary_expression
| DEC_OP unary_expression
| unary_operator cast_expression
| SIZEOF unary_expression
| SIZEOF '(' type_name ')'
;
unary_operator
: '&'
| '*'
| '+'
| '-'
| '~'
| '!'
;
cast_expression
: unary_expression
| '(' type_name ')' cast_expression
;
multiplicative_expression
: cast_expression
| multiplicative_expression '*' cast_expression
| multiplicative_expression '/' cast_expression
| multiplicative_expression '%' cast_expression
;
additive_expression
: multiplicative_expression
| additive_expression '+' multiplicative_expression
| additive_expression '-' multiplicative_expression
;
shift_expression
: additive_expression
| shift_expression LEFT_OP additive_expression
| shift_expression RIGHT_OP additive_expression
;
relational_expression
: shift_expression
| relational_expression '<' shift_expression
| relational_expression '>' shift_expression
| relational_expression LE_OP shift_expression
| relational_expression GE_OP shift_expression
;
equality_expression
: relational_expression
| equality_expression EQ_OP relational_expression
| equality_expression NE_OP relational_expression
;
and_expression
: equality_expression
| and_expression '&' equality_expression
;
exclusive_or_expression
: and_expression
| exclusive_or_expression '^' and_expression
;
inclusive_or_expression
: exclusive_or_expression
| inclusive_or_expression '|' exclusive_or_expression
;
logical_and_expression
: inclusive_or_expression
| logical_and_expression AND_OP inclusive_or_expression
;
logical_or_expression
: logical_and_expression
| logical_or_expression OR_OP logical_and_expression
;
conditional_expression
: logical_or_expression
| logical_or_expression '?' expression ':' conditional_expression
;
assignment_expression
: conditional_expression
| unary_expression assignment_operator assignment_expression
;
assignment_operator
: '='
| MUL_ASSIGN
| DIV_ASSIGN
| MOD_ASSIGN
| ADD_ASSIGN
| SUB_ASSIGN
| LEFT_ASSIGN
| RIGHT_ASSIGN
| AND_ASSIGN
| XOR_ASSIGN
| OR_ASSIGN
;
expression
: assignment_expression
| expression ',' assignment_expression
;
constant_expression
: conditional_expression
;
declaration
: declaration_specifiers ';'
| declaration_specifiers init_declarator_list ';'
;
declaration_specifiers
: storage_class_specifier
| storage_class_specifier declaration_specifiers
| type_specifier
| type_specifier declaration_specifiers
| type_qualifier
| type_qualifier declaration_specifiers
;
init_declarator_list
: init_declarator
| init_declarator_list ',' init_declarator
;
init_declarator
: declarator
| declarator '=' initializer
;
storage_class_specifier
: TYPEDEF
| EXTERN
| STATIC
| AUTO
| REGISTER
;
type_specifier
: VOID
| CHAR
| SHORT
| INT
| LONG
| FLOAT
| DOUBLE
| SIGNED
| UNSIGNED
| struct_or_union_specifier
| enum_specifier
| TYPE_NAME
;
struct_or_union_specifier
: struct_or_union IDENTIFIER '{' struct_declaration_list '}'
| struct_or_union '{' struct_declaration_list '}'
| struct_or_union IDENTIFIER
;
struct_or_union
: STRUCT
| UNION
;
struct_declaration_list
: struct_declaration
| struct_declaration_list struct_declaration
;
struct_declaration
: specifier_qualifier_list struct_declarator_list ';'
;
specifier_qualifier_list
: type_specifier specifier_qualifier_list
| type_specifier
| type_qualifier specifier_qualifier_list
| type_qualifier
;
struct_declarator_list
: struct_declarator
| struct_declarator_list ',' struct_declarator
;
struct_declarator
: declarator
| ':' constant_expression
| declarator ':' constant_expression
;
enum_specifier
: ENUM '{' enumerator_list '}'
| ENUM IDENTIFIER '{' enumerator_list '}'
| ENUM IDENTIFIER
;
enumerator_list
: enumerator
| enumerator_list ',' enumerator
;
enumerator
: IDENTIFIER
| IDENTIFIER '=' constant_expression
;
type_qualifier
: CONST
| VOLATILE
;
declarator
: pointer direct_declarator
| direct_declarator
;
direct_declarator
: IDENTIFIER
| '(' declarator ')'
| direct_declarator '[' constant_expression ']'
| direct_declarator '[' ']'
| direct_declarator '(' parameter_type_list ')'
| direct_declarator '(' identifier_list ')'
| direct_declarator '(' ')'
;
pointer
: '*'
| '*' type_qualifier_list
| '*' pointer
| '*' type_qualifier_list pointer
;
type_qualifier_list
: type_qualifier
| type_qualifier_list type_qualifier
;
parameter_type_list
: parameter_list
| parameter_list ',' ELLIPSIS
;
parameter_list
: parameter_declaration
| parameter_list ',' parameter_declaration
;
parameter_declaration
: declaration_specifiers declarator
| declaration_specifiers abstract_declarator
| declaration_specifiers
;
identifier_list
: IDENTIFIER
| identifier_list ',' IDENTIFIER
;
type_name
: specifier_qualifier_list
| specifier_qualifier_list abstract_declarator
;
abstract_declarator
: pointer
| direct_abstract_declarator
| pointer direct_abstract_declarator
;
direct_abstract_declarator
: '(' abstract_declarator ')'
| '[' ']'
| '[' constant_expression ']'
| direct_abstract_declarator '[' ']'
| direct_abstract_declarator '[' constant_expression ']'
| '(' ')'
| '(' parameter_type_list ')'
| direct_abstract_declarator '(' ')'
| direct_abstract_declarator '(' parameter_type_list ')'
;
initializer
: assignment_expression
| '{' initializer_list '}'
| '{' initializer_list ',' '}'
;
initializer_list
: initializer
| initializer_list ',' initializer
;
statement
: labeled_statement
| compound_statement
| expression_statement
| selection_statement
| iteration_statement
| jump_statement
;
labeled_statement
: IDENTIFIER ':' statement
| CASE constant_expression ':' statement
| DEFAULT ':' statement
;
compound_statement
: '{' '}'
| '{' statement_list '}'
| '{' declaration_list '}'
| '{' declaration_list statement_list '}'
;
declaration_list
: declaration
| declaration_list declaration
;
statement_list
: statement
| statement_list statement
;
expression_statement
: ';'
| expression ';'
;
selection_statement
: IF '(' expression ')' statement %prec IF_WITHOUT_ELSE
| IF '(' expression ')' statement ELSE statement
| SWITCH '(' expression ')' statement
;
iteration_statement
: WHILE '(' expression ')' statement
| DO statement WHILE '(' expression ')' ';'
| FOR '(' expression_statement expression_statement ')' statement
| FOR '(' expression_statement expression_statement expression ')' statement
;
jump_statement
: GOTO IDENTIFIER ';'
| CONTINUE ';'
| BREAK ';'
| RETURN ';'
| RETURN expression ';'
;
translation_unit
: external_declaration
| translation_unit external_declaration
;
external_declaration
: function_definition
| declaration
;
function_definition
: declaration_specifiers declarator declaration_list compound_statement
| declaration_specifiers declarator compound_statement
| declarator declaration_list compound_statement
| declarator compound_statement
;
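The `%nonassoc IF_WITHOUT_ELSE` / `%nonassoc ELSE` pair at the top, together with the `%prec IF_WITHOUT_ELSE` annotation in `selection_statement`, resolves the classic dangling-else shift/reduce conflict. A sketch of the disambiguation this encodes (illustrative, not part of the sample):

// input:   if (a) if (b) f(); else g();
// parsed:  if (a) { if (b) f(); else g(); }
// ELSE is declared later and therefore has higher precedence than IF_WITHOUT_ELSE,
// so the parser shifts ELSE and binds it to the nearest unmatched IF.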

@@ -0,0 +1,84 @@
/* description: ClassyLang grammar. Very classy. */
/*
To build parser:
$ ./bin/jison examples/classy.jison examples/classy.jisonlex
*/
/* author: Zach Carter */
%right ASSIGN
%left OR
%nonassoc EQUALITY GREATER
%left PLUS MINUS
%left TIMES
%right NOT
%left DOT
%%
pgm
: cdl MAIN LBRACE vdl el RBRACE ENDOFFILE
;
cdl
: c cdl
|
;
c
: CLASS id EXTENDS id LBRACE vdl mdl RBRACE
;
vdl
: VAR t id SEMICOLON vdl
|
;
mdl
: t id LPAREN t id RPAREN LBRACE vdl el RBRACE mdl
|
;
t
: NATTYPE
| id
;
id
: ID
;
el
: e SEMICOLON el
| e SEMICOLON
;
e
: NATLITERAL
| NUL
| id
| NEW id
| THIS
| IF LPAREN e RPAREN LBRACE el RBRACE ELSE LBRACE el RBRACE
| FOR LPAREN e SEMICOLON e SEMICOLON e RPAREN LBRACE el RBRACE
| READNAT LPAREN RPAREN
| PRINTNAT LPAREN e RPAREN
| e PLUS e
| e MINUS e
| e TIMES e
| e EQUALITY e
| e GREATER e
| NOT e
| e OR e
| e DOT id
| id ASSIGN e
| e DOT id ASSIGN e
| id LPAREN e RPAREN
| e DOT id LPAREN e RPAREN
| LPAREN e RPAREN
;
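Per the build comment at the top of this file, the two classy samples pair up; an illustrative end-to-end run (the module path and `parser` export are assumed from the usual jison CLI conventions, not verified against this sample):

// $ ./bin/jison examples/classy.jison examples/classy.jisonlex   (emits classy.js)
var parser = require('./classy').parser;
// a minimal pgm per the grammar: no classes, one declaration, one expression
console.log(parser.parse('main { var nat x; x = 1 + 2; }'));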

samples/Jison/lex.jison Normal file (145 lines)

@@ -0,0 +1,145 @@
// `%nonassoc` tells the parser compiler (JISON) that these tokens cannot occur more than once in a row,
// i.e. input like '//a' (tokens '/', '/' and 'a') is not a legal input while '/a' (tokens '/' and 'a')
// *is* legal input for this grammar.
%nonassoc '/' '/!'
// Likewise for `%left`: this informs the LALR(1) grammar compiler (JISON) that these tokens
// *can* occur repeatedly, e.g. 'a?*' and even 'a**' are considered legal inputs given this
// grammar!
//
// Token `RANGE_REGEX` may seem the odd one out here but really isn't: given the `regex_base`
// choice/rule `regex_base range_regex`, which is recursive, this grammar tells JISON that
// any input matching a sequence like `regex_base range_regex range_regex` *is* legal.
// If you do not want that to be legal, you MUST adjust the grammar rule set
// to match your actual intent.
%left '*' '+' '?' RANGE_REGEX
%%
lex
: definitions include '%%' rules '%%' EOF
{{ $$ = {macros: $1, rules: $4};
if ($2) $$.actionInclude = $2;
return $$; }}
| definitions include '%%' rules EOF
{{ $$ = {macros: $1, rules: $4};
if ($2) $$.actionInclude = $2;
return $$; }}
;
include
: action
|
;
definitions
: definitions definition
{ $$ = $1; $$.push($2); }
| definition
{ $$ = [$1]; }
;
definition
: name regex
{ $$ = [$1, $2]; }
;
name
: NAME
{ $$ = yytext; }
;
rules
: rules rule
{ $$ = $1; $$.push($2); }
| rule
{ $$ = [$1]; }
;
rule
: regex action
{ $$ = [$1, $2]; }
;
action
: ACTION
{ $$ = yytext; }
;
regex
: start_caret regex_list end_dollar
{ $$ = $1+$2+$3; }
;
start_caret
: '^'
{ $$ = '^'; }
|
{ $$ = ''; }
;
end_dollar
: '$'
{ $$ = '$'; }
|
{ $$ = ''; }
;
regex_list
: regex_list '|' regex_chain
{ $$ = $1+'|'+$3; }
| regex_chain
;
regex_chain
: regex_chain regex_base
{ $$ = $1+$2;}
| regex_base
{ $$ = $1;}
;
regex_base
: '(' regex_list ')'
{ $$ = '('+$2+')'; }
| regex_base '+'
{ $$ = $1+'+'; }
| regex_base '*'
{ $$ = $1+'*'; }
| regex_base '?'
{ $$ = $1+'?'; }
| '/' regex_base
{ $$ = '(?=' + $regex_base + ')'; }
| '/!' regex_base
{ $$ = '(?!' + $regex_base + ')'; }
| name_expansion
| regex_base range_regex
{ $$ = $1+$2; }
| any_group_regex
| '.'
{ $$ = '.'; }
| string
;
name_expansion
: '{' name '}'
{{ $$ = '{'+$2+'}'; }}
;
any_group_regex
: ANY_GROUP_REGEX
{ $$ = yytext; }
;
range_regex
: RANGE_REGEX
{ $$ = yytext; }
;
string
: STRING_LIT
{ $$ = yy.prepareString(yytext.substr(1, yyleng-2)); }
;
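This grammar is the self-hosting piece: it parses a lex specification (such as the `.jisonlex` samples above) into a plain object, per the `lex` rule's actions. An illustrative result shape (inferred from the rule actions, not a verified run):

// for the three-line spec:
//   digit [0-9]
//   %%
//   {digit}+  %{ return 'NAT'; %}
// the start rule would return roughly:
//   { macros: [['digit', '[0-9]']], rules: [['{digit}+', " return 'NAT'; "]] }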

samples/Jolie/common.iol Normal file (37 lines)

@@ -0,0 +1,37 @@
include "types/Binding.iol"
constants {
Location_Exam = "socket://localhost:8000"
}
type StartExamRequest:void {
.examName:string
.studentName:string
.student:Binding
}
type MakeQuestionRequest:void {
.question:string
.examName:string
.studentName:string
}
type DecisionMessage:void {
.studentName:string
.examName:string
}
interface ExamInterface {
OneWay:
startExam(StartExamRequest),
pass(DecisionMessage), fail(DecisionMessage)
RequestResponse:
makeQuestion(MakeQuestionRequest)(int)
}
interface StudentInterface {
OneWay:
sendMessage(string)
RequestResponse:
makeQuestion(MakeQuestionRequest)(int)
}

Some files were not shown because too many files have changed in this diff.