Compare commits


60 Commits

Author SHA1 Message Date
Vicent Marti
5812f89f66 compiler: Add error output to the compiler 2017-12-04 18:27:48 +01:00
Paul Chaignon
e4b9430024 Vendor CSS files in font-awesome directory (#3932) 2017-12-02 15:24:05 +01:00
Paul Chaignon
a76805e40d Improve Prolog .pro heuristic to avoid false positives (#3931)
The `[a:-b]` syntax for index selection in arrays is valid in IDL and
matches the heuristic for Prolog. Update the Prolog heuristic to
exclude `[`.
2017-12-02 15:08:19 +01:00
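
A minimal sketch of the rule change (mirroring the lib/linguist/heuristics.rb diff further down); the sample strings are hypothetical:

# Before: any ":-" on a line with no preceding "#" counted as Prolog,
# so IDL's index-selection syntax matched too.
old_rule = /^[^#]+:-/
# After: a "[" before the ":-" disqualifies the line.
new_rule = /^[^\[#]+:-/

idl    = "result = data[a:-b]"           # IDL index selection
prolog = "parent(X, Y) :- father(X, Y)."

old_rule.match(idl)     # truthy: the false positive this change fixes
new_rule.match(idl)     # nil
new_rule.match(prolog)  # truthy
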
Ashe Connor
8d27845f8c drop max token len to 32 (#3925) 2017-12-01 19:33:50 +11:00
Ashe Connor
9a8ab45b6f Limit tokens to 64 characters or less (#3922) 2017-12-01 13:41:59 +11:00
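
Together these two commits settle on a 32-character cap, enforced in the new C tokenizer (MAX_TOKEN_LEN in the ext/linguist/linguist.c diff below). A hedged Ruby illustration of the effect on the token stream, with made-up tokens:

MAX_TOKEN_LEN = 32  # anything longer is unlikely to be a useful classifier token

tokens = ["def", "initialize", "a" * 40]
tokens.reject { |t| t.length > MAX_TOKEN_LEN }
# => ["def", "initialize"]
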
Vicent Martí
e335d48625 New Grammars Compiler (#3915)
* grammars: Update several grammars with compat issues

* [WIP] Add new grammar conversion tools

* Wrap in a Docker script

* Proper Dockerfile support

* Add Javadoc grammar

* Remove NPM package.json

* Remove superfluous test

This is now always checked by the grammars compiler

* Update JSyntax grammar to new submodule

* Approve Javadoc license

* grammars: Remove checked-in dependencies

* grammars: Add regex checks to the compiler

* grammars: Point Oz to its actual submodule

* grammars: Refactor compiler to group errors by repo

* grammars: Cleanups to error reporting
2017-11-30 16:15:48 +01:00
NachoSoto
4f46155c05 Add Carthage/Build to generated list so it doesn't show in PR diffs (#3920)
Equivalent to #3865, but for Carthage.
2017-11-29 14:26:23 +00:00
NachoSoto
38901d51d2 Changed Carthage vendor regex to allow folder in any subdirectory (#3921)
In monorepo projects, it's not uncommon for `Carthage` to not be in the root directory.
2017-11-29 14:25:04 +00:00
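
The resulting pattern (see the lib/linguist/generated.rb diff below) anchors on a path segment instead of the repository root; the sample paths are hypothetical:

carthage_build = /(^|\/)Carthage\/Build\//

carthage_build.match("Carthage/Build/iOS/Alamofire.framework")  # truthy
carthage_build.match("Apps/MyApp/Carthage/Build/Info.plist")    # truthy (monorepo case)
carthage_build.match("src/CarthageHelper.swift")                # nil
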
Shai Mishali
ded0dc74e0 Add Cocoapods to generated list so it doesn't show in PR diffs (#3865)
* Add Cocoapods to generated list so it doesn't show in PR diffs

* Removed Cocoapods from vendor.yml

* Enhance regex to match only Cocoapod's Pods folder

* Adds additional test cases for generated Pods folder
2017-11-28 11:04:18 +00:00
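
The final regex (also in the lib/linguist/generated.rb diff below) matches only a real Pods/ directory, not arbitrary names containing "Pods"; the sample paths are hypothetical:

cocoapods = /(^Pods|\/Pods)\//

cocoapods.match("Pods/AFNetworking/README.md")  # truthy
cocoapods.match("iOS/Pods/Manifest.lock")       # truthy
cocoapods.match("MyPodsHelper.swift")           # nil
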
Colin Seymour
c5d1bb5370 Unvendor tools/ (#3919)
* Unvendor tools/

* Remove test
2017-11-28 10:52:02 +00:00
Andrey Sitnik
c8ca48856b Add PostCSS syntaxes support (#3916) 2017-11-26 16:21:10 +11:00
John Gardner
7be6fb0138 Test Perl before Turing when running heuristics (#3880)
* Test Perl before Turing when running heuristics

* Revise order of Perl 5 and 6 in `.t` heuristic

See: https://github.com/github/linguist/pull/3880#issuecomment-340319500

* Combine patterns for disambiguating Perl
2017-11-17 21:25:56 +11:00
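
The combined patterns this introduces (from the lib/linguist/heuristics.rb diff below), with hypothetical sample inputs:

Perl5Regex = /\buse\s+(?:strict\b|v?5\.)/
Perl6Regex = /^\s*(?:use\s+v6\b|\bmodule\b|\b(?:my\s+)?class\b)/

Perl5Regex.match("use strict;")  # truthy: Perl 5 is now checked first
Perl6Regex.match("use v6;")      # truthy: only reached if the Perl 5 test missed
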
wesdawg
8c516655bc Add YARA language (#3877)
* Add YARA language grammars

* Add YARA to languages.yml

* Add YARA samples

* Add YARA to README
2017-11-16 12:16:33 +11:00
Michael R. Crusoe
9dceffce2f Add the Common Workflow Language standard (#3902)
* Add the language for the Common Workflow Language standards

* add CWL grammar

* add MIT licensed CWL sample

* script/set-language-ids --update for CWL
2017-11-16 12:15:55 +11:00
Ashe Connor
33be70eb28 Fix failing edges on leading commas in args (#3905) 2017-11-16 11:44:51 +11:00
Jingwen
9c4dc3047c Add BUILD.bazel to Python filenames (#3907)
BUILD.bazel and BUILD are both valid Bazel filenames. BUILD.bazel is preferred over BUILD when both exist.

https://stackoverflow.com/a/43205770/521209
2017-11-15 10:04:36 +00:00
Pratik Karki
d8e5f3c965 Add color for LFE language. (#3895)
* Add color to LFE

* Test passing color for LFE

* Let LFE be independent rather than grouping to Erlang
2017-11-14 07:35:12 +00:00
Ashe Connor
71bf640a47 Release v5.3.3 (#3903)
* Add self to maintainers

* bump to v5.3.3
2017-11-13 18:17:38 +11:00
Ashe Connor
c9b3d19c6f Lexer crash fix (#3900)
* input may return 0 for EOF

Stops overruns into fread from nothing.

* remove two trailing contexts

* fix up sgml tokens
2017-11-10 22:11:32 +11:00
Alex Arslan
0f4955e5d5 Update Julia definitions to use Atom instead of TextMate (#3871) 2017-11-09 19:39:37 +11:00
Paul Chaignon
d968b0e9ee Improve heuristic for XML/TypeScript (#3883)
The heuristic for XML .ts files might match TypeScript generics starting with TS.
2017-11-04 11:16:44 +01:00
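
A sketch of the tightened check (see the lib/linguist/heuristics.rb diff below): the old substring test also fired on generics, while the new word-boundary regex does not. The sample snippets are hypothetical:

data_xml = '<TS version="2.1" language="nl">'                   # Qt Linguist XML
data_ts  = "function first<TSource>(xs: TSource[]): TSource {"  # TypeScript generics

data_xml.include?("<TS")  # true  (old check)
data_ts.include?("<TS")   # true  (old check: false positive)
/<TS\b/.match(data_xml)   # truthy => XML
/<TS\b/.match(data_ts)    # nil    => TypeScript
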
Ashe Connor
1f5ed3b3fe Release v5.3.2 (#3882)
* update grammar submodules

* bump to 5.3.2
2017-11-01 10:01:03 +10:00
Robert Koeninger
297be948d1 Set color for Idris language (#3866) 2017-10-31 16:27:21 +00:00
Charles Milette
b4492e7205 Add support for Edje Data Collections (#3879)
* Add support for Edje Data Collections

Fixes #3876

* Add EDC in grammars list
2017-10-31 16:26:44 +00:00
Paul Chaignon
c05bc99004 Vendor a few big JS libraries (#3861) 2017-10-31 15:12:02 +01:00
Ashe Connor
99eaf5faf9 Replace the tokenizer with a flex-based scanner (#3846)
* Lex everything except SGML, multiline, SHEBANG

* Prepend SHEBANG#! to tokens

* Support SGML tag/attribute extraction

* Multiline comments

* WIP cont'd; productionifying

* Compile before test

* Add extension to gemspec

* Add flex task to build lexer

* Reentrant extra data storage

* regenerate lexer

* use prefix

* rebuild lexer on linux

* Optimise a number of operations:

* Don't read and split the entire file if we only ever use the first/last n
  lines

* Only consider the first 50KiB when using heuristics/classifying.  This can
  save a *lot* of time; running a large number of regexes over 1MiB of text
  takes a while.

* Memoize File.size/read/stat; re-reading in a 500KiB file every time `data` is
  called adds up a lot.

* Use single regex for C++

* act like #lines

* [1][-2..-1] => nil, ffs

* k may not be set
2017-10-31 11:06:56 +11:00
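
A sketch of the two size-related ideas, as they appear in the lib/linguist/classifier.rb, heuristics.rb and file_blob.rb diffs below:

# Cap how much of a blob the regex heuristics and the Bayesian
# classifier ever see:
CONSIDER_BYTES = 50 * 1024

def classifier_input(blob)
  blob.data[0...CONSIDER_BYTES]
end

# Memoize per-file reads instead of hitting the filesystem on
# every call:
def data
  @data ||= File.read(@fullpath)
end
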
Cesar Tessarin
21babbceb1 Fix Perl 5 and 6 disambiguation bug (#3860)
* Add test to demonstrate Perl syntax detection bug

A Perl 5 .pm file containing the word `module` or `class`, even with
an explicit `use 5.*` statement, is recognized as Perl 6 code.

* Improve Perl 5 and Perl 6 disambiguation

The heuristic for disambiguating Perl 5 and Perl 6 `.pm` files searched
for keywords which can appear in both languages (`class` and `module`)
in addition to the `use` statement check.

Due to Perl 6 being tested first, code containing those words would
always be interpreted as Perl 6.

The test order was therefore reversed, testing for Perl 5 first. Since
Perl 6 code never contains a `use 5.*` statement, this does no harm to
Perl 6 detection while fixing the problem for Perl 5.

Fixes: #3637
2017-10-23 10:16:56 +01:00
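
The corrected ordering as it lands in lib/linguist/heuristics.rb (diff below), using the shared Perl5Regex/Perl6Regex constants defined in the same file; the inline examples are illustrative:

disambiguate ".pm" do |data|
  if Perl5Regex.match(data)       # e.g. "use 5.024;" or "use strict;"
    Language["Perl"]
  elsif Perl6Regex.match(data)    # e.g. "use v6;", "module Foo;"
    Language["Perl 6"]
  elsif /^\s*\/\* XPM \*\//.match(data)
    Language["XPM"]
  end
end
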
Paul Chaignon
15885701cd Tests for Ruby 2.4 must pass (#3862) 2017-10-17 11:08:04 +02:00
Ashe Connor
9b942086f7 Release v5.3.1 (#3864)
* Fix Perl/Pod disambiguation
2017-10-17 19:31:20 +11:00
Ashe Connor
93cd47822f Only recognise Pod for .pod files (#3863)
We uncomplicate matters by removing ".pod" from the Perl definition
entirely.
2017-10-17 19:05:20 +11:00
Colin Seymour
ea3e79a631 Release v5.3.0 (#3859)
* Update grammars

* Update haskell scopes to match updated grammar

* Bump version to 5.3.0
2017-10-15 09:52:27 +01:00
Maickel Hubner
0af9a35ff1 Create association with OpenEdge .w files (#3648)
* Update heuristics.rb

* Update languages.yml

* Create consmov.w

* Create menu.w

* Switch out large samples for smaller ones

* Relax regex
2017-10-14 18:12:16 +01:00
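
The resulting .w disambiguation keys on an AppBuilder marker for OpenEdge ABL and on "@" section directives for CWeb (lib/linguist/heuristics.rb diff below); the sample lines are hypothetical:

appbuilder_w = "&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _DEFINITIONS"
cweb_w       = "@<Global variables@>="

appbuilder_w.include?("&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _DEFINITIONS")  # true   => OpenEdge ABL
/^@(<|\w+\.)/.match(cweb_w)                                                     # truthy => CWeb
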
Codecat
44048c9ba8 Add Angelscript language (#3844)
* Add AngelScript scripting language

* Add AngelScript sample

* Initial implementation of Angelscript

* Update Angelscript tm_scope and ace_mode

* Move Angelscript after ANTLR

* Updated grammar list

* Alphabetical sorting for Angelscript

* Angelscript grammar license is unlicense

* Add ActionScript samples

* Added a heuristic for .as files

* Whitelist sublime-angelscript license hash

* Added heuristic test for Angelscript and Actionscript

* Remove .acs from Angelscript file extensions
2017-10-14 17:34:12 +01:00
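
The .as heuristic added here (lib/linguist/heuristics.rb diff below) treats package/import statements or "class X extends Y" as ActionScript and everything else as AngelScript; the sample lines are hypothetical:

actionscript_rule = /^\s*(package\s+[a-z0-9_\.]+|import\s+[a-zA-Z0-9_\.]+;|class\s+[A-Za-z0-9_]+\s+extends\s+[A-Za-z0-9_]+)/

actionscript_rule.match("package com.example.game")      # truthy => ActionScript
actionscript_rule.match("class Missile extends Entity")  # truthy => ActionScript
actionscript_rule.match('void main() { Print("hi"); }')  # nil    => AngelScript
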
Chris Llanwarne
e51b5ec9b7 Add WDL language support (#3858)
* Add WDL language support

* Add ace mode
2017-10-14 17:12:47 +01:00
Colin Seymour
a47008ea00 Ping @lildude from now on (#3856) 2017-10-13 17:49:04 +01:00
Dan Moore
a0b38e8207 Don't count VCL as Perl for statistics. (#3857)
* Don't count VCL as Perl for statistics.

While the Varnish-specific language was apparently inspired by C and Perl, there's no reason to group it as Perl for repo statistics.

* Re-adding color for VCL, which was accidentally removed as part of https://github.com/github/linguist/pull/2298/files#diff-3552b1a64ad2071983c4d91349075c75L3223
2017-10-12 15:42:31 -04:00
Colin Seymour
10dfe9f296 Fix typo in script/add-grammar (#3853) 2017-10-10 18:26:26 +01:00
Ján Neščivera
0b9c05f989 added VS Code workspace files to vendored path (#3723) 2017-10-08 17:32:01 +01:00
Paul Chaignon
95dca67e2b New repository for TypeScript grammar (#3730) 2017-10-06 13:27:14 +01:00
Adædra
e98728595b Change Ruby grammar source (#3782)
* Move the Ruby grammar to use Atom's one
2017-09-21 09:52:10 +01:00
Kerem
4cd558c374 Added natvis extension to XML (#3789)
* natvis extension added to xml.

* Added sample natvis file from the Chromium project.
2017-09-17 13:31:02 +01:00
John Gardner
adf6206ef5 Register "buildozer.spec" as an INI filename (#3817)
Resolves #3814.
2017-09-17 13:29:49 +01:00
Shan Mahanama
c2d558b71d Add Ballerina language (#3818)
* Add Ballerina language

* Add missing file

* Update color

* Update with required changes

* Update sub-module
2017-09-17 13:29:12 +01:00
Nate Whetsell
78c58f956e Update Ace modes for Csound languages (#3822) 2017-09-17 13:27:24 +01:00
Agustin Mendez
fc1404985a Add DataWeave language (#3804)
* Add DataWeave language

* Add Licence

* Update to latest DataWeave revision
2017-09-07 15:28:46 +01:00
Adeel Mujahid
5d48ccd757 Classify some project files as XML (#3696)
Also added disambiguation rule for `.proj` and `.user`.

##### CSCFG

https://github.com/search?utf8=%E2%9C%93&q=extension%3Acscfg+NOT+nothack&type=Code
(16.7K hits)

##### CSDEF

https://github.com/search?utf8=%E2%9C%93&q=extension%3Acsdef+NOT+nothack&type=Code
(12.7K hits)

##### CCPROJ

https://github.com/search?utf8=%E2%9C%93&q=extension%3Accproj+NOT+nothack&type=Code
(5K hits)

##### DEPPROJ

https://github.com/search?utf8=%E2%9C%93&q=extension%3Adepproj+NOT+nothack&type=Code
(505 hits)

##### NDPROJ

https://github.com/search?utf8=%E2%9C%93&q=extension%3Andproj+NOT+nothack&type=Code
(194 hits)

##### PROJ

https://github.com/search?utf8=%E2%9C%93&q=extension%3Aproj+%28project+OR+Property+OR+Import+OR+xml+OR+xmlns%29&type=Code
(35K hits)

##### SHPROJ

https://github.com/search?utf8=%E2%9C%93&q=extension%3Ashproj+NOT+nothack&type=Code
(13K hits)
2017-09-07 10:04:09 +01:00
Abigail
3530a18e46 Add .clang-tidy filename for YAML (#3767)
.clang-tidy is the filename used for clang-tidy's configuration file.
clang-tidy is a clang-based C++ "linter" tool. For more info, see:
https://clang.llvm.org/extra/clang-tidy/
2017-09-07 10:01:10 +01:00
Marciano C. Preciado
ae8f4f9228 Make Matlab's Color More Appropriate (#3771)
Purple is not a color affiliated with Matlab or MathWorks. Change the color to better reflect the color theme of the Matlab software and logo.
2017-09-07 09:59:19 +01:00
Robert Koeninger
7c34d38786 Updated color for Ceylon language (#3780)
* Updated color for Ceylon language

* Adjusting Ceylon color due to its proximity to Clarion color

* Made Ceylon color darker to avoid collision

* Used more accurate color

* Specified tm_scope for Ceylon
2017-09-07 09:58:30 +01:00
Bradley Meck
38bc5fd336 added .mjs extension to JavaScript (#3783)
* added .mjs extension to JavaScript

* add missing newline at end of file

* add example from https://github.com/bmeck/composable-ast-walker/blob/master/example/constant_fold.mjs
2017-09-07 09:56:36 +01:00
Anthony D. Green
6b06e47c67 Create VBAllInOne.vb (#3785)
Adding the test file the Visual Basic compiler team uses to verify parsing and other features.
2017-09-07 09:55:20 +01:00
Mat Mariani
061712ff78 Added syntax highlighting for Squirrel (#3791)
* Added syntax highlighting for Squirrel

https://github.com/search?utf8=%E2%9C%93&q=extension%3Anut+NOT+nothack&type=Code

Squirrel is already detected by GitHub but has no syntax
highlighting.

* removed duplicate `source.nut`
2017-09-07 09:53:25 +01:00
Seppe Stas
7707585d5e Change KiCad Board language to KiCad Legacy Layout (#3799)
* Change KiCad Board language to KiCad Legacy Layout

KiCad .brd files and .kicad_pcb files have the same purpose: they are both source files for PCB layouts. Having one of the file types named "KiCad Board" and the other one "KiCad Layout" can cause confusion since it implies they are not the same thing.

The [.brd files use the old, legacy layout format](http://kicad-pcb.org/help/file-formats/#_native_file_formats) that is [not actively used anymore](https://github.com/search?utf8=%E2%9C%93&q=language%3A%22KiCad+Board%22&type=Repositories&ref=advsearch&l=KiCad+Board&l=). Having it come before the KiCad Layout language in the Language Selection list and not having it flagged as legacy can cause people to select it when searching for KiCad layout files.

* Change KiCad sample according to changes in 4b306f34

* Update vendor/README.md using script/list-grammars
2017-09-07 09:52:27 +01:00
DoctorWhoof
fa7d433886 Added ".monkey2" extension to Monkey Programming Language (#3809)
The latest Monkey Programming Language extension is ".monkey2". The language description is available at "http://monkeycoder.co.nz".
2017-09-07 09:39:52 +01:00
PatrickJS
998e24cf36 Add ".gql" as a GraphQL file extension (#3813) 2017-09-07 09:38:42 +01:00
John Gardner
63ff51e2ed Add test to keep grammar-list synced with submodules (#3793)
* Add test to check if grammar list is outdated

* Update grammar list

* Fix duplicate punctuation in error messages
2017-08-24 21:13:30 +10:00
Colin Seymour
b541b53b78 Byebug requires Ruby 2.2 (#3790)
Also don't attempt to install it during testing.
2017-08-24 10:17:12 +01:00
Hardmath123
a878620a8e Add nearley language definition. (#3781) 2017-08-17 18:03:38 +01:00
John Gardner
5633fd3668 Fix classification of bogus "markup" languages (#3751)
* Reclassify Protocol Buffer as a data-type language

References: #3740

* Fix classification of bogus "markup" languages

* Fix category of the ironically-named "Pure Data"

Ironically and *appropriately* named, might I add.
2017-08-16 22:48:51 +10:00
Colin Seymour
9d0af0da40 Update to charlock_holmes 0.7.5 (#3778)
This fixes https://github.com/github/linguist/issues/3777
2017-08-16 10:08:33 +01:00
184 changed files with 11300 additions and 810 deletions

.gitignore

@@ -8,3 +8,6 @@ lib/linguist/samples.json
/node_modules
test/fixtures/ace_modes.json
/vendor/gems/
/tmp
*.bundle
*.so

.gitmodules

@@ -169,9 +169,6 @@
[submodule "vendor/grammars/Agda.tmbundle"]
path = vendor/grammars/Agda.tmbundle
url = https://github.com/mokus0/Agda.tmbundle
[submodule "vendor/grammars/Julia.tmbundle"]
path = vendor/grammars/Julia.tmbundle
url = https://github.com/JuliaEditorSupport/Julia.tmbundle
[submodule "vendor/grammars/ooc.tmbundle"]
path = vendor/grammars/ooc.tmbundle
url = https://github.com/nilium/ooc.tmbundle
@@ -400,10 +397,6 @@
[submodule "vendor/grammars/sublime_cobol"]
path = vendor/grammars/sublime_cobol
url = https://bitbucket.org/bitlang/sublime_cobol
[submodule "vendor/grammars/ruby.tmbundle"]
path = vendor/grammars/ruby.tmbundle
url = https://github.com/aroben/ruby.tmbundle
branch = pl
[submodule "vendor/grammars/IDL-Syntax"]
path = vendor/grammars/IDL-Syntax
url = https://github.com/andik/IDL-Syntax
@@ -446,9 +439,6 @@
[submodule "vendor/grammars/sublime-golo"]
path = vendor/grammars/sublime-golo
url = https://github.com/TypeUnsafe/sublime-golo
[submodule "vendor/grammars/JSyntax"]
path = vendor/grammars/JSyntax
url = https://github.com/bcj/JSyntax
[submodule "vendor/grammars/TXL"]
path = vendor/grammars/TXL
url = https://github.com/MikeHoffert/Sublime-Text-TXL-syntax
@@ -569,9 +559,6 @@
[submodule "vendor/grammars/sublime-aspectj"]
path = vendor/grammars/sublime-aspectj
url = https://github.com/pchaigno/sublime-aspectj
[submodule "vendor/grammars/sublime-typescript"]
path = vendor/grammars/sublime-typescript
url = https://github.com/Microsoft/TypeScript-Sublime-Plugin
[submodule "vendor/grammars/sublime-pony"]
path = vendor/grammars/sublime-pony
url = https://github.com/CausalityLtd/sublime-pony
@@ -866,3 +853,45 @@
[submodule "vendor/grammars/language-reason"]
path = vendor/grammars/language-reason
url = https://github.com/reasonml-editor/language-reason
[submodule "vendor/grammars/sublime-nearley"]
path = vendor/grammars/sublime-nearley
url = https://github.com/Hardmath123/sublime-nearley
[submodule "vendor/grammars/data-weave-tmLanguage"]
path = vendor/grammars/data-weave-tmLanguage
url = https://github.com/mulesoft-labs/data-weave-tmLanguage
[submodule "vendor/grammars/squirrel-language"]
path = vendor/grammars/squirrel-language
url = https://github.com/mathewmariani/squirrel-language
[submodule "vendor/grammars/language-ballerina"]
path = vendor/grammars/language-ballerina
url = https://github.com/ballerinalang/plugin-vscode
[submodule "vendor/grammars/language-yara"]
path = vendor/grammars/language-yara
url = https://github.com/blacktop/language-yara
[submodule "vendor/grammars/language-ruby"]
path = vendor/grammars/language-ruby
url = https://github.com/atom/language-ruby
[submodule "vendor/grammars/sublime-angelscript"]
path = vendor/grammars/sublime-angelscript
url = https://github.com/wronex/sublime-angelscript
[submodule "vendor/grammars/TypeScript-TmLanguage"]
path = vendor/grammars/TypeScript-TmLanguage
url = https://github.com/Microsoft/TypeScript-TmLanguage
[submodule "vendor/grammars/wdl-sublime-syntax-highlighter"]
path = vendor/grammars/wdl-sublime-syntax-highlighter
url = https://github.com/broadinstitute/wdl-sublime-syntax-highlighter
[submodule "vendor/grammars/atom-language-julia"]
path = vendor/grammars/atom-language-julia
url = https://github.com/JuliaEditorSupport/atom-language-julia
[submodule "vendor/grammars/language-cwl"]
path = vendor/grammars/language-cwl
url = https://github.com/manabuishii/language-cwl
[submodule "vendor/grammars/Syntax-highlighting-for-PostCSS"]
path = vendor/grammars/Syntax-highlighting-for-PostCSS
url = https://github.com/hudochenkov/Syntax-highlighting-for-PostCSS
[submodule "vendor/grammars/javadoc.tmbundle"]
path = vendor/grammars/javadoc.tmbundle
url = https://github.com/textmate/javadoc.tmbundle
[submodule "vendor/grammars/JSyntax"]
path = vendor/grammars/JSyntax
url = https://github.com/tikkanz/JSyntax

.travis.yml

@@ -19,10 +19,6 @@ rvm:
- 2.3.3
- 2.4.0
matrix:
allow_failures:
- rvm: 2.4.0
notifications:
disabled: true
@@ -32,3 +28,5 @@ git:
cache: bundler
dist: precise
bundler_args: --without debug

CONTRIBUTING.md

@@ -17,7 +17,7 @@ To add support for a new extension:
In addition, if this extension is already listed in [`languages.yml`][languages] then sometimes a few more steps will need to be taken:
1. Make sure that example `.yourextension` files are present in the [samples directory][samples] for each language that uses `.yourextension`.
1. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.yourextension` files. (ping **@bkeepers** to help with this) to ensure we're not misclassifying files.
1. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.yourextension` files. (ping **@lildude** to help with this) to ensure we're not misclassifying files.
1. If the Bayesian classifier does a bad job with the sample `.yourextension` files then a [heuristic](https://github.com/github/linguist/blob/master/lib/linguist/heuristics.rb) may need to be written to help.
@@ -36,7 +36,7 @@ To add support for a new language:
In addition, if your new language defines an extension that's already listed in [`languages.yml`][languages] (such as `.foo`) then sometimes a few more steps will need to be taken:
1. Make sure that example `.foo` files are present in the [samples directory][samples] for each language that uses `.foo`.
1. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.foo` files. (ping **@bkeepers** to help with this) to ensure we're not misclassifying files.
1. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.foo` files. (ping **@lildude** to help with this) to ensure we're not misclassifying files.
1. If the Bayesian classifier does a bad job with the sample `.foo` files then a [heuristic](https://github.com/github/linguist/blob/master/lib/linguist/heuristics.rb) may need to be written to help.
Remember, the goal here is to try and avoid false positives!
@@ -93,6 +93,7 @@ Linguist is maintained with :heart: by:
- **@BenEddy** (GitHub staff)
- **@Caged** (GitHub staff)
- **@grantr** (GitHub staff)
- **@kivikakk** (GitHub staff)
- **@larsbrinkhoff**
- **@lildude** (GitHub staff)
- **@pchaigno**

Gemfile

@@ -1,3 +1,6 @@
source 'https://rubygems.org'
gemspec :name => "github-linguist"
gem 'byebug' if RUBY_VERSION >= '2.0'
group :debug do
gem 'byebug' if RUBY_VERSION >= '2.2'
end

Rakefile

@@ -1,6 +1,7 @@
require 'bundler/setup'
require 'rake/clean'
require 'rake/testtask'
require 'rake/extensiontask'
require 'yaml'
require 'yajl'
require 'open-uri'
@@ -10,8 +11,14 @@ task :default => :test
Rake::TestTask.new
gem_spec = Gem::Specification.load('github-linguist.gemspec')
Rake::ExtensionTask.new('linguist', gem_spec) do |ext|
ext.lib_dir = File.join('lib', 'linguist')
end
# Extend test task to check for samples and fetch latest Ace modes
task :test => [:check_samples, :fetch_ace_modes]
task :test => [:compile, :check_samples, :fetch_ace_modes]
desc "Check that we have samples.json generated"
task :check_samples do
@@ -34,12 +41,19 @@ task :fetch_ace_modes do
end
end
task :samples do
task :samples => :compile do
require 'linguist/samples'
json = Yajl.dump(Linguist::Samples.data, :pretty => true)
File.write 'lib/linguist/samples.json', json
end
task :flex do
if `flex -V` !~ /^flex \d+\.\d+\.\d+/
fail "flex not detected"
end
system "cd ext/linguist && flex tokenizer.l"
end
task :build_gem => :samples do
rm_rf "grammars"
sh "script/convert-grammars"

ext/linguist/extconf.rb (new file)

@@ -0,0 +1,3 @@
require 'mkmf'
dir_config('linguist')
create_makefile('linguist/linguist')

ext/linguist/lex.linguist_yy.c (new file): diff suppressed because it is too large

ext/linguist/lex.linguist_yy.h (new file)

@@ -0,0 +1,336 @@
#ifndef linguist_yyHEADER_H
#define linguist_yyHEADER_H 1
#define linguist_yyIN_HEADER 1
#line 6 "lex.linguist_yy.h"
#define YY_INT_ALIGNED short int
/* A lexical scanner generated by flex */
#define FLEX_SCANNER
#define YY_FLEX_MAJOR_VERSION 2
#define YY_FLEX_MINOR_VERSION 5
#define YY_FLEX_SUBMINOR_VERSION 35
#if YY_FLEX_SUBMINOR_VERSION > 0
#define FLEX_BETA
#endif
/* First, we deal with platform-specific or compiler-specific issues. */
/* begin standard C headers. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <stdlib.h>
/* end standard C headers. */
/* flex integer type definitions */
#ifndef FLEXINT_H
#define FLEXINT_H
/* C99 systems have <inttypes.h>. Non-C99 systems may or may not. */
#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
/* C99 says to define __STDC_LIMIT_MACROS before including stdint.h,
* if you want the limit (max/min) macros for int types.
*/
#ifndef __STDC_LIMIT_MACROS
#define __STDC_LIMIT_MACROS 1
#endif
#include <inttypes.h>
typedef int8_t flex_int8_t;
typedef uint8_t flex_uint8_t;
typedef int16_t flex_int16_t;
typedef uint16_t flex_uint16_t;
typedef int32_t flex_int32_t;
typedef uint32_t flex_uint32_t;
typedef uint64_t flex_uint64_t;
#else
typedef signed char flex_int8_t;
typedef short int flex_int16_t;
typedef int flex_int32_t;
typedef unsigned char flex_uint8_t;
typedef unsigned short int flex_uint16_t;
typedef unsigned int flex_uint32_t;
#endif /* ! C99 */
/* Limits of integral types. */
#ifndef INT8_MIN
#define INT8_MIN (-128)
#endif
#ifndef INT16_MIN
#define INT16_MIN (-32767-1)
#endif
#ifndef INT32_MIN
#define INT32_MIN (-2147483647-1)
#endif
#ifndef INT8_MAX
#define INT8_MAX (127)
#endif
#ifndef INT16_MAX
#define INT16_MAX (32767)
#endif
#ifndef INT32_MAX
#define INT32_MAX (2147483647)
#endif
#ifndef UINT8_MAX
#define UINT8_MAX (255U)
#endif
#ifndef UINT16_MAX
#define UINT16_MAX (65535U)
#endif
#ifndef UINT32_MAX
#define UINT32_MAX (4294967295U)
#endif
#endif /* ! FLEXINT_H */
#ifdef __cplusplus
/* The "const" storage-class-modifier is valid. */
#define YY_USE_CONST
#else /* ! __cplusplus */
/* C99 requires __STDC__ to be defined as 1. */
#if defined (__STDC__)
#define YY_USE_CONST
#endif /* defined (__STDC__) */
#endif /* ! __cplusplus */
#ifdef YY_USE_CONST
#define yyconst const
#else
#define yyconst
#endif
/* An opaque pointer. */
#ifndef YY_TYPEDEF_YY_SCANNER_T
#define YY_TYPEDEF_YY_SCANNER_T
typedef void* yyscan_t;
#endif
/* For convenience, these vars (plus the bison vars far below)
are macros in the reentrant scanner. */
#define yyin yyg->yyin_r
#define yyout yyg->yyout_r
#define yyextra yyg->yyextra_r
#define yyleng yyg->yyleng_r
#define yytext yyg->yytext_r
#define yylineno (YY_CURRENT_BUFFER_LVALUE->yy_bs_lineno)
#define yycolumn (YY_CURRENT_BUFFER_LVALUE->yy_bs_column)
#define yy_flex_debug yyg->yy_flex_debug_r
/* Size of default input buffer. */
#ifndef YY_BUF_SIZE
#define YY_BUF_SIZE 16384
#endif
#ifndef YY_TYPEDEF_YY_BUFFER_STATE
#define YY_TYPEDEF_YY_BUFFER_STATE
typedef struct yy_buffer_state *YY_BUFFER_STATE;
#endif
#ifndef YY_TYPEDEF_YY_SIZE_T
#define YY_TYPEDEF_YY_SIZE_T
typedef size_t yy_size_t;
#endif
#ifndef YY_STRUCT_YY_BUFFER_STATE
#define YY_STRUCT_YY_BUFFER_STATE
struct yy_buffer_state
{
FILE *yy_input_file;
char *yy_ch_buf; /* input buffer */
char *yy_buf_pos; /* current position in input buffer */
/* Size of input buffer in bytes, not including room for EOB
* characters.
*/
yy_size_t yy_buf_size;
/* Number of characters read into yy_ch_buf, not including EOB
* characters.
*/
yy_size_t yy_n_chars;
/* Whether we "own" the buffer - i.e., we know we created it,
* and can realloc() it to grow it, and should free() it to
* delete it.
*/
int yy_is_our_buffer;
/* Whether this is an "interactive" input source; if so, and
* if we're using stdio for input, then we want to use getc()
* instead of fread(), to make sure we stop fetching input after
* each newline.
*/
int yy_is_interactive;
/* Whether we're considered to be at the beginning of a line.
* If so, '^' rules will be active on the next match, otherwise
* not.
*/
int yy_at_bol;
int yy_bs_lineno; /**< The line count. */
int yy_bs_column; /**< The column count. */
/* Whether to try to fill the input buffer when we reach the
* end of it.
*/
int yy_fill_buffer;
int yy_buffer_status;
};
#endif /* !YY_STRUCT_YY_BUFFER_STATE */
void linguist_yyrestart (FILE *input_file ,yyscan_t yyscanner );
void linguist_yy_switch_to_buffer (YY_BUFFER_STATE new_buffer ,yyscan_t yyscanner );
YY_BUFFER_STATE linguist_yy_create_buffer (FILE *file,int size ,yyscan_t yyscanner );
void linguist_yy_delete_buffer (YY_BUFFER_STATE b ,yyscan_t yyscanner );
void linguist_yy_flush_buffer (YY_BUFFER_STATE b ,yyscan_t yyscanner );
void linguist_yypush_buffer_state (YY_BUFFER_STATE new_buffer ,yyscan_t yyscanner );
void linguist_yypop_buffer_state (yyscan_t yyscanner );
YY_BUFFER_STATE linguist_yy_scan_buffer (char *base,yy_size_t size ,yyscan_t yyscanner );
YY_BUFFER_STATE linguist_yy_scan_string (yyconst char *yy_str ,yyscan_t yyscanner );
YY_BUFFER_STATE linguist_yy_scan_bytes (yyconst char *bytes,yy_size_t len ,yyscan_t yyscanner );
void *linguist_yyalloc (yy_size_t ,yyscan_t yyscanner );
void *linguist_yyrealloc (void *,yy_size_t ,yyscan_t yyscanner );
void linguist_yyfree (void * ,yyscan_t yyscanner );
/* Begin user sect3 */
#define yytext_ptr yytext_r
#ifdef YY_HEADER_EXPORT_START_CONDITIONS
#define INITIAL 0
#define sgml 1
#define c_comment 2
#define xml_comment 3
#define haskell_comment 4
#define ocaml_comment 5
#define python_dcomment 6
#define python_scomment 7
#endif
#ifndef YY_NO_UNISTD_H
/* Special case for "unistd.h", since it is non-ANSI. We include it way
* down here because we want the user's section 1 to have been scanned first.
* The user has a chance to override it with an option.
*/
#include <unistd.h>
#endif
#define YY_EXTRA_TYPE struct tokenizer_extra *
int linguist_yylex_init (yyscan_t* scanner);
int linguist_yylex_init_extra (YY_EXTRA_TYPE user_defined,yyscan_t* scanner);
/* Accessor methods to globals.
These are made visible to non-reentrant scanners for convenience. */
int linguist_yylex_destroy (yyscan_t yyscanner );
int linguist_yyget_debug (yyscan_t yyscanner );
void linguist_yyset_debug (int debug_flag ,yyscan_t yyscanner );
YY_EXTRA_TYPE linguist_yyget_extra (yyscan_t yyscanner );
void linguist_yyset_extra (YY_EXTRA_TYPE user_defined ,yyscan_t yyscanner );
FILE *linguist_yyget_in (yyscan_t yyscanner );
void linguist_yyset_in (FILE * in_str ,yyscan_t yyscanner );
FILE *linguist_yyget_out (yyscan_t yyscanner );
void linguist_yyset_out (FILE * out_str ,yyscan_t yyscanner );
yy_size_t linguist_yyget_leng (yyscan_t yyscanner );
char *linguist_yyget_text (yyscan_t yyscanner );
int linguist_yyget_lineno (yyscan_t yyscanner );
void linguist_yyset_lineno (int line_number ,yyscan_t yyscanner );
/* Macros after this point can all be overridden by user definitions in
* section 1.
*/
#ifndef YY_SKIP_YYWRAP
#ifdef __cplusplus
extern "C" int linguist_yywrap (yyscan_t yyscanner );
#else
extern int linguist_yywrap (yyscan_t yyscanner );
#endif
#endif
#ifndef yytext_ptr
static void yy_flex_strncpy (char *,yyconst char *,int ,yyscan_t yyscanner);
#endif
#ifdef YY_NEED_STRLEN
static int yy_flex_strlen (yyconst char * ,yyscan_t yyscanner);
#endif
#ifndef YY_NO_INPUT
#endif
/* Amount of stuff to slurp up with each read. */
#ifndef YY_READ_BUF_SIZE
#define YY_READ_BUF_SIZE 8192
#endif
/* Number of entries by which start-condition stack grows. */
#ifndef YY_START_STACK_INCR
#define YY_START_STACK_INCR 25
#endif
/* Default declaration of generated scanner - a define so the user can
* easily add parameters.
*/
#ifndef YY_DECL
#define YY_DECL_IS_OURS 1
extern int linguist_yylex (yyscan_t yyscanner);
#define YY_DECL int linguist_yylex (yyscan_t yyscanner)
#endif /* !YY_DECL */
/* yy_get_previous_state - get the state just before the EOB char was reached */
#undef YY_NEW_FILE
#undef YY_FLUSH_BUFFER
#undef yy_set_bol
#undef yy_new_buffer
#undef yy_set_interactive
#undef YY_DO_BEFORE_ACTION
#ifdef YY_DECL_IS_OURS
#undef YY_DECL_IS_OURS
#undef YY_DECL
#endif
#line 118 "tokenizer.l"
#line 335 "lex.linguist_yy.h"
#undef linguist_yyIN_HEADER
#endif /* linguist_yyHEADER_H */

ext/linguist/linguist.c (new file)

@@ -0,0 +1,75 @@
#include "ruby.h"
#include "linguist.h"
#include "lex.linguist_yy.h"
// Anything longer is unlikely to be useful.
#define MAX_TOKEN_LEN 32
int linguist_yywrap(yyscan_t yyscanner) {
return 1;
}
static VALUE rb_tokenizer_extract_tokens(VALUE self, VALUE rb_data) {
YY_BUFFER_STATE buf;
yyscan_t scanner;
struct tokenizer_extra extra;
VALUE ary, s;
long len;
int r;
Check_Type(rb_data, T_STRING);
len = RSTRING_LEN(rb_data);
if (len > 100000)
len = 100000;
linguist_yylex_init_extra(&extra, &scanner);
buf = linguist_yy_scan_bytes(RSTRING_PTR(rb_data), (int) len, scanner);
ary = rb_ary_new();
do {
extra.type = NO_ACTION;
extra.token = NULL;
r = linguist_yylex(scanner);
switch (extra.type) {
case NO_ACTION:
break;
case REGULAR_TOKEN:
len = strlen(extra.token);
if (len <= MAX_TOKEN_LEN)
rb_ary_push(ary, rb_str_new(extra.token, len));
free(extra.token);
break;
case SHEBANG_TOKEN:
len = strlen(extra.token);
if (len <= MAX_TOKEN_LEN) {
s = rb_str_new2("SHEBANG#!");
rb_str_cat(s, extra.token, len);
rb_ary_push(ary, s);
}
free(extra.token);
break;
case SGML_TOKEN:
len = strlen(extra.token);
if (len <= MAX_TOKEN_LEN) {
s = rb_str_new(extra.token, len);
rb_str_cat2(s, ">");
rb_ary_push(ary, s);
}
free(extra.token);
break;
}
} while (r);
linguist_yy_delete_buffer(buf, scanner);
linguist_yylex_destroy(scanner);
return ary;
}
__attribute__((visibility("default"))) void Init_linguist() {
VALUE rb_mLinguist = rb_define_module("Linguist");
VALUE rb_cTokenizer = rb_define_class_under(rb_mLinguist, "Tokenizer", rb_cObject);
rb_define_method(rb_cTokenizer, "extract_tokens", rb_tokenizer_extract_tokens, 1);
}

ext/linguist/linguist.h (new file)

@@ -0,0 +1,11 @@
enum tokenizer_type {
NO_ACTION,
REGULAR_TOKEN,
SHEBANG_TOKEN,
SGML_TOKEN,
};
struct tokenizer_extra {
char *token;
enum tokenizer_type type;
};

ext/linguist/tokenizer.l (new file)

@@ -0,0 +1,119 @@
%{
#include "linguist.h"
#define feed_token(tok, typ) do { \
yyextra->token = (tok); \
yyextra->type = (typ); \
} while (0)
#define eat_until_eol() do { \
int c; \
while ((c = input(yyscanner)) != '\n' && c != EOF && c); \
if (c == EOF || !c) \
return 0; \
} while (0)
#define eat_until_unescaped(q) do { \
int c; \
while ((c = input(yyscanner)) != EOF && c) { \
if (c == '\n') \
break; \
if (c == '\\') { \
c = input(yyscanner); \
if (c == EOF || !c) \
return 0; \
} else if (c == q) \
break; \
} \
if (c == EOF || !c) \
return 0; \
} while (0)
%}
%option never-interactive yywrap reentrant nounput warn nodefault header-file="lex.linguist_yy.h" extra-type="struct tokenizer_extra *" prefix="linguist_yy"
%x sgml c_comment xml_comment haskell_comment ocaml_comment python_dcomment python_scomment
%%
^#![ \t]*([[:alnum:]_\/]*\/)?env([ \t]+([^ \t=]*=[^ \t]*))*[ \t]+[[:alpha:]_]+ {
const char *off = strrchr(yytext, ' ');
if (!off)
off = yytext;
else
++off;
feed_token(strdup(off), SHEBANG_TOKEN);
eat_until_eol();
return 1;
}
^#![ \t]*[[:alpha:]_\/]+ {
const char *off = strrchr(yytext, '/');
if (!off)
off = yytext;
else
++off;
if (strcmp(off, "env") == 0) {
eat_until_eol();
} else {
feed_token(strdup(off), SHEBANG_TOKEN);
eat_until_eol();
return 1;
}
}
^[ \t]*(\/\/|--|\#|%|\")" ".* { /* nothing */ }
"/*" { BEGIN(c_comment); }
/* See below for xml_comment start. */
"{-" { BEGIN(haskell_comment); }
"(*" { BEGIN(ocaml_comment); }
"\"\"\"" { BEGIN(python_dcomment); }
"'''" { BEGIN(python_scomment); }
<c_comment,xml_comment,haskell_comment,ocaml_comment,python_dcomment,python_scomment>.|\n { /* nothing */ }
<c_comment>"*/" { BEGIN(INITIAL); }
<xml_comment>"-->" { BEGIN(INITIAL); }
<haskell_comment>"-}" { BEGIN(INITIAL); }
<ocaml_comment>"*)" { BEGIN(INITIAL); }
<python_dcomment>"\"\"\"" { BEGIN(INITIAL); }
<python_scomment>"'''" { BEGIN(INITIAL); }
\"\"|'' { /* nothing */ }
\" { eat_until_unescaped('"'); }
' { eat_until_unescaped('\''); }
(0x[0-9a-fA-F]([0-9a-fA-F]|\.)*|[0-9]([0-9]|\.)*)([uU][lL]{0,2}|([eE][-+][0-9]*)?[fFlL]*) { /* nothing */ }
\<[[:alnum:]_!./?-]+ {
if (strcmp(yytext, "<!--") == 0) {
BEGIN(xml_comment);
} else {
feed_token(strdup(yytext), SGML_TOKEN);
BEGIN(sgml);
return 1;
}
}
<sgml>[[:alnum:]_]+=\" { feed_token(strndup(yytext, strlen(yytext) - 1), REGULAR_TOKEN); eat_until_unescaped('"'); return 1; }
<sgml>[[:alnum:]_]+=' { feed_token(strndup(yytext, strlen(yytext) - 1), REGULAR_TOKEN); eat_until_unescaped('\''); return 1; }
<sgml>[[:alnum:]_]+=[[:alnum:]_]* { feed_token(strdup(yytext), REGULAR_TOKEN); *(strchr(yyextra->token, '=') + 1) = 0; return 1; }
<sgml>[[:alnum:]_]+ { feed_token(strdup(yytext), REGULAR_TOKEN); return 1; }
<sgml>\> { BEGIN(INITIAL); }
<sgml>.|\n { /* nothing */ }
;|\{|\}|\(|\)|\[|\] { feed_token(strdup(yytext), REGULAR_TOKEN); return 1; }
[[:alnum:]_.@#/*]+ {
if (strncmp(yytext, "/*", 2) == 0) {
if (strlen(yytext) >= 4 && strcmp(yytext + strlen(yytext) - 2, "*/") == 0) {
/* nothing */
} else {
BEGIN(c_comment);
}
} else {
feed_token(strdup(yytext), REGULAR_TOKEN);
return 1;
}
}
\<\<?|\+|\-|\*|\/|%|&&?|\|\|? { feed_token(strdup(yytext), REGULAR_TOKEN); return 1; }
.|\n { /* nothing */ }
%%

github-linguist.gemspec

@@ -10,15 +10,17 @@ Gem::Specification.new do |s|
s.homepage = "https://github.com/github/linguist"
s.license = "MIT"
s.files = Dir['lib/**/*'] + Dir['grammars/*'] + ['LICENSE']
s.files = Dir['lib/**/*'] + Dir['ext/**/*'] + Dir['grammars/*'] + ['LICENSE']
s.executables = ['linguist', 'git-linguist']
s.extensions = ['ext/linguist/extconf.rb']
s.add_dependency 'charlock_holmes', '~> 0.7.3'
s.add_dependency 'charlock_holmes', '~> 0.7.5'
s.add_dependency 'escape_utils', '~> 1.1.0'
s.add_dependency 'mime-types', '>= 1.19'
s.add_dependency 'rugged', '>= 0.25.1'
s.add_development_dependency 'minitest', '>= 5.0'
s.add_development_dependency 'rake-compiler', '~> 0.9'
s.add_development_dependency 'mocha'
s.add_development_dependency 'plist', '~>3.1'
s.add_development_dependency 'pry'

grammars.yml

@@ -1,4 +1,3 @@
---
https://bitbucket.org/Clams/sublimesystemverilog/get/default.tar.gz:
- source.systemverilog
- source.ucfconstraints
@@ -45,8 +44,6 @@ vendor/grammars/Isabelle.tmbundle:
- source.isabelle.theory
vendor/grammars/JSyntax:
- source.j
vendor/grammars/Julia.tmbundle:
- source.julia
vendor/grammars/Lean.tmbundle:
- source.lean
vendor/grammars/LiveScript.tmbundle:
@@ -130,6 +127,9 @@ vendor/grammars/SublimePuppet:
- source.puppet
vendor/grammars/SublimeXtend:
- source.xtend
vendor/grammars/Syntax-highlighting-for-PostCSS:
- source.css.postcss.sugarss
- source.postcss
vendor/grammars/TLA:
- source.tla
vendor/grammars/TXL:
@@ -138,6 +138,11 @@ vendor/grammars/Terraform.tmLanguage:
- source.terraform
vendor/grammars/Textmate-Gosu-Bundle:
- source.gosu.2
vendor/grammars/TypeScript-TmLanguage:
- source.ts
- source.tsx
- text.error-list
- text.find-refs
vendor/grammars/UrWeb-Language-Definition:
- source.ur
vendor/grammars/VBDotNetSyntax:
@@ -187,6 +192,9 @@ vendor/grammars/atom-language-1c-bsl:
vendor/grammars/atom-language-clean:
- source.clean
- text.restructuredtext.clean
vendor/grammars/atom-language-julia:
- source.julia
- source.julia.console
vendor/grammars/atom-language-p4:
- source.p4
vendor/grammars/atom-language-perl6:
@@ -252,6 +260,8 @@ vendor/grammars/d.tmbundle:
vendor/grammars/dartlang:
- source.dart
- source.yaml-ext
vendor/grammars/data-weave-tmLanguage:
- source.data-weave
vendor/grammars/desktop.tmbundle:
- source.desktop
vendor/grammars/diff.tmbundle:
@@ -333,6 +343,8 @@ vendor/grammars/java.tmbundle:
- source.java-properties
- text.html.jsp
- text.junit-test-report
vendor/grammars/javadoc.tmbundle:
- text.html.javadoc
vendor/grammars/javascript-objective-j.tmbundle:
- source.js.objj
vendor/grammars/jflex.tmbundle:
@@ -350,6 +362,8 @@ vendor/grammars/language-asn1:
vendor/grammars/language-babel:
- source.js.jsx
- source.regexp.babel
vendor/grammars/language-ballerina:
- source.ballerina
vendor/grammars/language-batchfile:
- source.batchfile
vendor/grammars/language-blade:
@@ -377,6 +391,8 @@ vendor/grammars/language-csound:
- source.csound-score
vendor/grammars/language-css:
- source.css
vendor/grammars/language-cwl:
- source.cwl
vendor/grammars/language-emacs-lisp:
- source.emacs.lisp
vendor/grammars/language-fontforge:
@@ -394,6 +410,7 @@ vendor/grammars/language-haml:
- text.haml
- text.hamlc
vendor/grammars/language-haskell:
- annotation.liquidhaskell.haskell
- hint.haskell
- hint.message.haskell
- hint.type.haskell
@@ -401,6 +418,7 @@ vendor/grammars/language-haskell:
- source.cabal
- source.haskell
- source.hsc2hs
- source.hsig
- text.tex.latex.haskell
vendor/grammars/language-inform7:
- source.inform7
@@ -459,6 +477,10 @@ vendor/grammars/language-roff:
vendor/grammars/language-rpm-spec:
- source.changelogs.rpm-spec
- source.rpm-spec
vendor/grammars/language-ruby:
- source.ruby
- source.ruby.gemfile
- text.html.erb
vendor/grammars/language-shellscript:
- source.shell
- text.shell-session
@@ -485,6 +507,8 @@ vendor/grammars/language-yaml:
- source.yaml
vendor/grammars/language-yang:
- source.yang
vendor/grammars/language-yara:
- source.yara
vendor/grammars/latex.tmbundle:
- text.bibtex
- text.log.latex
@@ -551,7 +575,7 @@ vendor/grammars/opa.tmbundle:
- source.opa
vendor/grammars/openscad.tmbundle:
- source.scad
vendor/grammars/oz-tmbundle/Syntaxes/Oz.tmLanguage:
vendor/grammars/oz-tmbundle:
- source.oz
vendor/grammars/parrot:
- source.parrot.pir
@@ -588,9 +612,6 @@ vendor/grammars/rascal-syntax-highlighting:
- source.rascal
vendor/grammars/ruby-slim.tmbundle:
- text.slim
vendor/grammars/ruby.tmbundle:
- source.ruby
- text.html.erb
vendor/grammars/sas.tmbundle:
- source.SASLog
- source.sas
@@ -616,6 +637,8 @@ vendor/grammars/sourcepawn:
- source.sp
vendor/grammars/sql.tmbundle:
- source.sql
vendor/grammars/squirrel-language:
- source.nut
vendor/grammars/st2-zonefile:
- text.zone_file
vendor/grammars/standard-ml.tmbundle:
@@ -623,6 +646,8 @@ vendor/grammars/standard-ml.tmbundle:
- source.ml
vendor/grammars/sublime-MuPAD:
- source.mupad
vendor/grammars/sublime-angelscript:
- source.angelscript
vendor/grammars/sublime-aspectj:
- source.aspectj
vendor/grammars/sublime-autoit:
@@ -644,6 +669,8 @@ vendor/grammars/sublime-golo:
- source.golo
vendor/grammars/sublime-mask:
- source.mask
vendor/grammars/sublime-nearley:
- source.ne
vendor/grammars/sublime-netlinx:
- source.netlinx
- source.netlinx.erb
@@ -669,11 +696,6 @@ vendor/grammars/sublime-terra:
- source.terra
vendor/grammars/sublime-text-ox:
- source.ox
vendor/grammars/sublime-typescript:
- source.ts
- source.tsx
- text.error-list
- text.find-refs
vendor/grammars/sublime-varnish:
- source.varnish.vcl
vendor/grammars/sublime_cobol:
@@ -706,6 +728,8 @@ vendor/grammars/vhdl:
- source.vhdl
vendor/grammars/vue-syntax-highlight:
- text.html.vue
vendor/grammars/wdl-sublime-syntax-highlighter:
- source.wdl
vendor/grammars/xc.tmbundle:
- source.xc
vendor/grammars/xml.tmbundle:

lib/linguist/blob_helper.rb

@@ -275,10 +275,8 @@ module Linguist
# also--importantly--without having to duplicate many (potentially
# large) strings.
begin
encoded_newlines = ["\r\n", "\r", "\n"].
map { |nl| nl.encode(ruby_encoding, "ASCII-8BIT").force_encoding(data.encoding) }
data.split(Regexp.union(encoded_newlines), -1)
data.split(encoded_newlines_re, -1)
rescue Encoding::ConverterNotFoundError
# The data is not splittable in the detected encoding. Assume it's
# one big line.
@@ -289,6 +287,51 @@ module Linguist
end
end
def encoded_newlines_re
@encoded_newlines_re ||= Regexp.union(["\r\n", "\r", "\n"].
map { |nl| nl.encode(ruby_encoding, "ASCII-8BIT").force_encoding(data.encoding) })
end
def first_lines(n)
return lines[0...n] if defined? @lines
return [] unless viewable? && data
i, c = 0, 0
while c < n && j = data.index(encoded_newlines_re, i)
i = j + $&.length
c += 1
end
data[0...i].split(encoded_newlines_re, -1)
end
def last_lines(n)
if defined? @lines
if n >= @lines.length
@lines
else
lines[-n..-1]
end
end
return [] unless viewable? && data
no_eol = true
i, c = data.length, 0
k = i
while c < n && j = data.rindex(encoded_newlines_re, i - 1)
if c == 0 && j + $&.length == i
no_eol = false
n += 1
end
i = j
k = j + $&.length
c += 1
end
r = data[k..-1].split(encoded_newlines_re, -1)
r.pop if !no_eol
r
end
# Public: Get number of lines of code
#
# Requires Blob#data

lib/linguist/classifier.rb

@@ -3,6 +3,8 @@ require 'linguist/tokenizer'
module Linguist
# Language bayesian classifier.
class Classifier
CLASSIFIER_CONSIDER_BYTES = 50 * 1024
# Public: Use the classifier to detect language of the blob.
#
# blob - An object that quacks like a blob.
@@ -17,7 +19,7 @@ module Linguist
# Returns an Array of Language objects, most probable first.
def self.call(blob, possible_languages)
language_names = possible_languages.map(&:name)
classify(Samples.cache, blob.data, language_names).map do |name, _|
classify(Samples.cache, blob.data[0...CLASSIFIER_CONSIDER_BYTES], language_names).map do |name, _|
Language[name] # Return the actual Language objects
end
end

lib/linguist/file_blob.rb

@@ -23,21 +23,21 @@ module Linguist
#
# Returns a String like '100644'
def mode
File.stat(@fullpath).mode.to_s(8)
@mode ||= File.stat(@fullpath).mode.to_s(8)
end
# Public: Read file contents.
#
# Returns a String.
def data
File.read(@fullpath)
@data ||= File.read(@fullpath)
end
# Public: Get byte size
#
# Returns an Integer.
def size
File.size(@fullpath)
@size ||= File.size(@fullpath)
end
end
end

lib/linguist/generated.rb

@@ -52,6 +52,8 @@ module Linguist
# Return true or false
def generated?
xcode_file? ||
cocoapods? ||
carthage_build? ||
generated_net_designer_file? ||
generated_net_specflow_feature_file? ||
composer_lock? ||
@@ -95,6 +97,20 @@ module Linguist
['.nib', '.xcworkspacedata', '.xcuserstate'].include?(extname)
end
# Internal: Is the blob part of Pods/, which contains dependencies not meant for humans in pull requests.
#
# Returns true or false.
def cocoapods?
!!name.match(/(^Pods|\/Pods)\//)
end
# Internal: Is the blob part of Carthage/Build/, which contains dependencies not meant for humans in pull requests.
#
# Returns true or false.
def carthage_build?
!!name.match(/(^|\/)Carthage\/Build\//)
end
# Internal: Is the blob minified files?
#
# Consider a file minified if the average line length is

lib/linguist/heuristics.rb

@@ -1,6 +1,8 @@
module Linguist
# A collection of simple heuristics that can be used to better analyze languages.
class Heuristics
HEURISTICS_CONSIDER_BYTES = 50 * 1024
# Public: Use heuristics to detect language of the blob.
#
# blob - An object that quacks like a blob.
@@ -14,7 +16,7 @@ module Linguist
#
# Returns an Array of languages, or empty if none matched or were inconclusive.
def self.call(blob, candidates)
data = blob.data
data = blob.data[0...HEURISTICS_CONSIDER_BYTES]
@heuristics.each do |heuristic|
if heuristic.matches?(blob.name, candidates)
@@ -71,7 +73,25 @@ module Linguist
end
# Common heuristics
CPlusPlusRegex = Regexp.union(
/^\s*#\s*include <(cstdint|string|vector|map|list|array|bitset|queue|stack|forward_list|unordered_map|unordered_set|(i|o|io)stream)>/,
/^\s*template\s*</,
/^[ \t]*try/,
/^[ \t]*catch\s*\(/,
/^[ \t]*(class|(using[ \t]+)?namespace)\s+\w+/,
/^[ \t]*(private|public|protected):$/,
/std::\w+/)
ObjectiveCRegex = /^\s*(@(interface|class|protocol|property|end|synchronised|selector|implementation)\b|#import\s+.+\.h[">])/
Perl5Regex = /\buse\s+(?:strict\b|v?5\.)/
Perl6Regex = /^\s*(?:use\s+v6\b|\bmodule\b|\b(?:my\s+)?class\b)/
disambiguate ".as" do |data|
if /^\s*(package\s+[a-z0-9_\.]+|import\s+[a-zA-Z0-9_\.]+;|class\s+[A-Za-z0-9_]+\s+extends\s+[A-Za-z0-9_]+)/.match(data)
Language["ActionScript"]
else
Language["AngelScript"]
end
end
disambiguate ".asc" do |data|
if /^(----[- ]BEGIN|ssh-(rsa|dss)) /.match(data)
@@ -211,8 +231,7 @@ module Linguist
disambiguate ".h" do |data|
if ObjectiveCRegex.match(data)
Language["Objective-C"]
elsif (/^\s*#\s*include <(cstdint|string|vector|map|list|array|bitset|queue|stack|forward_list|unordered_map|unordered_set|(i|o|io)stream)>/.match(data) ||
/^\s*template\s*</.match(data) || /^[ \t]*try/.match(data) || /^[ \t]*catch\s*\(/.match(data) || /^[ \t]*(class|(using[ \t]+)?namespace)\s+\w+/.match(data) || /^[ \t]*(private|public|protected):$/.match(data) || /std::\w+/.match(data))
elsif CPlusPlusRegex.match(data)
Language["C++"]
end
end
@@ -342,33 +361,25 @@ module Linguist
disambiguate ".pl" do |data|
if /^[^#]*:-/.match(data)
Language["Prolog"]
elsif /use strict|use\s+v?5\./.match(data)
elsif Perl5Regex.match(data)
Language["Perl"]
elsif /^(use v6|(my )?class|module)/.match(data)
elsif Perl6Regex.match(data)
Language["Perl 6"]
end
end
disambiguate ".pm" do |data|
if /^\s*(?:use\s+v6\s*;|(?:\bmy\s+)?class|module)\b/.match(data)
Language["Perl 6"]
elsif /\buse\s+(?:strict\b|v?5\.)/.match(data)
if Perl5Regex.match(data)
Language["Perl"]
elsif Perl6Regex.match(data)
Language["Perl 6"]
elsif /^\s*\/\* XPM \*\//.match(data)
Language["XPM"]
end
end
disambiguate ".pod", "Pod", "Perl" do |data|
if /^=\w+\b/.match(data)
Language["Pod"]
else
Language["Perl"]
end
end
disambiguate ".pro" do |data|
if /^[^#]+:-/.match(data)
if /^[^\[#]+:-/.match(data)
Language["Prolog"]
elsif data.include?("last_client=")
Language["INI"]
@@ -450,12 +461,12 @@ module Linguist
end
disambiguate ".t" do |data|
if /^\s*%[ \t]+|^\s*var\s+\w+\s*:=\s*\w+/.match(data)
Language["Turing"]
elsif /^\s*(?:use\s+v6\s*;|\bmodule\b|\b(?:my\s+)?class\b)/.match(data)
Language["Perl 6"]
elsif /\buse\s+(?:strict\b|v?5\.)/.match(data)
if Perl5Regex.match(data)
Language["Perl"]
elsif Perl6Regex.match(data)
Language["Perl 6"]
elsif /^\s*%[ \t]+|^\s*var\s+\w+\s*:=\s*\w+/.match(data)
Language["Turing"]
end
end
@@ -468,7 +479,7 @@ module Linguist
end
disambiguate ".ts" do |data|
if data.include?("<TS")
if /<TS\b/.match(data)
Language["XML"]
else
Language["TypeScript"]
@@ -491,5 +502,14 @@ module Linguist
Language["XML"]
end
end
disambiguate ".w" do |data|
if (data.include?("&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _DEFINITIONS"))
Language["OpenEdge ABL"]
elsif /^@(<|\w+\.)/.match(data)
Language["CWeb"]
end
end
end
end

lib/linguist/language.rb

@@ -110,7 +110,7 @@ module Linguist
# Returns the Language or nil if none was found.
def self.find_by_name(name)
return nil if !name.is_a?(String) || name.to_s.empty?
name && (@name_index[name.downcase] || @name_index[name.split(',').first.downcase])
name && (@name_index[name.downcase] || @name_index[name.split(',', 2).first.downcase])
end
# Public: Look up Language by one of its aliases.
@@ -125,7 +125,7 @@ module Linguist
# Returns the Language or nil if none was found.
def self.find_by_alias(name)
return nil if !name.is_a?(String) || name.to_s.empty?
name && (@alias_index[name.downcase] || @alias_index[name.split(',').first.downcase])
name && (@alias_index[name.downcase] || @alias_index[name.split(',', 2).first.downcase])
end
# Public: Look up Languages by filename.
@@ -219,10 +219,7 @@ module Linguist
lang = @index[name.downcase]
return lang if lang
name = name.split(',').first
return nil if name.to_s.empty?
@index[name.downcase]
@index[name.split(',', 2).first.downcase]
end
# Public: A List of popular languages

lib/linguist/languages.yml

@@ -210,6 +210,17 @@ Alpine Abuild:
codemirror_mode: shell
codemirror_mime_type: text/x-sh
language_id: 14
AngelScript:
type: programming
color: "#C7D7DC"
extensions:
- ".as"
- ".angelscript"
tm_scope: source.angelscript
ace_mode: text
codemirror_mode: clike
codemirror_mime_type: text/x-c++src
language_id: 389477596
Ant Build System:
type: data
tm_scope: text.xml.ant
@@ -221,7 +232,7 @@ Ant Build System:
codemirror_mime_type: application/xml
language_id: 15
ApacheConf:
type: markup
type: data
aliases:
- aconf
- apache
@@ -354,6 +365,14 @@ Awk:
- nawk
ace_mode: text
language_id: 28
Ballerina:
type: programming
extensions:
- ".bal"
tm_scope: source.ballerina
ace_mode: text
color: "#FF5000"
language_id: 720859680
Batchfile:
type: programming
aliases:
@@ -625,8 +644,10 @@ CartoCSS:
language_id: 53
Ceylon:
type: programming
color: "#dfa535"
extensions:
- ".ceylon"
tm_scope: source.ceylon
ace_mode: text
language_id: 54
Chapel:
@@ -786,6 +807,19 @@ Common Lisp:
codemirror_mode: commonlisp
codemirror_mime_type: text/x-common-lisp
language_id: 66
Common Workflow Language:
alias: cwl
type: programming
ace_mode: yaml
codemirror_mode: yaml
codemirror_mime_type: text/x-yaml
extensions:
- ".cwl"
interpreters:
- cwl-runner
color: "#B5314C"
tm_scope: source.cwl
language_id: 988547172
Component Pascal:
type: programming
color: "#B0CE4E"
@@ -855,7 +889,7 @@ Csound:
- ".orc"
- ".udo"
tm_scope: source.csound
ace_mode: text
ace_mode: csound_orchestra
language_id: 73
Csound Document:
type: programming
@@ -864,7 +898,7 @@ Csound Document:
extensions:
- ".csd"
tm_scope: source.csound-document
ace_mode: text
ace_mode: csound_document
language_id: 74
Csound Score:
type: programming
@@ -873,7 +907,7 @@ Csound Score:
extensions:
- ".sco"
tm_scope: source.csound-score
ace_mode: text
ace_mode: csound_score
language_id: 75
Cuda:
type: programming
@@ -986,6 +1020,14 @@ Dart:
codemirror_mode: dart
codemirror_mime_type: application/dart
language_id: 87
DataWeave:
type: programming
color: "#003a52"
extensions:
- ".dwl"
ace_mode: text
tm_scope: source.data-weave
language_id: 974514097
Diff:
type: data
extensions:
@@ -1086,8 +1128,7 @@ EQ:
codemirror_mime_type: text/x-csharp
language_id: 96
Eagle:
type: markup
color: "#814C05"
type: data
extensions:
- ".sch"
- ".brd"
@@ -1116,6 +1157,15 @@ Ecere Projects:
codemirror_mode: javascript
codemirror_mime_type: application/json
language_id: 98
Edje Data Collection:
type: data
extensions:
- ".edc"
tm_scope: source.json
ace_mode: json
codemirror_mode: javascript
codemirror_mime_type: application/json
language_id: 342840478
Eiffel:
type: programming
color: "#946d57"
@@ -1487,8 +1537,8 @@ Gerber Image:
- ".gtp"
- ".gts"
interpreters:
- "gerbv"
- "gerbview"
- gerbv
- gerbview
tm_scope: source.gerber
ace_mode: text
language_id: 404627610
@@ -1605,6 +1655,7 @@ GraphQL:
type: data
extensions:
- ".graphql"
- ".gql"
tm_scope: source.graphql
ace_mode: text
language_id: 139
@@ -1868,6 +1919,8 @@ INI:
- ".prefs"
- ".pro"
- ".properties"
filenames:
- buildozer.spec
tm_scope: source.ini
aliases:
- dosini
@@ -1890,6 +1943,7 @@ IRC log:
language_id: 164
Idris:
type: programming
color: "#b30000"
extensions:
- ".idr"
- ".lidr"
@@ -2078,6 +2132,7 @@ JavaScript:
- ".jsfl"
- ".jsm"
- ".jss"
- ".mjs"
- ".njs"
- ".pac"
- ".sjs"
@@ -2149,13 +2204,6 @@ KRL:
tm_scope: none
ace_mode: text
language_id: 186
KiCad Board:
type: data
extensions:
- ".brd"
tm_scope: source.pcb.board
ace_mode: text
language_id: 140848857
KiCad Layout:
type: data
aliases:
@@ -2171,6 +2219,13 @@ KiCad Layout:
codemirror_mode: commonlisp
codemirror_mime_type: text/x-common-lisp
language_id: 187
KiCad Legacy Layout:
type: data
extensions:
- ".brd"
tm_scope: source.pcb.board
ace_mode: text
language_id: 140848857
KiCad Schematic:
type: data
aliases:
@@ -2203,9 +2258,9 @@ Kotlin:
language_id: 189
LFE:
type: programming
color: "#4C3023"
extensions:
- ".lfe"
group: Erlang
tm_scope: source.lisp
ace_mode: lisp
codemirror_mode: commonlisp
@@ -2614,7 +2669,7 @@ Mathematica:
language_id: 224
Matlab:
type: programming
color: "#bb92ac"
color: "#e16737"
aliases:
- octave
extensions:
@@ -2741,6 +2796,7 @@ Monkey:
type: programming
extensions:
- ".monkey"
- ".monkey2"
ace_mode: text
tm_scope: source.monkey
language_id: 236
@@ -2790,6 +2846,15 @@ NSIS:
codemirror_mode: nsis
codemirror_mime_type: text/x-nsis
language_id: 242
Nearley:
type: programming
ace_mode: text
color: "#990000"
extensions:
- ".ne"
- ".nearley"
tm_scope: source.ne
language_id: 521429430
Nemerle:
type: programming
color: "#3d3c6e"
@@ -2841,7 +2906,7 @@ NewLisp:
codemirror_mime_type: text/x-common-lisp
language_id: 247
Nginx:
type: markup
type: data
extensions:
- ".nginxconf"
- ".vhost"
@@ -2853,7 +2918,6 @@ Nginx:
ace_mode: text
codemirror_mode: nginx
codemirror_mime_type: text/x-nginx-conf
color: "#9469E9"
language_id: 248
Nim:
type: programming
@@ -3028,6 +3092,7 @@ OpenEdge ABL:
extensions:
- ".p"
- ".cls"
- ".w"
tm_scope: source.abl
ace_mode: text
language_id: 264
@@ -3271,7 +3336,6 @@ Perl:
- ".ph"
- ".plx"
- ".pm"
- ".pod"
- ".psgi"
- ".t"
filenames:
@@ -3376,6 +3440,14 @@ Pony:
tm_scope: source.pony
ace_mode: text
language_id: 290
PostCSS:
type: markup
tm_scope: source.postcss
group: CSS
extensions:
- ".pcss"
ace_mode: text
language_id: 262764437
PostScript:
type: markup
color: "#da291c"
@@ -3442,7 +3514,7 @@ Propeller Spin:
ace_mode: text
language_id: 296
Protocol Buffer:
type: markup
type: data
aliases:
- protobuf
- Protocol Buffers
@@ -3487,8 +3559,7 @@ Puppet:
tm_scope: source.puppet
language_id: 299
Pure Data:
type: programming
color: "#91de79"
type: data
extensions:
- ".pd"
tm_scope: none
@@ -3542,6 +3613,7 @@ Python:
- ".gclient"
- BUCK
- BUILD
- BUILD.bazel
- SConscript
- SConstruct
- Snakefile
@@ -4363,6 +4435,14 @@ Sublime Text Config:
- ".sublime_metrics"
- ".sublime_session"
language_id: 423
SugarSS:
type: markup
tm_scope: source.css.postcss.sugarss
group: CSS
extensions:
- ".sss"
ace_mode: text
language_id: 826404698
SuperCollider:
type: programming
color: "#46390b"
@@ -4660,8 +4740,8 @@ UrWeb:
ace_mode: text
language_id: 383
VCL:
group: Perl
type: programming
color: "#0298c3"
extensions:
- ".vcl"
tm_scope: source.varnish.vcl
@@ -4773,8 +4853,7 @@ Wavefront Object:
ace_mode: text
language_id: 393
Web Ontology Language:
type: markup
color: "#9cc9dd"
type: data
extensions:
- ".owl"
tm_scope: text.xml
@@ -4855,12 +4934,16 @@ XML:
- ".ant"
- ".axml"
- ".builds"
- ".ccproj"
- ".ccxml"
- ".clixml"
- ".cproject"
- ".cscfg"
- ".csdef"
- ".csl"
- ".csproj"
- ".ct"
- ".depproj"
- ".dita"
- ".ditamap"
- ".ditaval"
@@ -4883,6 +4966,8 @@ XML:
- ".mm"
- ".mod"
- ".mxml"
- ".natvis"
- ".ndproj"
- ".nproj"
- ".nuspec"
- ".odd"
@@ -4890,6 +4975,7 @@ XML:
- ".pkgproj"
- ".plist"
- ".pluginspec"
- ".proj"
- ".props"
- ".ps1xml"
- ".psc1"
@@ -4900,6 +4986,7 @@ XML:
- ".sch"
- ".scxml"
- ".sfproj"
- ".shproj"
- ".srdf"
- ".storyboard"
- ".stTheme"
@@ -4961,11 +5048,11 @@ XPM:
tm_scope: source.c
language_id: 781846279
XPages:
type: programming
type: data
extensions:
- ".xsp-config"
- ".xsp.metadata"
tm_scope: none
tm_scope: text.xml
ace_mode: xml
codemirror_mode: xml
codemirror_mime_type: text/xml
@@ -5050,6 +5137,7 @@ YAML:
- ".yml.mysql"
filenames:
- ".clang-format"
- ".clang-tidy"
ace_mode: yaml
codemirror_mode: yaml
codemirror_mime_type: text/x-yaml
@@ -5061,6 +5149,14 @@ YANG:
tm_scope: source.yang
ace_mode: text
language_id: 408
YARA:
type: data
ace_mode: text
extensions:
- ".yar"
- ".yara"
tm_scope: source.yara
language_id: 805122868
Yacc:
type: programming
extensions:
@@ -5159,6 +5255,14 @@ reStructuredText:
codemirror_mode: rst
codemirror_mime_type: text/x-rst
language_id: 419
wdl:
type: programming
color: "#42f1f4"
extensions:
- ".wdl"
tm_scope: source.wdl
ace_mode: text
language_id: 374521672
wisp:
type: programming
ace_mode: clojure


@@ -109,8 +109,8 @@ module Linguist
# Returns an Array with one Language if the blob has a Vim or Emacs modeline
# that matches a Language name or alias. Returns an empty array if no match.
def self.call(blob, _ = nil)
header = blob.lines.first(SEARCH_SCOPE).join("\n")
footer = blob.lines.last(SEARCH_SCOPE).join("\n")
header = blob.first_lines(SEARCH_SCOPE).join("\n")
footer = blob.last_lines(SEARCH_SCOPE).join("\n")
Array(Language.find_by_alias(modeline(header + footer)))
end
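For context, a minimal usage sketch of the strategy above (illustrative only, not part of the diff; it assumes Linguist's public FileBlob API and a file whose last line carries a Vim modeline):

    require 'linguist'

    # script.txt is assumed to end with the line:  # vim: set filetype=ruby :
    blob = Linguist::FileBlob.new('script.txt')
    Linguist::Strategy::Modeline.call(blob)
    # => [#<Linguist::Language name=Ruby>]  (an empty Array when no modeline matches)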


@@ -1,4 +1,5 @@
require 'strscan'
require 'linguist/linguist'
module Linguist
# Generic programming language tokenizer.
@@ -15,191 +16,5 @@ module Linguist
def self.tokenize(data)
new.extract_tokens(data)
end
# Read up to 100KB
BYTE_LIMIT = 100_000
# Start state on token, ignore anything till the next newline
SINGLE_LINE_COMMENTS = [
'//', # C
'--', # Ada, Haskell, AppleScript
'#', # Ruby
'%', # Tex
'"', # Vim
]
# Start state on opening token, ignore anything until the closing
# token is reached.
MULTI_LINE_COMMENTS = [
['/*', '*/'], # C
['<!--', '-->'], # XML
['{-', '-}'], # Haskell
['(*', '*)'], # Coq
['"""', '"""'], # Python
["'''", "'''"] # Python
]
START_SINGLE_LINE_COMMENT = Regexp.compile(SINGLE_LINE_COMMENTS.map { |c|
"\s*#{Regexp.escape(c)} "
}.join("|"))
START_MULTI_LINE_COMMENT = Regexp.compile(MULTI_LINE_COMMENTS.map { |c|
Regexp.escape(c[0])
}.join("|"))
# Internal: Extract generic tokens from data.
#
# data - String to scan.
#
# Examples
#
# extract_tokens("printf('Hello')")
# # => ['printf', '(', ')']
#
# Returns Array of token Strings.
def extract_tokens(data)
s = StringScanner.new(data)
tokens = []
until s.eos?
break if s.pos >= BYTE_LIMIT
if token = s.scan(/^#!.+$/)
if name = extract_shebang(token)
tokens << "SHEBANG#!#{name}"
end
# Single line comment
elsif s.beginning_of_line? && token = s.scan(START_SINGLE_LINE_COMMENT)
# tokens << token.strip
s.skip_until(/\n|\Z/)
# Multiline comments
elsif token = s.scan(START_MULTI_LINE_COMMENT)
# tokens << token
close_token = MULTI_LINE_COMMENTS.assoc(token)[1]
s.skip_until(Regexp.compile(Regexp.escape(close_token)))
# tokens << close_token
# Skip single or double quoted strings
elsif s.scan(/"/)
if s.peek(1) == "\""
s.getch
else
s.skip_until(/(?<!\\)"/)
end
elsif s.scan(/'/)
if s.peek(1) == "'"
s.getch
else
s.skip_until(/(?<!\\)'/)
end
# Skip number literals
elsif s.scan(/(0x\h(\h|\.)*|\d(\d|\.)*)([uU][lL]{0,2}|([eE][-+]\d*)?[fFlL]*)/)
# SGML style brackets
elsif token = s.scan(/<[^\s<>][^<>]*>/)
extract_sgml_tokens(token).each { |t| tokens << t }
# Common programming punctuation
elsif token = s.scan(/;|\{|\}|\(|\)|\[|\]/)
tokens << token
# Regular token
elsif token = s.scan(/[\w\.@#\/\*]+/)
tokens << token
# Common operators
elsif token = s.scan(/<<?|\+|\-|\*|\/|%|&&?|\|\|?/)
tokens << token
else
s.getch
end
end
tokens
end
# Internal: Extract normalized shebang command token.
#
# Examples
#
# extract_shebang("#!/usr/bin/ruby")
# # => "ruby"
#
# extract_shebang("#!/usr/bin/env node")
# # => "node"
#
# extract_shebang("#!/usr/bin/env A=B foo=bar awk -f")
# # => "awk"
#
# Returns String token or nil it couldn't be parsed.
def extract_shebang(data)
s = StringScanner.new(data)
if path = s.scan(/^#!\s*\S+/)
script = path.split('/').last
if script == 'env'
s.scan(/\s+/)
s.scan(/.*=[^\s]+\s+/)
script = s.scan(/\S+/)
end
script = script[/[^\d]+/, 0] if script
return script
end
nil
end
# Internal: Extract tokens from inside SGML tag.
#
# data - SGML tag String.
#
# Examples
#
# extract_sgml_tokens("<a href='' class=foo>")
# # => ["<a>", "href="]
#
# Returns Array of token Strings.
def extract_sgml_tokens(data)
s = StringScanner.new(data)
tokens = []
until s.eos?
# Emit start token
if token = s.scan(/<\/?[^\s>]+/)
tokens << "#{token}>"
# Emit attributes with trailing =
elsif token = s.scan(/\w+=/)
tokens << token
# Then skip over attribute value
if s.scan(/"/)
s.skip_until(/[^\\]"/)
elsif s.scan(/'/)
s.skip_until(/[^\\]'/)
else
s.skip_until(/\w+/)
end
# Emit lone attributes
elsif token = s.scan(/\w+/)
tokens << token
# Stop at the end of the tag
elsif s.scan(/>/)
s.terminate
else
s.getch
end
end
tokens
end
end
end
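The public entry point kept above still behaves as its doc comment describes; a minimal sketch (illustrative only, assuming the gem and the native extension loaded by require 'linguist/linguist' are built):

    require 'linguist/tokenizer'

    Linguist::Tokenizer.tokenize("printf('Hello')")
    # => ["printf", "(", ")"]  (string contents and comments are skipped)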


@@ -19,9 +19,7 @@
- (^|/)dist/
# C deps
# https://github.com/joyent/node
- ^deps/
- ^tools/
- (^|/)configure$
- (^|/)config.guess$
- (^|/)config.sub$
@@ -65,6 +63,7 @@
# Font Awesome
- (^|/)font-awesome\.(css|less|scss|styl)$
- (^|/)font-awesome/.*\.(css|less|scss|styl)$
# Foundation css
- (^|/)foundation\.(css|less|scss|styl)$
@@ -81,6 +80,9 @@
# Animate.css
- (^|/)animate\.(css|less|scss|styl)$
# Select2
- (^|/)select2/.*\.(css|scss|js)$
# Vendored dependencies
- third[-_]?party/
- 3rd[-_]?party/
@@ -119,6 +121,15 @@
# jQuery File Upload
- (^|/)jquery\.fileupload(-\w+)?\.js$
# jQuery dataTables
- jquery.dataTables.js
# bootboxjs
- bootbox.js
# pdf-worker
- pdf.worker.js
# Slick
- (^|/)slick\.\w+.js$
@@ -135,6 +146,9 @@
- .sublime-project
- .sublime-workspace
# VS Code workspace files
- .vscode
# Prototype
- (^|/)prototype(.*)\.js$
- (^|/)effects\.js$
@@ -227,10 +241,7 @@
- \.imageset/
# Carthage
- ^Carthage/
# Cocoapods
- ^Pods/
- (^|/)Carthage/
# Sparkle
- (^|/)Sparkle/
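As a quick illustration of the Carthage change (not part of the diff; plain Ruby regex checks with a hypothetical path): the old anchored pattern only matched a Carthage directory at the repository root, while the new one matches it at any depth, which is what monorepos need.

    old_pattern = %r{^Carthage/}
    new_pattern = %r{(^|/)Carthage/}

    path = 'MyApp/Submodule/Carthage/Build/iOS/Alamofire.framework'
    old_pattern.match?(path)  # => false
    new_pattern.match?(path)  # => true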


@@ -1,3 +1,3 @@
module Linguist
VERSION = "5.2.0"
VERSION = "5.3.3"
end


@@ -1,7 +0,0 @@
{
"repository": "https://github.com/github/linguist",
"dependencies": {
"season": "~>5.4"
},
"license": "MIT"
}


@@ -0,0 +1,35 @@
// A sample for Actionscript.
package foobar
{
import flash.display.MovieClip;
class Bar
{
public function getNumber():Number
{
return 10;
}
}
class Foo extends Bar
{
private var ourNumber:Number = 25;
override public function getNumber():Number
{
return ourNumber;
}
}
class Main extends MovieClip
{
public function Main()
{
var x:Bar = new Bar();
var y:Foo = new Foo();
trace(x.getNumber());
trace(y.getNumber());
}
}
}


@@ -0,0 +1,13 @@
package mypackage
{
public class Hello
{
/* Let's say hello!
* This is just a test script for Linguist's Actionscript detection.
*/
public function sayHello():void
{
trace("Hello, world");
}
}
}


@@ -0,0 +1,77 @@
/*
* This is a sample script.
*/
#include "BotManagerInterface.acs"
BotManager::BotManager g_BotManager( @CreateDumbBot );
CConCommand@ m_pAddBot;
void PluginInit()
{
g_BotManager.PluginInit();
@m_pAddBot = @CConCommand( "addbot", "Adds a new bot with the given name", @AddBotCallback );
}
void AddBotCallback( const CCommand@ args )
{
if( args.ArgC() < 2 )
{
g_Game.AlertMessage( at_console, "Usage: addbot <name>" );
return;
}
BotManager::BaseBot@ pBot = g_BotManager.CreateBot( args[ 1 ] );
if( pBot !is null )
{
g_Game.AlertMessage( at_console, "Created bot " + args[ 1 ] + "\n" );
}
else
{
g_Game.AlertMessage( at_console, "Could not create bot\n" );
}
}
final class DumbBot : BotManager::BaseBot
{
DumbBot( CBasePlayer@ pPlayer )
{
super( pPlayer );
}
void Think()
{
BotManager::BaseBot::Think();
// If the bot is dead and can be respawned, send a button press
if( Player.pev.deadflag >= DEAD_RESPAWNABLE )
{
Player.pev.button |= IN_ATTACK;
}
else
Player.pev.button &= ~IN_ATTACK;
KeyValueBuffer@ pInfoBuffer = g_EngineFuncs.GetInfoKeyBuffer( Player.edict() );
pInfoBuffer.SetValue( "topcolor", Math.RandomLong( 0, 255 ) );
pInfoBuffer.SetValue( "bottomcolor", Math.RandomLong( 0, 255 ) );
if( Math.RandomLong( 0, 100 ) > 10 )
Player.pev.button |= IN_ATTACK;
else
Player.pev.button &= ~IN_ATTACK;
for( uint uiIndex = 0; uiIndex < 3; ++uiIndex )
{
m_vecVelocity[ uiIndex ] = Math.RandomLong( -50, 50 );
}
}
}
BotManager::BaseBot@ CreateDumbBot( CBasePlayer@ pPlayer )
{
return @DumbBot( pPlayer );
}


@@ -0,0 +1,396 @@
// Sample script.
// Source: https://github.com/codecat/ssbd-payload
array<WorldScript::PayloadBeginTrigger@> g_payloadBeginTriggers;
array<WorldScript::PayloadTeamForcefield@> g_teamForceFields;
[GameMode]
class Payload : TeamVersusGameMode
{
[Editable]
UnitFeed PayloadUnit;
[Editable]
UnitFeed FirstNode;
[Editable default=10]
int PrepareTime;
[Editable default=300]
int TimeLimit;
[Editable default=90]
int TimeAddCheckpoint;
[Editable default=2]
float TimeOvertime;
[Editable default=1000]
int TimePayloadHeal;
[Editable default=1]
int PayloadHeal;
PayloadBehavior@ m_payload;
int m_tmStarting;
int m_tmStarted;
int m_tmLimitCustom;
int m_tmOvertime;
int m_tmInOvertime;
PayloadHUD@ m_payloadHUD;
PayloadClassSwitchWindow@ m_switchClass;
array<SValue@>@ m_switchedSidesData;
Payload(Scene@ scene)
{
super(scene);
m_tmRespawnCountdown = 5000;
@m_payloadHUD = PayloadHUD(m_guiBuilder);
@m_switchTeam = PayloadTeamSwitchWindow(m_guiBuilder);
@m_switchClass = PayloadClassSwitchWindow(m_guiBuilder);
}
void UpdateFrame(int ms, GameInput& gameInput, MenuInput& menuInput) override
{
TeamVersusGameMode::UpdateFrame(ms, gameInput, menuInput);
m_payloadHUD.Update(ms);
if (Network::IsServer())
{
uint64 tmNow = CurrPlaytimeLevel();
if (m_tmStarting == 0)
{
if (GetPlayersInTeam(0) > 0 && GetPlayersInTeam(1) > 0)
{
m_tmStarting = tmNow;
(Network::Message("GameStarting") << m_tmStarting).SendToAll();
}
}
if (m_tmStarting > 0 && m_tmStarted == 0 && tmNow - m_tmStarting > PrepareTime * 1000)
{
m_tmStarted = tmNow;
(Network::Message("GameStarted") << m_tmStarted).SendToAll();
for (uint i = 0; i < g_payloadBeginTriggers.length(); i++)
{
WorldScript@ ws = WorldScript::GetWorldScript(g_scene, g_payloadBeginTriggers[i]);
ws.Execute();
}
}
}
if (!m_ended && m_tmStarted > 0)
CheckTimeReached(ms);
}
string NameForTeam(int index) override
{
if (index == 0)
return "Defenders";
else if (index == 1)
return "Attackers";
return "Unknown";
}
void CheckTimeReached(int dt)
{
// Check if time limit is not reached yet
if (m_tmLimitCustom - (CurrPlaytimeLevel() - m_tmStarted) > 0)
{
// Don't need to continue checking
m_tmOvertime = 0;
m_tmInOvertime = 0;
return;
}
// Count how long we're in overtime for later time limit fixing when we reach a checkpoint
if (m_tmOvertime > 0)
m_tmInOvertime += dt;
// Check if there are any attackers still inside
if (m_payload.AttackersInside() > 0)
{
// We have overtime
m_tmOvertime = int(TimeOvertime * 1000);
return;
}
// If we have overtime
if (m_tmOvertime > 0)
{
// Decrease timer
m_tmOvertime -= dt;
if (m_tmOvertime <= 0)
{
// Overtime countdown reached, time limit reached
TimeReached();
}
}
else
{
// No overtime, so time limit is reached
TimeReached();
}
}
void TimeReached()
{
if (!Network::IsServer())
return;
(Network::Message("TimeReached")).SendToAll();
SetWinner(false);
}
bool ShouldFreezeControls() override
{
return m_switchClass.m_visible
|| TeamVersusGameMode::ShouldFreezeControls();
}
bool ShouldDisplayCursor() override
{
return m_switchClass.m_visible
|| TeamVersusGameMode::ShouldDisplayCursor();
}
bool CanSwitchTeams() override
{
return m_tmStarted == 0;
}
PlayerRecord@ CreatePlayerRecord() override
{
return PayloadPlayerRecord();
}
int GetPlayerClassCount(PlayerClass playerClass, TeamVersusScore@ team)
{
if (team is null)
return 0;
int ret = 0;
for (uint i = 0; i < team.m_players.length(); i++)
{
if (team.m_players[i].peer == 255)
continue;
auto record = cast<PayloadPlayerRecord>(team.m_players[i]);
if (record.playerClass == playerClass)
ret++;
}
return ret;
}
void PlayerClassesUpdated()
{
m_switchClass.PlayerClassesUpdated();
}
void SetWinner(bool attackers)
{
if (attackers)
print("Attackers win!");
else
print("Defenders win!");
m_payloadHUD.Winner(attackers);
EndMatch();
}
void DisplayPlayerName(int idt, SpriteBatch& sb, PlayerRecord@ record, PlayerHusk@ plr, vec2 pos) override
{
TeamVersusGameMode::DisplayPlayerName(idt, sb, record, plr, pos);
m_payloadHUD.DisplayPlayerName(idt, sb, cast<PayloadPlayerRecord>(record), plr, pos);
}
void RenderFrame(int idt, SpriteBatch& sb) override
{
Player@ player = GetLocalPlayer();
if (player !is null)
{
PlayerHealgun@ healgun = cast<PlayerHealgun>(player.m_currWeapon);
if (healgun !is null)
healgun.RenderMarkers(idt, sb);
}
TeamVersusGameMode::RenderFrame(idt, sb);
}
void RenderWidgets(PlayerRecord@ player, int idt, SpriteBatch& sb) override
{
m_payloadHUD.Draw(sb, idt);
TeamVersusGameMode::RenderWidgets(player, idt, sb);
m_switchClass.Draw(sb, idt);
}
void GoNextMap() override
{
if (m_switchedSidesData !is null)
{
TeamVersusGameMode::GoNextMap();
return;
}
ChangeLevel(GetCurrentLevelFilename());
}
void SpawnPlayers() override
{
if (m_switchedSidesData is null)
{
TeamVersusGameMode::SpawnPlayers();
return;
}
if (Network::IsServer())
{
for (uint i = 0; i < m_switchedSidesData.length(); i += 2)
{
uint peer = uint(m_switchedSidesData[i].GetInteger());
uint team = uint(m_switchedSidesData[i + 1].GetInteger());
TeamVersusScore@ joinScore = FindTeamScore(team);
if (joinScore is m_teamScores[0])
@joinScore = m_teamScores[1];
else
@joinScore = m_teamScores[0];
for (uint j = 0; j < g_players.length(); j++)
{
if (g_players[j].peer != peer)
continue;
SpawnPlayer(j, vec2(), 0, joinScore.m_team);
break;
}
}
}
}
void Save(SValueBuilder& builder) override
{
if (m_switchedSidesData is null)
{
builder.PushArray("teams");
for (uint i = 0; i < g_players.length(); i++)
{
if (g_players[i].peer == 255)
continue;
builder.PushInteger(g_players[i].peer);
builder.PushInteger(g_players[i].team);
}
builder.PopArray();
}
TeamVersusGameMode::Save(builder);
}
void Start(uint8 peer, SValue@ save, StartMode sMode) override
{
if (save !is null)
@m_switchedSidesData = GetParamArray(UnitPtr(), save, "teams", false);
TeamVersusGameMode::Start(peer, save, sMode);
m_tmLimit = 0; // infinite time limit as far as VersusGameMode is concerned
m_tmLimitCustom = TimeLimit * 1000; // 5 minutes by default
@m_payload = cast<PayloadBehavior>(PayloadUnit.FetchFirst().GetScriptBehavior());
if (m_payload is null)
PrintError("PayloadUnit is not a PayloadBehavior!");
UnitPtr unitFirstNode = FirstNode.FetchFirst();
if (unitFirstNode.IsValid())
{
auto node = cast<WorldScript::PayloadNode>(unitFirstNode.GetScriptBehavior());
if (node !is null)
@m_payload.m_targetNode = node;
else
PrintError("First target node is not a PayloadNode script!");
}
else
PrintError("First target node was not set!");
WorldScript::PayloadNode@ prevNode;
float totalDistance = 0.0f;
UnitPtr unitNode = unitFirstNode;
while (unitNode.IsValid())
{
auto node = cast<WorldScript::PayloadNode>(unitNode.GetScriptBehavior());
if (node is null)
break;
unitNode = node.NextNode.FetchFirst();
@node.m_prevNode = prevNode;
@node.m_nextNode = cast<WorldScript::PayloadNode>(unitNode.GetScriptBehavior());
if (prevNode !is null)
totalDistance += dist(prevNode.Position, node.Position);
@prevNode = node;
}
float currDistance = 0.0f;
auto distNode = cast<WorldScript::PayloadNode>(unitFirstNode.GetScriptBehavior());
while (distNode !is null)
{
if (distNode.m_prevNode is null)
distNode.m_locationFactor = 0.0f;
else
{
currDistance += dist(distNode.m_prevNode.Position, distNode.Position);
distNode.m_locationFactor = currDistance / totalDistance;
}
@distNode = distNode.m_nextNode;
}
m_payloadHUD.AddCheckpoints();
}
void SpawnPlayer(int i, vec2 pos = vec2(), int unitId = 0, uint team = 0) override
{
TeamVersusGameMode::SpawnPlayer(i, pos, unitId, team);
PayloadPlayerRecord@ record = cast<PayloadPlayerRecord>(g_players[i]);
record.HandlePlayerClass();
if (g_players[i].local)
{
//TODO: This doesn't work well
bool localAttackers = (team == HashString("player_1"));
for (uint j = 0; j < g_teamForceFields.length(); j++)
{
bool hasCollision = (localAttackers != g_teamForceFields[j].Attackers);
auto units = g_teamForceFields[j].Units.FetchAll();
for (uint k = 0; k < units.length(); k++)
{
PhysicsBody@ body = units[k].GetPhysicsBody();
if (body is null)
{
PrintError("PhysicsBody for unit " + units[k].GetDebugName() + "is null");
continue;
}
body.SetActive(hasCollision);
}
}
}
}
}


@@ -0,0 +1,16 @@
import ballerina.lang.messages;
import ballerina.net.http;
import ballerina.doc;
@doc:Description {value:"By default Ballerina assumes that the service is to be exposed via HTTP/1.1 using the system default port and that all requests coming to the HTTP server will be delivered to this service."}
service<http> helloWorld {
@doc:Description {value:"All resources are invoked with an argument of type message, the built-in reference type representing a network invocation."}
resource sayHello (message m) {
// Creates an empty message.
message response = {};
// A util method that can be used to set string payload.
messages:setStringPayload(response, "Hello, World!");
// Reply keyword sends the response back to the client.
reply response;
}
}


@@ -0,0 +1,6 @@
import ballerina.lang.system;
function main (string[] args) {
system:println("Hello, World!");
}


@@ -0,0 +1,31 @@
import ballerina.lang.system;
function main (string[] args) {
// JSON string value.
json j1 = "Apple";
system:println(j1);
// JSON number value.
json j2 = 5.36;
system:println(j2);
// JSON true value.
json j3 = true;
system:println(j3);
// JSON false value.
json j4 = false;
system:println(j4);
// JSON null value.
json j5 = null;
//JSON Objects.
json j6 = {name:"apple", color:"red", price:j2};
system:println(j6);
//JSON Arrays. They are arrays of any JSON value.
json j7 = [1, false, null, "foo",
{first:"John", last:"Pala"}];
system:println(j7);
}

samples/Ballerina/var.bal (new file, 28 lines)

@@ -0,0 +1,28 @@
import ballerina.lang.system;
function divideBy10 (int d) (int, int) {
return d / 10, d % 10;
}
function main (string[] args) {
//Here the variable type is inferred from the initial value. This is the same as "int k = 5";
var k = 5;
system:println(10 + k);
//Here the type of the 'strVar' is 'string'.
var strVar = "Hello!";
system:println(strVar);
//Multiple assignment with 'var' allows you to define the variable then and there.
//Variable type is inferred from the right-hand side.
var q, r = divideBy10(6);
system:println("06/10: " + "quotient=" + q + " " +
"remainder=" + r);
//To ignore a particular return value in a multiple assignment statement, use '_'.
var q1, _ = divideBy10(57);
system:println("57/10: " + "quotient=" + q1);
var _, r1 = divideBy10(9);
system:println("09/10: " + "remainder=" + r1);
}

samples/Ballerina/xml.bal (new file, 26 lines)

@@ -0,0 +1,26 @@
import ballerina.lang.system;
function main (string[] args) {
// XML element. Can only have one root element.
xml x1 = xml `<book>The Lost World</book>`;
system:println(x1);
// XML text
xml x2 = xml `Hello, world!`;
system:println(x2);
// XML comment
xml x3 = xml `<!--I am a comment-->`;
system:println(x3);
// XML processing instruction
xml x4 = xml `<?target data?>`;
system:println(x4);
// Multiple XML items can be combined to form a sequence of XML. The resulting sequence is itself an XML value.
xml x5 = x1 + x2 + x3 + x4;
system:println("\nResulting XML sequence:");
system:println(x5);
}


@@ -0,0 +1,36 @@
#!/usr/bin/env cwl-runner
# Originally from
# https://github.com/Duke-GCB/GGR-cwl/blob/54e897263a702ff1074c8ac814b4bf7205d140dd/utils/trunk-peak-score.cwl
# Released under the MIT License:
# https://github.com/Duke-GCB/GGR-cwl/blob/54e897263a702ff1074c8ac814b4bf7205d140dd/LICENSE
# Converted to CWL v1.0 syntax using
# https://github.com/common-workflow-language/cwl-upgrader
# and polished by Michael R. Crusoe <mrc@commonwl.org>
# All modifications also released under the MIT License
cwlVersion: v1.0
class: CommandLineTool
doc: Trunk scores in ENCODE bed6+4 files
hints:
DockerRequirement:
dockerPull: dukegcb/workflow-utils
inputs:
peaks:
type: File
sep:
type: string
default: \t
outputs:
trunked_scores_peaks:
type: stdout
baseCommand: awk
arguments:
- -F $(inputs.sep)
- BEGIN{OFS=FS}$5>1000{$5=1000}{print}
- $(inputs.peaks.path)
stdout: $(inputs.peaks.nameroot).trunked_scores$(inputs.peaks.nameext)


@@ -0,0 +1,12 @@
fun SQL(literals, parts) = ''
---
[
SQL `SELECT * FROM table WHERE id = $(1) AND name = $('a')`,
SQL `$('p')`,
SQL `$('a')$('b')`,
SQL `$('a')---$('b')`,
SQL `---$('a')---$('b')---`,
SQL `$('p')bbb`,
SQL `aaa$('p')`,
SQL `aaa$('p')bbb`
]


@@ -0,0 +1,9 @@
%dw 2.0
var number = 1234
fun foo(func,name="Mariano") = func(name)
input payload application/test arg="value"
output application/json
---
{
foo: "bar"
}


@@ -0,0 +1,27 @@
%dw 2.0
var x=(param1, param2) -> { "$param1": param2 }
var y=(param1, param2 = "c") -> { "$param1": param2 }
var toUser = (user) -> { name: user.name, lastName: user.lastName }
fun z(param1, param2) = { "$param1": param2 }
var a = { name: "Mariano" , toUser: ((param1, param2) -> { "$param1": param2 }) }
var applyFirst = (array, func) -> (func(array[0]) ++ array[1 to -1])
var nested = (array, func) -> (a) -> (b) -> (c) -> array map func(a ++ b ++ c)
fun f2(a1, a2) = ""
fun f3(a1:String, a2:Number):String = a1
fun f4(a1:String, a2:(a:Number) -> Number):String = a1
---
result: {
a: x("a", "b"),
b: y("a"),
c: y("a", "b"),
users: { (in1 map ((user) -> { user: (toUser(user) ++ user) })) },
d: z("a", "b"),
e: a.toUser("name","Mariano"),
f: a.toUser("name","Mariano").name,
f: applyFirst("mariano", (s) -> upper(s) ),
g: [] map (s) -> upper(s),
h: 1 f2 2
}


@@ -0,0 +1,36 @@
%dw 2.0
---
{
"boolean":{
"true" : true,
"false": false
},
"Number": {
"int": 123,
"decimal": 123.23
},
"string": {
"singleQuote" : 'A String',
"doubleQuote" : "A String"
},
"regex": /foo/,
"date": {
a: |2003-10-01|,
b: |2005-045|,
c: |2003-W14-3|,
d: |23:57:59|,
e: |23:57:30.700|,
f: |23:50:30Z|,
g: |+13:00|,
h: |Z|,
i: |-02:00|,
j: |2005-06-02T15:10:16|,
k: |2005-06-02T15:10:16Z|,
l: |2005-06-02T15:10:16+03:00|,
m: |P12Y7M11D|,
n: |P12Y5M|,
o: |P45DT9H20M8S|,
p: |PT9H20M8S|
}
}


@@ -0,0 +1,33 @@
{
// Regex Pattern Matching (Can be named or unnamed)
a: in0.phones map $ match {
case matches /\+(\d+)\s\((\d+)\)\s(\d+\-\d+)/ -> { country: $[0], area: $[1], number: $[2] }
case matches /\((\d+)\)\s(\d+\-\d+)/ -> { area: $[1], number: $[2] }
case phone matches /\((\d+)\)\s(\d+\-\d+)/ -> { area: phone[1], number: phone[2] }
},
// Type Pattern Matching (Can be named or unnamed)
b: in0.object match {
case is Object -> { object: $ }
case is Number -> { number: $ }
// This is how you name variables if needed
case y is Boolean -> { boolean: y }
},
// Literal Pattern Matching (Can be named or unnamed)
c: in0.value match {
case "Emiliano" -> { string: $ }
case 123 -> { number: $ }
// This is how you name variables if needed
case value: "Mariano" -> { name: value }
},
// Boolean Expression Pattern Matching (Always named)
d: in0.value match {
case x if x > 30 -> { biggerThan30: x }
case x if x == 9 -> { nine: x }
},
// Default matches
e: in0.value match {
case "Emiliano" -> "string"
case 3.14 -> "number"
else -> "1234"
}
}

File diff suppressed because it is too large.


@@ -0,0 +1,227 @@
[app]
# (str) Title of your application
title = Kivy Kazam
# (str) Package name
package.name = kivykazam
# (str) Package domain (needed for android/ios packaging)
package.domain = org.test
# (str) Source code where the main.py live
source.dir = .
# (list) Source files to include (leave empty to include all the files)
source.include_exts = py,png,jpg,kv,atlas
# (list) List of inclusions using pattern matching
#source.include_patterns = assets/*,images/*.png
# (list) Source files to exclude (leave empty to not exclude anything)
#source.exclude_exts = spec
# (list) List of directories to exclude (leave empty to not exclude anything)
#source.exclude_dirs = tests, bin
# (list) List of exclusions using pattern matching
#source.exclude_patterns = license,images/*/*.jpg
# (str) Application versioning (method 1)
version = 0.1
# (str) Application versioning (method 2)
# version.regex = __version__ = ['"](.*)['"]
# version.filename = %(source.dir)s/main.py
# (list) Application requirements
# comma separated e.g. requirements = sqlite3,kivy
requirements = kivy
# (str) Custom source folders for requirements
# Sets custom source for any requirements with recipes
# requirements.source.kivy = ../../kivy
# (list) Garden requirements
#garden_requirements =
# (str) Presplash of the application
#presplash.filename = %(source.dir)s/data/presplash.png
# (str) Icon of the application
#icon.filename = %(source.dir)s/data/icon.png
# (str) Supported orientation (one of landscape, portrait or all)
orientation = all
# (list) List of services to declare
#services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY
#
# OSX Specific
#
#
# author = © Copyright Info
#
# Android specific
#
# (bool) Indicate if the application should be fullscreen or not
fullscreen = 1
# (list) Permissions
#android.permissions = INTERNET
# (int) Android API to use
#android.api = 19
# (int) Minimum API required
android.minapi = 13
# (int) Android SDK version to use
#android.sdk = 20
# (str) Android NDK version to use
#android.ndk = 9c
# (bool) Use --private data storage (True) or --dir public storage (False)
#android.private_storage = True
# (str) Android NDK directory (if empty, it will be automatically downloaded.)
#android.ndk_path =
# (str) Android SDK directory (if empty, it will be automatically downloaded.)
#android.sdk_path =
# (str) ANT directory (if empty, it will be automatically downloaded.)
#android.ant_path =
# (str) python-for-android git clone directory (if empty, it will be automatically cloned from github)
#android.p4a_dir =
# (list) python-for-android whitelist
#android.p4a_whitelist =
# (bool) If True, then skip trying to update the Android sdk
# This can be useful to avoid excess Internet downloads or save time
# when an update is due and you just want to test/build your package
# android.skip_update = False
# (str) Android entry point, default is ok for Kivy-based app
#android.entrypoint = org.renpy.android.PythonActivity
# (list) List of Java .jar files to add to the libs so that pyjnius can access
# their classes. Don't add jars that you do not need, since extra jars can slow
# down the build process. Allows wildcard matching, for example:
# OUYA-ODK/libs/*.jar
#android.add_jars = foo.jar,bar.jar,path/to/more/*.jar
# (list) List of Java files to add to the android project (can be java or a
# directory containing the files)
#android.add_src =
# (str) python-for-android branch to use if not master; useful for trying
# not yet merged features.
#android.branch = master
# (str) OUYA Console category. Should be one of GAME or APP
# If you leave this blank, OUYA support will not be enabled
#android.ouya.category = GAME
# (str) Filename of OUYA Console icon. It must be a 732x412 png image.
#android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png
# (str) XML file to include as an intent filters in <activity> tag
#android.manifest.intent_filters =
# (list) Android additional libraries to copy into libs/armeabi
#android.add_libs_armeabi = libs/android/*.so
#android.add_libs_armeabi_v7a = libs/android-v7/*.so
#android.add_libs_x86 = libs/android-x86/*.so
#android.add_libs_mips = libs/android-mips/*.so
# (bool) Indicate whether the screen should stay on
# Don't forget to add the WAKE_LOCK permission if you set this to True
#android.wakelock = False
# (list) Android application meta-data to set (key=value format)
#android.meta_data =
# (list) Android library projects to add (they will be added to
# project.properties automatically.)
#android.library_references =
# (str) Android logcat filters to use
#android.logcat_filters = *:S python:D
# (bool) Copy library instead of making a libpymodules.so
#android.copy_libs = 1
#
# iOS specific
#
# (str) Path to a custom kivy-ios folder
#ios.kivy_ios_dir = ../kivy-ios
# (str) Name of the certificate to use for signing the debug version
# Get a list of available identities: buildozer ios list_identities
#ios.codesign.debug = "iPhone Developer: <lastname> <firstname> (<hexstring>)"
# (str) Name of the certificate to use for signing the release version
#ios.codesign.release = %(ios.codesign.debug)s
[buildozer]
# (int) Log level (0 = error only, 1 = info, 2 = debug (with command output))
log_level = 1
# (int) Display warning if buildozer is run as root (0 = False, 1 = True)
warn_on_root = 1
# (str) Path to build artifact storage, absolute or relative to spec file
# build_dir = ./.buildozer
# (str) Path to build output (i.e. .apk, .ipa) storage
# bin_dir = ./bin
# -----------------------------------------------------------------------------
# List as sections
#
# You can define all the "list" as [section:key].
# Each line will be considered as an option to the list.
# Let's take [app] / source.exclude_patterns.
# Instead of doing:
#
#[app]
#source.exclude_patterns = license,data/audio/*.wav,data/images/original/*
#
# This can be translated into:
#
#[app:source.exclude_patterns]
#license
#data/audio/*.wav
#data/images/original/*
#
# -----------------------------------------------------------------------------
# Profiles
#
# You can extend section / key with a profile
# For example, you want to deploy a demo version of your application without
# HD content. You could first change the title to add "(demo)" in the name
# and extend the excluded directories to remove the HD content.
#
#[app@demo]
#title = My Application (demo)
#
#[app:source.exclude_patterns@demo]
#images/hd/*
#
# Then, invoke the command line with the "demo" profile:
#
#buildozer --profile demo android debug


@@ -0,0 +1,955 @@
// consumes <stdin> and performs constant folding
// echo '"use strict";"_"[0],1+2;' | node constant_fold.js
import _NodePath from '../NodePath';
const {NodePath} = _NodePath;
import _WalkCombinator from '../WalkCombinator';
const {WalkCombinator} = _WalkCombinator;
const $CONSTEXPR = Symbol.for('$CONSTEXPR');
const $CONSTVALUE = Symbol.for('$CONSTVALUE');
const IS_EMPTY = path => {
return (path.node.type === 'BlockStatement' && path.node.body.length === 0) ||
path.node.type === 'EmptyStatement';
};
const IN_PRAGMA_POS = path => {
if (path.parent && Array.isArray(path.parent.node)) {
const siblings = path.parent.node;
for (let i = 0; i < path.key; i++) {
// preceded by non-pragma
if (
siblings[i].type !== 'ExpressionStatement' ||
!IS_CONSTEXPR(siblings[i].expression) ||
typeof CONSTVALUE(siblings[i].expression) !== 'string'
) {
return false;
}
}
}
return true;
};
const IS_PRAGMA = path => {
if (path.parent && Array.isArray(path.parent.node)) {
const siblings = path.parent.node;
for (let i = 0; i < path.key + 1; i++) {
// preceded by non-pragma
if (
siblings[i].type !== 'ExpressionStatement' ||
!IS_CONSTEXPR(siblings[i].expression) ||
typeof CONSTVALUE(siblings[i].expression) !== 'string'
) {
return false;
}
}
}
return true;
};
// worst case is the completion value
const IS_NOT_COMPLETION = path => {
while (true) {
if (!path.parent) {
return true;
}
if (
Array.isArray(path.parent.node) &&
path.key !== path.parent.node.length - 1
) {
return true;
}
path = path.parent;
while (Array.isArray(path.node)) {
path = path.parent;
}
if (/Function/.test(path.node.type)) {
return true;
} else if (path.node.type === 'Program') {
return false;
}
}
};
const REMOVE_IF_EMPTY = path => {
if (IS_EMPTY(path)) REMOVE(path);
return null;
};
const REPLACE_IF_EMPTY = (path, folded) => {
if (IS_EMPTY(path)) return REPLACE(path, folded);
return path;
};
const REMOVE = path => {
if (Array.isArray(path.parent.node)) {
path.parent.node.splice(path.key, 1);
} else {
path.parent.node[path.key] = null;
}
return null;
};
const REPLACE = (path, folded) => {
const replacement = new NodePath(path.parent, folded, path.key);
path.parent.node[path.key] = folded;
return replacement;
};
// no mutation, this is an atomic value
const NEG_ZERO = Object.freeze({
[$CONSTEXPR]: true,
type: 'UnaryExpression',
operator: '-',
argument: Object.freeze({
[$CONSTEXPR]: true,
type: 'Literal',
value: 0,
}),
});
const INFINITY = Object.freeze({
[$CONSTEXPR]: true,
type: 'BinaryExpression',
operator: '/',
left: Object.freeze({
[$CONSTEXPR]: true,
type: 'Literal',
value: 1,
}),
right: Object.freeze({
[$CONSTEXPR]: true,
type: 'Literal',
value: 0,
}),
});
const NEG_INFINITY = Object.freeze({
[$CONSTEXPR]: true,
type: 'BinaryExpression',
operator: '/',
left: Object.freeze({
[$CONSTEXPR]: true,
type: 'Literal',
value: 1,
}),
right: NEG_ZERO,
});
const EMPTY = Object.freeze({
[$CONSTEXPR]: true,
type: 'EmptyStatement',
});
const NULL = Object.freeze({
[$CONSTEXPR]: true,
type: 'Literal',
value: null,
});
const NAN = Object.freeze({
[$CONSTEXPR]: true,
type: 'BinaryExpression',
operator: '/',
left: Object.freeze({
[$CONSTEXPR]: true,
type: 'Literal',
value: 0,
}),
right: Object.freeze({
[$CONSTEXPR]: true,
type: 'Literal',
value: 0,
}),
});
const UNDEFINED = Object.freeze({
[$CONSTEXPR]: true,
type: 'UnaryExpression',
operator: 'void',
argument: Object.freeze({
[$CONSTEXPR]: true,
type: 'Literal',
value: 0,
}),
});
// ESTree doesn't like negative numeric literals
// this also preserves -0
const IS_UNARY_NEGATIVE = node => {
if (
node.type === 'UnaryExpression' &&
node.operator === '-' &&
typeof node.argument.value === 'number' &&
node.argument.value === node.argument.value &&
node.argument.type === 'Literal'
) {
return true;
}
return false;
};
const IS_CONSTEXPR = node => {
if (typeof node !== 'object' || node === null) {
return false;
}
// DONT CALCULATE THINGS MULTIPLE TIMES!!@!@#
if (node[$CONSTEXPR]) return true;
if (node.type === 'ArrayExpression') {
for (let i = 0; i < node.elements.length; i++) {
const element = node.elements[i];
// hole == null
if (element !== null && !IS_CONSTEXPR(element)) {
return false;
}
}
return true;
}
if (node.type === 'ObjectExpression') {
for (let i = 0; i < node.properties.length; i++) {
const element = node.properties[i];
if (element.kind !== 'init') return false;
if (element.method) return false;
let key;
if (element.computed) {
// be sure {["y"]:1} works
if (!IS_CONSTEXPR(element.key)) {
return false;
}
}
if (!IS_CONSTEXPR(element.value)) return false;
}
return true;
}
if (node.type === 'Literal' || IS_UNDEFINED(node) || IS_NAN(node)) {
return true;
}
if (IS_UNARY_NEGATIVE(node)) {
return true;
}
return false;
};
const IS_NAN = node => {
return node === NAN;
};
const IS_UNDEFINED = node => {
return node === UNDEFINED;
};
const CONSTVALUE = node => {
if (node[$CONSTVALUE]) {
return node[$CONSTVALUE];
}
if (IS_UNDEFINED(node)) return void 0;
if (IS_NAN(node)) return +'_';
if (!IS_CONSTEXPR(node)) throw new Error('Not a CONSTEXPR');
if (node.type === 'ArrayExpression') {
let ret = [];
ret.length = node.elements.length;
for (let i = 0; i < node.elements.length; i++) {
if (node.elements[i] !== null) {
ret[i] = CONSTVALUE(node.elements[i]);
}
}
return ret;
}
if (node.type === 'ObjectExpression') {
let ret = Object.create(null);
for (let i = 0; i < node.properties.length; i++) {
const element = node.properties[i];
let key;
if (element.computed) {
key = `${CONSTVALUE(element.key)}`;
}
else {
key = element.key.name;
}
Object.defineProperty(ret, key, {
// duplicate keys...
configurable: true,
writable: true,
value: CONSTVALUE(element.value),
enumerable: true
});
}
Object.freeze(ret);
return ret;
}
if (IS_UNARY_NEGATIVE(node)) {
return -node.argument.value;
}
if (node.regex !== void 0) {
return new RegExp(node.regex.pattern, node.regex.flags);
}
return node.value;
};
const CONSTEXPRS = new Map();
CONSTEXPRS.set(void 0, UNDEFINED);
CONSTEXPRS.set(+'_', NAN);
CONSTEXPRS.set(null, NULL);
const TO_CONSTEXPR = value => {
if (value === -Infinity) {
return NEG_INFINITY;
}
if (value === Infinity) {
return INFINITY;
}
let is_neg_zero = 1 / value === -Infinity;
if (is_neg_zero) return NEG_ZERO;
if (CONSTEXPRS.has(value)) {
return CONSTEXPRS.get(value);
}
if (typeof value === 'number') {
if (value < 0) {
const CONSTEXPR = Object.freeze({
[$CONSTEXPR]: true,
[$CONSTVALUE]: value,
type: 'UnaryExpression',
operator: '-',
argument: Object.freeze({ type: 'Literal', value: -value }),
});
CONSTEXPRS.set(value, CONSTEXPR);
return CONSTEXPR;
}
}
if (
value === null ||
typeof value === 'number' ||
typeof value === 'boolean' ||
typeof value === 'string'
) {
const CONSTEXPR = Object.freeze({
[$CONSTEXPR]: true,
[$CONSTVALUE]: value,
type: 'Literal',
value,
});
CONSTEXPRS.set(value, CONSTEXPR);
return CONSTEXPR;
}
// have to generate new one every time :-/
if (Array.isArray(value)) {
return Object.freeze({
[$CONSTEXPR]: true,
type: 'ArrayExpression',
elements: Object.freeze(value.map(TO_CONSTEXPR)),
});
}
if (typeof value === 'object' && Object.getPrototypeOf(value) === Object.getPrototypeOf({}) && [...Object.getOwnPropertySymbols(value)].length === 0) {
return Object.freeze({
[$CONSTEXPR]: true,
type: 'ObjectExpression',
properties: Object.freeze(
[...Object.getOwnPropertyNames(value)].map(key => {
if (!('value' in Object.getOwnPropertyDescriptor(value, key))) {
throw Error('Not a CONSTVALUE (found a setter or getter?)');
}
return {
type: 'Property',
kind: 'init',
method: false,
shorthand: false,
computed: true,
key: {
type: 'Literal',
value: key
},
value: TO_CONSTEXPR(value[key])
}
})),
});
}
throw Error('Not a CONSTVALUE (did you pass a RegExp?)');
};
// THIS DOES NOT HANDLE NODE SPECIFIC CASES LIKE IfStatement
const FOLD_EMPTY = function*(path) {
if (
path &&
path.node &&
path.parent &&
Array.isArray(path.parent.node) &&
IS_EMPTY(path)
) {
REMOVE(path);
return yield;
}
return yield path;
};
// THIS DOES NOT HANDLE NODE SPECIFIC CASES LIKE IfStatement
const FOLD_TEMPLATE = function*(path) {
if (
path &&
path.node &&
path.type === 'TemplateLiteral'
) {
let updated = false;
for (let i = 0; i < path.node.expressions.length; i++) {
if (IS_CONSTEXPR(path.node.expressions[i])) {
//let
}
}
}
return yield path;
};
const FOLD_EXPR_STMT = function*(path) {
// TODO: enforce completion value checking
if (path && path.node && path.node.type === 'ExpressionStatement') {
// merge all the adjacent expression statements into sequences
if (Array.isArray(path.parent.node)) {
// could have nodes after it
const siblings = path.parent.node;
if (!IS_PRAGMA(path)) {
if (path.key < siblings.length - 1) {
const mergeable = [path.node];
for (let needle = path.key + 1; needle < siblings.length; needle++) {
if (siblings[needle].type !== 'ExpressionStatement') {
break;
}
mergeable.push(siblings[needle]);
}
if (mergeable.length > 1) {
siblings.splice(path.key, mergeable.length, {
type: 'ExpressionStatement',
expression: {
type: 'SequenceExpression',
expressions: mergeable.reduce(
(acc, es) => {
if (es.expression.type == 'SequenceExpression') {
return [...acc, ...es.expression.expressions];
} else {
return [...acc, es.expression];
}
},
[]
),
},
});
return path;
}
}
}
}
if (IS_NOT_COMPLETION(path) && IS_CONSTEXPR(path.node.expression)) {
return REPLACE(path, EMPTY);
}
}
return yield path;
};
const FOLD_WHILE = function*(path) {
if (path && path.node) {
if (path.node.type === 'DoWhileStatement') {
console.error('FOLD_DOWHILE');
REPLACE_IF_EMPTY(path.get(['body']), EMPTY);
}
if (path.node.type === 'WhileStatement') {
console.error('FOLD_WHILE');
let { test, consequent, alternate } = path.node;
if (IS_CONSTEXPR(test)) {
test = CONSTVALUE(test);
if (!test) {
return REPLACE(path, EMPTY);
}
}
REPLACE_IF_EMPTY(path.get(['body']), EMPTY);
}
if (path.node.type === 'ForStatement') {
console.error('FOLD_FOR');
REPLACE_IF_EMPTY(path.get(['body']), EMPTY);
let { init, test, update } = path.node;
let updated = false;
if (init && IS_CONSTEXPR(init)) {
updated = true;
REPLACE(path.get(['init']), null);
}
if (test && IS_CONSTEXPR(test)) {
let current = CONSTVALUE(test);
let coerced = Boolean(current);
// remove the test if it is always true
if (coerced === true) {
updated = true;
REPLACE(path.get(['test']), null);
} else if (coerced !== current) {
updated = true;
REPLACE(path.get(['test']), TO_CONSTEXPR(coerced));
}
}
if (update && IS_CONSTEXPR(update)) {
updated = true;
REPLACE(path.get(['update']), null);
}
if (updated) {
return path;
}
}
}
return yield path;
};
const FOLD_IF = function*(path) {
if (path && path.node && path.node.type === 'IfStatement') {
let { test, consequent, alternate } = path.node;
const is_not_completion = IS_NOT_COMPLETION(path);
if (is_not_completion && !alternate) {
if (IS_EMPTY(path.get(['consequent']))) {
console.error('FOLD_IF_EMPTY_CONSEQUENT');
REPLACE(path, {
type: 'ExpressionStatement',
expression: test,
});
return path.parent;
}
}
if (alternate) {
if (alternate.type === consequent.type) {
if (consequent.type === 'ExpressionStatement') {
console.error('FOLD_IF_BOTH_EXPRSTMT');
REPLACE(path, {
type: 'ExpressionStatement', expression:
{
type: 'ConditionalExpression',
test: test,
consequent: consequent.expression,
alternate: alternate.expression,
}});
return path.parent;
}
else if (consequent.type === 'ReturnStatement' ||
consequent.type === 'ThrowStatement') {
console.error('FOLD_IF_BOTH_COMPLETIONS');
REPLACE(path, {
type: 'ExpressionStatement', expression:{
type: consequent.type,
argument: {
type: 'ConditionalExpression',
test: test,
consequent: consequent.argument,
alternate: alternate.argument,
}}
});
return path.parent;
}
}
}
else if (is_not_completion && consequent.type === 'ExpressionStatement') {
console.error('FOLD_IF_NON_COMPLETION_TO_&&');
REPLACE(path, {
type: 'ExpressionStatement',
expression: {
type: 'BinaryExpression',
operator: '&&',
left: test,
right: consequent.expression,
}
});
return path.parent;
}
if (IS_CONSTEXPR(test)) {
test = CONSTVALUE(test);
if (test) {
return REPLACE(path, consequent);
}
if (alternate) {
return REPLACE(path, alternate);
}
return REPLACE(path, EMPTY);
}
consequent = path.get(['consequent']);
let updated;
if (consequent.node !== EMPTY) {
REPLACE_IF_EMPTY(consequent, EMPTY);
if (consequent.parent.node[consequent.key] === EMPTY) {
updated = true;
}
}
if (alternate) {
alternate = path.get(['alternate']);
REMOVE_IF_EMPTY(alternate);
if (path.node.alternate === null) {
updated = true;
}
}
if (updated) {
return path;
}
}
return yield path;
};
const FOLD_SEQUENCE = function*(path) {
if (path && path.node && path.node.type === 'SequenceExpression') {
console.error('FOLD_SEQUENCE');
// never delete the last value
for (let i = 0; i < path.node.expressions.length - 1; i++) {
if (IS_CONSTEXPR(path.node.expressions[i])) {
path.node.expressions.splice(i, 1);
i--;
}
}
if (path.node.expressions.length === 1) {
return REPLACE(path, path.node.expressions[0]);
}
}
return yield path;
};
const FOLD_LOGICAL = function*(path) {
if (path && path.node && path.node.type === 'LogicalExpression') {
console.error('FOLD_LOGICAL');
let { left, right, operator } = path.node;
if (IS_CONSTEXPR(left)) {
left = CONSTVALUE(left);
if (operator === '||') {
if (left) {
return REPLACE(path, TO_CONSTEXPR(left));
}
return REPLACE(path, right);
} else if (operator === '&&') {
if (!left) {
return REPLACE(path, TO_CONSTEXPR(left));
}
return REPLACE(path, right);
}
}
}
return yield path;
};
const FOLD_SWITCH = function*(path) {
if (path && path.node && path.node.type === 'SwitchStatement') {
let { discriminant, cases } = path.node;
// if there are no cases, just become an expression
if (cases.length === 0 && IS_NOT_COMPLETION(path)) {
return REPLACE(path, {
type: 'ExpressionStatement',
expression: discriminant
});
}
// if the discriminant is static
// remove any preceding non-matching static cases
// fold any trailing cases into the matching case
if (cases.length > 1 && IS_CONSTEXPR(discriminant)) {
const discriminant_value = CONSTVALUE(discriminant);
for (var i = 0; i < cases.length; i++) {
const test = cases[i].test;
if (IS_CONSTEXPR(test)) {
let test_value = CONSTVALUE(test);
if (discriminant_value === test_value) {
let new_consequent = cases[i].consequent;
if (i < cases.length - 1) {
for (let fallthrough of cases.slice(i+1)) {
new_consequent.push(...fallthrough.consequent);
}
}
cases[i].consequent = new_consequent;
REPLACE(path.get(['cases']), [cases[i]]);
return path;
}
}
else {
// we had a dynamic case need to bail
break;
}
}
}
}
return yield path;
};
const FOLD_UNREACHABLE = function*(path) {
if (path && path.node && path.parent && Array.isArray(path.parent.node)) {
if (path.node.type === 'ReturnStatement' ||
path.node.type === 'ContinueStatement' ||
path.node.type === 'BreakStatement' ||
path.node.type === 'ThrowStatement') {
const next_key = path.key + 1;
path.parent.node.splice(next_key, path.parent.node.length - next_key);
}
}
return yield path;
}
const FOLD_CONDITIONAL = function*(path) {
if (path && path.node && path.node.type === 'ConditionalExpression') {
console.error('FOLD_CONDITIONAL');
let { test, consequent, alternate } = path.node;
if (IS_CONSTEXPR(test)) {
test = CONSTVALUE(test);
if (test) {
return REPLACE(path, consequent);
}
return REPLACE(path, alternate);
}
}
return yield path;
};
const FOLD_BINARY = function*(path) {
if (
path &&
path.node &&
path.node.type === 'BinaryExpression' &&
!IS_NAN(path.node)
) {
console.error('FOLD_BINARY');
let { left, right, operator } = path.node;
if (operator === '==' || operator === '!=') {
let updated = false;
if (IS_UNDEFINED(left)) {
updated = true;
REPLACE(path.get(['left']), NULL);
}
if (IS_UNDEFINED(right)) {
updated = true;
REPLACE(path.get(['right']), NULL);
}
if (updated) {
return path;
}
}
if (path.node !== INFINITY && path.node !== NEG_INFINITY && IS_CONSTEXPR(left) && IS_CONSTEXPR(right)) {
left = CONSTVALUE(left);
right = CONSTVALUE(right);
let value;
if ((!left || typeof left !== 'object') && (!right || typeof right !== 'object')) {
if (operator === '+') {
value = left + right;
} else if (operator === '-') {
value = left - right;
} else if (operator === '*') {
value = left * right;
} else if (operator === '/') {
value = left / right;
} else if (operator === '%') {
value = left % right;
} else if (operator === '==') {
value = left == right;
} else if (operator === '!=') {
value = left != right;
} else if (operator === '===') {
value = left === right;
} else if (operator === '!==') {
value = left !== right;
} else if (operator === '<') {
value = left < right;
} else if (operator === '<=') {
value = left <= right;
} else if (operator === '>') {
value = left > right;
} else if (operator === '>=') {
value = left >= right;
} else if (operator === '<<') {
value = left << right;
} else if (operator === '>>') {
value = left >> right;
} else if (operator === '>>>') {
value = left >>> right;
} else if (operator === '|') {
value = left | right;
} else if (operator === '&') {
value = left & right;
} else if (operator === '^') {
value = left ^ right;
}
}
else {
if (operator === '==') value = false;
if (operator === '===') value = false;
if (operator === '!=') value = true;
if (operator === '!==') value = true;
if (operator === 'in' && typeof right === 'object' && right) {
value = Boolean(Object.getOwnPropertyDescriptor(right, left));
}
}
if (value !== void 0) {
if (typeof value === 'string' || typeof value === 'boolean' || value === null) {
return REPLACE(path, TO_CONSTEXPR(value));
}
if (typeof value === 'number') {
return REPLACE(path, TO_CONSTEXPR(value));
}
}
}
}
return yield path;
};
const FOLD_UNARY = function*(path) {
if (path && path.node && path.node.type === 'UnaryExpression') {
console.error('FOLD_UNARY');
if (IS_CONSTEXPR(path.node)) {
return yield path;
}
let { argument, operator } = path.node;
if (IS_CONSTEXPR(argument)) {
if (operator === 'void') {
return REPLACE(path, UNDEFINED);
}
let value = CONSTVALUE(argument);
if (operator === '-') {
value = -value;
} else if (operator === '+') {
value = +value;
} else if (operator === '~') {
value = ~value;
} else if (operator === '!') {
value = !value;
} else if (operator === 'typeof') {
value = typeof value;
} else if (operator === 'delete') {
value = true;
}
return REPLACE(path, TO_CONSTEXPR(value));
}
}
return yield path;
};
const FOLD_EVAL = function*(path) {
if (path && path.node && path.node.type === 'CallExpression' &&
path.node.callee.type === 'Identifier' && path.node.callee.name === 'eval') {
console.error('FOLD_EVAL');
if (path.node.arguments.length === 1 && path.node.arguments[0].type === 'Literal') {
let result = esprima.parse(`${
CONSTVALUE(path.node.arguments[0])
}`);
if (result.body.length === 1 && result.body[0].type === 'ExpressionStatement') {
return REPLACE(path, result.body[0].expression);
}
}
}
return yield path;
}
const FOLD_MEMBER = function*(path) {
if (path && path.node && path.node.type === 'MemberExpression') {
console.error('FOLD_MEMBER');
if (path.node.computed && path.node.property.type === 'Literal') {
const current = `${CONSTVALUE(path.node.property)}`;
if (typeof current === 'string' && /^[$_a-z][$_a-z\d]*$/i.test(current)) {
path.node.computed = false;
path.node.property = {
type: 'Identifier',
name: current,
};
return path;
}
}
if (IS_CONSTEXPR(path.node.object)) {
const value = CONSTVALUE(path.node.object);
if (typeof value === 'string' || Array.isArray(value) || (value && typeof value === 'object')) {
let key;
if (IS_CONSTEXPR(path.node.property)) {
key = `${CONSTVALUE(path.node.property)}`;
}
else if (!path.node.computed) {
key = path.node.property.name;
}
if (key !== void 0) {
const desc = Object.getOwnPropertyDescriptor(value, key);
if (desc) {
const folded = value[key];
console.error('FOLDING', JSON.stringify(folded));
if (IN_PRAGMA_POS(path) && typeof folded === 'string') {
if (value.length > 1) {
REPLACE(
path.get(['object']),
TO_CONSTEXPR(value.slice(key, key + 1))
);
REPLACE(path.get(['property']), TO_CONSTEXPR(0));
return path;
}
} else {
return REPLACE(path, TO_CONSTEXPR(value[key]));
}
}
}
}
}
}
return yield path;
};
const $MIN = Symbol();
const MIN_TRUE = Object.freeze({
[$MIN]: true,
type: 'UnaryExpression',
operator: '!',
argument: Object.freeze({
[$MIN]: true,
type: 'Literal',
value: 0
})
});
const MIN_FALSE = Object.freeze({
[$MIN]: true,
type: 'UnaryExpression',
operator: '!',
argument: Object.freeze({
[$MIN]: true,
type: 'Literal',
value: 1
})
});
const MIN_REPLACEMENTS = new Map;
MIN_REPLACEMENTS.set(true, MIN_TRUE);
MIN_REPLACEMENTS.set(false, MIN_FALSE);
const MIN_VALUES = function*(path) {
if (path && path.node && !path.node[$MIN] && IS_CONSTEXPR(path.node)) {
let value = CONSTVALUE(path.node);
if (MIN_REPLACEMENTS.has(value)) {
console.error('MIN_VALUE', value)
return REPLACE(path, MIN_REPLACEMENTS.get(value));
}
}
return yield path;
}
import esprima from 'esprima';
import util from 'util';
import escodegen from 'escodegen';
const optimize = (src) => {
const ROOT = new NodePath(
null,
esprima.parse(
src,
{
// loc: true,
// source: '<stdin>',
}
),
null
);
// all of these are things that could affect completion value positions
const walk_expressions = WalkCombinator.pipe(
...[
WalkCombinator.DEPTH_FIRST,
{
// We never work on Arrays
*inputs(path) {
if (Array.isArray(path)) return;
return yield path;
},
},
{ inputs: FOLD_UNREACHABLE },
{ inputs: FOLD_IF },
{ inputs: FOLD_SWITCH },
{ inputs: FOLD_EXPR_STMT },
{ inputs: FOLD_CONDITIONAL },
{ inputs: FOLD_LOGICAL },
{ inputs: FOLD_BINARY },
{ inputs: FOLD_UNARY },
{ inputs: FOLD_SEQUENCE },
{ inputs: FOLD_MEMBER },
{ inputs: FOLD_EMPTY },
{ inputs: FOLD_WHILE },
{ inputs: FOLD_EVAL },
]
).walk(ROOT);
for (const _ of walk_expressions) {
}
const minify = WalkCombinator.pipe(
...[
WalkCombinator.DEPTH_FIRST,
{
// We never work on Arrays
*inputs(path) {
if (Array.isArray(path)) return;
return yield path;
},
},
{ inputs: MIN_VALUES },
]
).walk(ROOT);
for (const _ of minify) {
}
return ROOT;
}
import mississippi from 'mississippi';
process.stdin.pipe(
mississippi.concat(buff => {
const ROOT = optimize(`${buff}`)
console.error(
'%s',
util.inspect(ROOT.node, {
depth: null,
colors: true,
})
);
const out = escodegen.generate(ROOT.node);
console.log(out);
})
);


@@ -0,0 +1,6 @@
import bar from './module.mjs';
function foo() {
return "I am foo";
}
export {foo};
console.log(bar);


@@ -0,0 +1,5 @@
import {foo} from './entry.mjs';
console.log(foo());
const bar = "I am bar.";
export {bar as default};


@@ -0,0 +1,106 @@
# nearley grammar
@builtin "string.ne"
@{%
function insensitive(sl) {
var s = sl.literal;
var result = [];
for (var i=0; i<s.length; i++) {
var c = s.charAt(i);
if (c.toUpperCase() !== c || c.toLowerCase() !== c) {
result.push(new RegExp("[" + c.toLowerCase() + c.toUpperCase() + "]"));
} else {
result.push({literal: c});
}
}
return {subexpression: [{tokens: result, postprocess: function(d) {return d.join(""); }}]};
}
%}
final -> whit? prog whit? {% function(d) { return d[1]; } %}
prog -> prod {% function(d) { return [d[0]]; } %}
| prod whit prog {% function(d) { return [d[0]].concat(d[2]); } %}
prod -> word whit? ("-"|"="):+ ">" whit? expression+ {% function(d) { return {name: d[0], rules: d[5]}; } %}
| word "[" wordlist "]" whit? ("-"|"="):+ ">" whit? expression+ {% function(d) {return {macro: d[0], args: d[2], exprs: d[8]}} %}
| "@" whit? js {% function(d) { return {body: d[2]}; } %}
| "@" word whit word {% function(d) { return {config: d[1], value: d[3]}; } %}
| "@include" whit? string {% function(d) {return {include: d[2].literal, builtin: false}} %}
| "@builtin" whit? string {% function(d) {return {include: d[2].literal, builtin: true }} %}
expression+ -> completeexpression
| expression+ whit? "|" whit? completeexpression {% function(d) { return d[0].concat([d[4]]); } %}
expressionlist -> completeexpression
| expressionlist whit? "," whit? completeexpression {% function(d) { return d[0].concat([d[4]]); } %}
wordlist -> word
| wordlist whit? "," whit? word {% function(d) { return d[0].concat([d[4]]); } %}
completeexpression -> expr {% function(d) { return {tokens: d[0]}; } %}
| expr whit? js {% function(d) { return {tokens: d[0], postprocess: d[2]}; } %}
expr_member ->
word {% id %}
| "$" word {% function(d) {return {mixin: d[1]}} %}
| word "[" expressionlist "]" {% function(d) {return {macrocall: d[0], args: d[2]}} %}
| string "i":? {% function(d) { if (d[1]) {return insensitive(d[0]); } else {return d[0]; } } %}
| "%" word {% function(d) {return {token: d[1]}} %}
| charclass {% id %}
| "(" whit? expression+ whit? ")" {% function(d) {return {'subexpression': d[2]} ;} %}
| expr_member whit? ebnf_modifier {% function(d) {return {'ebnf': d[0], 'modifier': d[2]}; } %}
ebnf_modifier -> ":+" {% id %} | ":*" {% id %} | ":?" {% id %}
expr -> expr_member
| expr whit expr_member {% function(d){ return d[0].concat([d[2]]); } %}
word -> [\w\?\+] {% function(d){ return d[0]; } %}
| word [\w\?\+] {% function(d){ return d[0]+d[1]; } %}
string -> dqstring {% function(d) {return { literal: d[0] }; } %}
#string -> "\"" charset "\"" {% function(d) { return { literal: d[1].join("") }; } %}
#
#charset -> null
# | charset char {% function(d) { return d[0].concat([d[1]]); } %}
#
#char -> [^\\"] {% function(d) { return d[0]; } %}
# | "\\" . {% function(d) { return JSON.parse("\""+"\\"+d[1]+"\""); } %}
charclass -> "." {% function(d) { return new RegExp("."); } %}
| "[" charclassmembers "]" {% function(d) { return new RegExp("[" + d[1].join('') + "]"); } %}
charclassmembers -> null
| charclassmembers charclassmember {% function(d) { return d[0].concat([d[1]]); } %}
charclassmember -> [^\\\]] {% function(d) { return d[0]; } %}
| "\\" . {% function(d) { return d[0] + d[1]; } %}
js -> "{" "%" jscode "%" "}" {% function(d) { return d[2]; } %}
jscode -> null {% function() {return "";} %}
| jscode [^%] {% function(d) {return d[0] + d[1];} %}
| jscode "%" [^}] {% function(d) {return d[0] + d[1] + d[2]; } %}
# Whitespace with a comment
whit -> whitraw
| whitraw? comment whit?
# Optional whitespace with a comment
whit? -> null
| whit
# Literally a string of whitespace
whitraw -> [\s]
| whitraw [\s]
# A string of whitespace OR the empty string
whitraw? -> null
| whitraw
comment -> "#" commentchars "\n"
commentchars -> null
| commentchars [^\n]


@@ -0,0 +1,230 @@
&ANALYZE-SUSPEND _VERSION-NUMBER AB_v10r12 GUI
&ANALYZE-RESUME
&Scoped-define WINDOW-NAME C-Win
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _DEFINITIONS C-Win
/*------------------------------------------------------------------------
File:
Description:
Input Parameters:
<none>
Output Parameters:
<none>
Author:
Created:
------------------------------------------------------------------------*/
/* This .W file was created with the Progress AppBuilder. */
/*----------------------------------------------------------------------*/
/* Create an unnamed pool to store all the widgets created
by this procedure. This is a good default which assures
that this procedure's triggers and internal procedures
will execute in this procedure's storage, and that proper
cleanup will occur on deletion of the procedure. */
CREATE WIDGET-POOL.
/* *************************** Definitions ************************** */
/* Parameters Definitions --- */
/* Local Variable Definitions --- */
/* _UIB-CODE-BLOCK-END */
&ANALYZE-RESUME
&ANALYZE-SUSPEND _UIB-PREPROCESSOR-BLOCK
/* ******************** Preprocessor Definitions ******************** */
&Scoped-define PROCEDURE-TYPE Window
&Scoped-define DB-AWARE no
/* Name of designated FRAME-NAME and/or first browse and/or first query */
&Scoped-define FRAME-NAME DEFAULT-FRAME
/* Custom List Definitions */
/* List-1,List-2,List-3,List-4,List-5,List-6 */
/* _UIB-PREPROCESSOR-BLOCK-END */
&ANALYZE-RESUME
/* *********************** Control Definitions ********************** */
/* Define the widget handle for the window */
DEFINE VAR C-Win AS WIDGET-HANDLE NO-UNDO.
/* ************************ Frame Definitions *********************** */
DEFINE FRAME DEFAULT-FRAME
WITH 1 DOWN NO-BOX KEEP-TAB-ORDER OVERLAY
SIDE-LABELS NO-UNDERLINE THREE-D
AT COL 1 ROW 1
SIZE 80 BY 16 WIDGET-ID 100.
/* *********************** Procedure Settings ************************ */
&ANALYZE-SUSPEND _PROCEDURE-SETTINGS
/* Settings for THIS-PROCEDURE
Type: Window
Allow: Basic,Browse,DB-Fields,Window,Query
Other Settings: COMPILE
*/
&ANALYZE-RESUME _END-PROCEDURE-SETTINGS
/* ************************* Create Window ************************** */
&ANALYZE-SUSPEND _CREATE-WINDOW
IF SESSION:DISPLAY-TYPE = "GUI":U THEN
CREATE WINDOW C-Win ASSIGN
HIDDEN = YES
TITLE = "<insert window title>"
HEIGHT = 16
WIDTH = 80
MAX-HEIGHT = 16
MAX-WIDTH = 80
VIRTUAL-HEIGHT = 16
VIRTUAL-WIDTH = 80
RESIZE = yes
SCROLL-BARS = no
STATUS-AREA = no
BGCOLOR = ?
FGCOLOR = ?
KEEP-FRAME-Z-ORDER = yes
THREE-D = yes
MESSAGE-AREA = no
SENSITIVE = yes.
ELSE {&WINDOW-NAME} = CURRENT-WINDOW.
/* END WINDOW DEFINITION */
&ANALYZE-RESUME
/* *********** Runtime Attributes and AppBuilder Settings *********** */
&ANALYZE-SUSPEND _RUN-TIME-ATTRIBUTES
/* SETTINGS FOR WINDOW C-Win
VISIBLE,,RUN-PERSISTENT */
/* SETTINGS FOR FRAME DEFAULT-FRAME
FRAME-NAME */
IF SESSION:DISPLAY-TYPE = "GUI":U AND VALID-HANDLE(C-Win)
THEN C-Win:HIDDEN = no.
/* _RUN-TIME-ATTRIBUTES-END */
&ANALYZE-RESUME
/* ************************ Control Triggers ************************ */
&Scoped-define SELF-NAME C-Win
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CONTROL C-Win C-Win
ON END-ERROR OF C-Win /* <insert window title> */
OR ENDKEY OF {&WINDOW-NAME} ANYWHERE DO:
/* This case occurs when the user presses the "Esc" key.
In a persistently run window, just ignore this. If we did not, the
application would exit. */
IF THIS-PROCEDURE:PERSISTENT THEN RETURN NO-APPLY.
END.
/* _UIB-CODE-BLOCK-END */
&ANALYZE-RESUME
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CONTROL C-Win C-Win
ON WINDOW-CLOSE OF C-Win /* <insert window title> */
DO:
/* This event will close the window and terminate the procedure. */
APPLY "CLOSE":U TO THIS-PROCEDURE.
RETURN NO-APPLY.
END.
/* _UIB-CODE-BLOCK-END */
&ANALYZE-RESUME
&UNDEFINE SELF-NAME
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _MAIN-BLOCK C-Win
/* *************************** Main Block *************************** */
/* Set CURRENT-WINDOW: this will parent dialog-boxes and frames. */
ASSIGN CURRENT-WINDOW = {&WINDOW-NAME}
THIS-PROCEDURE:CURRENT-WINDOW = {&WINDOW-NAME}.
/* The CLOSE event can be used from inside or outside the procedure to */
/* terminate it. */
ON CLOSE OF THIS-PROCEDURE
RUN disable_UI.
/* Best default for GUI applications is... */
PAUSE 0 BEFORE-HIDE.
/* Now enable the interface and wait for the exit condition. */
/* (NOTE: handle ERROR and END-KEY so cleanup code will always fire. */
MAIN-BLOCK:
DO ON ERROR UNDO MAIN-BLOCK, LEAVE MAIN-BLOCK
ON END-KEY UNDO MAIN-BLOCK, LEAVE MAIN-BLOCK:
RUN enable_UI.
IF NOT THIS-PROCEDURE:PERSISTENT THEN
WAIT-FOR CLOSE OF THIS-PROCEDURE.
END.
/* _UIB-CODE-BLOCK-END */
&ANALYZE-RESUME
/* ********************** Internal Procedures *********************** */
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _PROCEDURE disable_UI C-Win _DEFAULT-DISABLE
PROCEDURE disable_UI :
/*------------------------------------------------------------------------------
Purpose: DISABLE the User Interface
Parameters: <none>
Notes: Here we clean-up the user-interface by deleting
dynamic widgets we have created and/or hide
frames. This procedure is usually called when
we are ready to "clean-up" after running.
------------------------------------------------------------------------------*/
/* Delete the WINDOW we created */
IF SESSION:DISPLAY-TYPE = "GUI":U AND VALID-HANDLE(C-Win)
THEN DELETE WIDGET C-Win.
IF THIS-PROCEDURE:PERSISTENT THEN DELETE PROCEDURE THIS-PROCEDURE.
END PROCEDURE.
/* _UIB-CODE-BLOCK-END */
&ANALYZE-RESUME
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _PROCEDURE enable_UI C-Win _DEFAULT-ENABLE
PROCEDURE enable_UI :
/*------------------------------------------------------------------------------
Purpose: ENABLE the User Interface
Parameters: <none>
Notes: Here we display/view/enable the widgets in the
user-interface. In addition, OPEN all queries
associated with each FRAME and BROWSE.
These statements here are based on the "Other
Settings" section of the widget Property Sheets.
------------------------------------------------------------------------------*/
VIEW FRAME DEFAULT-FRAME IN WINDOW C-Win.
{&OPEN-BROWSERS-IN-QUERY-DEFAULT-FRAME}
VIEW C-Win.
END PROCEDURE.
/* _UIB-CODE-BLOCK-END */
&ANALYZE-RESUME

View File

@@ -0,0 +1,13 @@
@define-mixin size $size {
width: $size;
}
$big: 100px;
/* Main block */
.block {
&_logo {
background: inline("./logo.png");
@mixin size $big;
}
}

View File

@@ -0,0 +1,10 @@
@define-mixin size $size
width: $size
$big: 100px
// Main block
.block
&_logo
background: inline("./logo.png")
@mixin size $big

102
samples/TypeScript/cache.ts Normal file
View File

@@ -0,0 +1,102 @@
import { DocumentNode } from 'graphql';
import { getFragmentQueryDocument } from 'apollo-utilities';
import { DataProxy, Cache } from './types';
export type Transaction<T> = (c: ApolloCache<T>) => void;
export abstract class ApolloCache<TSerialized> implements DataProxy {
// required to implement
// core API
public abstract read<T>(query: Cache.ReadOptions): T;
public abstract write(write: Cache.WriteOptions): void;
public abstract diff<T>(query: Cache.DiffOptions): Cache.DiffResult<T>;
public abstract watch(watch: Cache.WatchOptions): () => void;
public abstract evict(query: Cache.EvictOptions): Cache.EvictionResult;
public abstract reset(): Promise<void>;
// initializer / offline / ssr API
/**
* Replaces existing state in the cache (if any) with the values expressed by
* `serializedState`.
*
* Called when hydrating a cache (server side rendering, or offline storage),
* and also (potentially) during hot reloads.
*/
public abstract restore(
serializedState: TSerialized,
): ApolloCache<TSerialized>;
/**
* Exposes the cache's complete state, in a serializable format for later restoration.
*/
public abstract extract(optimistic: boolean): TSerialized;
// optimistic API
public abstract removeOptimistic(id: string): void;
// transactional API
public abstract performTransaction(
transaction: Transaction<TSerialized>,
): void;
public abstract recordOptimisticTransaction(
transaction: Transaction<TSerialized>,
id: string,
): void;
// optional API
public transformDocument(document: DocumentNode): DocumentNode {
return document;
}
// experimental
public transformForLink(document: DocumentNode): DocumentNode {
return document;
}
// DataProxy API
/**
* Reads a GraphQL query from the cache.
* @param options the query document and any variables it needs
* @param optimistic whether to also read optimistic (uncommitted) writes
*/
public readQuery<QueryType>(
options: DataProxy.Query,
optimistic: boolean = false,
): QueryType {
return this.read({
query: options.query,
variables: options.variables,
optimistic,
});
}
public readFragment<FragmentType>(
options: DataProxy.Fragment,
optimistic: boolean = false,
): FragmentType | null {
return this.read({
query: getFragmentQueryDocument(options.fragment, options.fragmentName),
variables: options.variables,
rootId: options.id,
optimistic,
});
}
public writeQuery(options: Cache.WriteQueryOptions): void {
this.write({
dataId: 'ROOT_QUERY',
result: options.data,
query: options.query,
variables: options.variables,
});
}
public writeFragment(options: Cache.WriteFragmentOptions): void {
this.write({
dataId: options.id,
result: options.data,
variables: options.variables,
query: getFragmentQueryDocument(options.fragment, options.fragmentName),
});
}
}
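The abstract class above defines the contract; in practice callers go through the DataProxy helpers (readQuery, writeQuery, and the fragment variants), which delegate to the abstract read/write. A minimal usage sketch, assuming the concrete InMemoryCache subclass from apollo-cache-inmemory and gql from graphql-tag, neither of which is part of this diff:

import gql from 'graphql-tag';
import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache();
// writeQuery stores the result under the ROOT_QUERY id, per write() above.
cache.writeQuery({
  query: gql`{ user { id name } }`,
  data: { user: { id: 1, name: 'Ada' } },
});
// readQuery reads it back; a second `true` argument would also include
// optimistic (uncommitted) writes.
const data = cache.readQuery<{ user: { id: number; name: string } }>({
  query: gql`{ user { id name } }`,
});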

File diff suppressed because it is too large

View File

@@ -0,0 +1,77 @@
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<NDepend AppName="ExampleNDApp" Platform="DotNet">
<OutputDir KeepHistoric="True" KeepXmlFiles="True">c:\temp</OutputDir>
<Assemblies />
<FrameworkAssemblies />
<Dirs>
<Dir>C:\Windows\Microsoft.NET\Framework\v4.0.30319</Dir>
<Dir>C:\Windows\Microsoft.NET\Framework\v4.0.30319\WPF</Dir>
</Dirs>
<Report Kind="0" SectionsEnabled="12287" XslPath="" Flags="64512">
<Section Enabled="True">Application Metrics</Section>
<Section Enabled="True">.NET Assemblies Metrics</Section>
<Section Enabled="True">Treemap Metric View</Section>
<Section Enabled="True">.NET Assemblies Abstractness vs. Instability</Section>
<Section Enabled="True">.NET Assemblies Dependencies</Section>
<Section Enabled="True">.NET Assemblies Dependency Graph</Section>
<Section Enabled="True">.NET Assemblies Build Order</Section>
<Section Enabled="True">Analysis Log</Section>
<Section Enabled="True">CQL Rules Violated</Section>
<Section Enabled="True">Types Metrics</Section>
<Section Enabled="False">Types Dependencies</Section>
</Report>
<BuildComparisonSetting ProjectMode="DontCompare" BuildMode="MostRecentAnalysisResultAvailable" ProjectFileToCompareWith="" BuildFileToCompareWith="" NDaysAgo="1" />
<BaselineInUISetting ProjectMode="DontCompare" BuildMode="MostRecentAnalysisResultAvailable" ProjectFileToCompareWith="" BuildFileToCompareWith="" NDaysAgo="1" />
<CoverageFiles UncoverableAttribute="" />
<SourceFileRebasing FromPath="" ToPath="" />
<Queries>
<Group Name="Code Quality" Active="True" ShownInReport="False">
<Query Active="True" DisplayList="True" DisplayStat="True" DisplaySelectionView="False" IsCriticalRule="False"><![CDATA[// <Name>Discard generated and designer Methods from JustMyCode</Name>
// --- Make sure to make this query richer to discard generated methods from NDepend rules results ---
notmycode
//
// First define source files paths to discard
//
from a in Application.Assemblies
where a.SourceFileDeclAvailable
let asmSourceFilesPaths = a.SourceDecls.Select(s => s.SourceFile.FilePath)
let sourceFilesPathsToDiscard = (
from filePath in asmSourceFilesPaths
let filePathLower= filePath.ToString().ToLower()
where
filePathLower.EndsWithAny(
".g.cs", // Popular pattern to name generated files.
".g.vb",
".xaml", // notmycode WPF xaml code
".designer.cs", // notmycode C# Windows Forms designer code
".designer.vb") // notmycode VB.NET Windows Forms designer code
||
// notmycode methods in source files in a directory containing generated
filePathLower.Contains("generated")
select filePath
).ToHashSet()
//
// Second: discard methods in sourceFilesPathsToDiscard
//
from m in a.ChildMethods
where (m.SourceFileDeclAvailable &&
sourceFilesPathsToDiscard.Contains(m.SourceDecls.First().SourceFile.FilePath)) ||
// Generated methods might be tagged with this attribute
m.HasAttribute ("System.CodeDom.Compiler.GeneratedCodeAttribute".AllowNoMatch())
select new { m, m.NbLinesOfCode }]]></Query>
<Query Active="True" DisplayList="True" DisplayStat="True" DisplaySelectionView="False" IsCriticalRule="False"><![CDATA[// <Name>Discard generated Fields from JustMyCode</Name>
// --- Make sure to make this query richer to discard generated fields from NDepend rules results ---
notmycode
from f in Application.Fields where
f.HasAttribute ("System.CodeDom.Compiler.GeneratedCodeAttribute".AllowNoMatch()) ||
// Eliminate "components" generated in Windows Forms Control context
f.Name == "components" && f.ParentType.DeriveFrom("System.Windows.Forms.Control".AllowNoMatch())
select f]]></Query>
</Group>
</Queries>
<WarnFilter />
</NDepend>

183
samples/XML/chrome.natvis Normal file
View File

@@ -0,0 +1,183 @@
<?xml version="1.0" encoding="utf-8" ?>
<!--
Copyright 2015 The Chromium Authors. All rights reserved.
https://cs.chromium.org/chromium/src/tools/win/DebugVisualizers/chrome.natvis
-->
<AutoVisualizer
xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
<Type Name="gfx::Point">
<AlternativeType Name="gfx::PointF"/>
<DisplayString>({x_}, {y_})</DisplayString>
</Type>
<Type Name="gfx::Size">
<AlternativeType Name="gfx::SizeF"/>
<DisplayString>({width_}, {height_})</DisplayString>
</Type>
<Type Name="gfx::Rect">
<AlternativeType Name="gfx::RectF"/>
<DisplayString>({origin_.x_}, {origin_.y_}) x ({size_.width_}, {size_.height_})</DisplayString>
</Type>
<Type Name="scoped_refptr&lt;*&gt;">
<DisplayString Condition="ptr_ == 0">null</DisplayString>
<DisplayString>[{((base::subtle::RefCountedBase*)ptr_)-&gt;ref_count_}] {(void*)ptr_} {*ptr_}</DisplayString>
<Expand>
<Item Name="Ptr">ptr_</Item>
<Item Name="RefCount">((base::subtle::RefCountedBase*)ptr_)-&gt;ref_count_</Item>
</Expand>
</Type>
<Type Name="base::Optional&lt;*&gt;">
<DisplayString Condition="storage_.is_null_">(null)</DisplayString>
<DisplayString>{storage_.value_}</DisplayString>
</Type>
<Type Name="base::RefCounted&lt;*&gt;">
<DisplayString>RefCount: {ref_count_}</DisplayString>
<Expand>
<Item Name="RefCount">ref_count_</Item>
</Expand>
</Type>
<Type Name="IPC::Message::Header">
<DisplayString>{{Routing: {routing}, Type: {type}}}</DisplayString>
<Expand>
<Item Name="RoutingId">routing</Item>
<Item Name="Type">type</Item>
<Synthetic Name="Priority"
Condition="(flags &amp; IPC::Message::PRIORITY_MASK) ==
IPC::Message::PRIORITY_LOW">
<DisplayString>Low</DisplayString>
</Synthetic>
<Synthetic Name="Priority"
Condition="(flags &amp; IPC::Message::PRIORITY_MASK) ==
IPC::Message::PRIORITY_NORMAL">
<DisplayString>Normal</DisplayString>
</Synthetic>
<Synthetic Name="Priority"
Condition="(flags &amp; IPC::Message::PRIORITY_MASK) ==
IPC::Message::PRIORITY_HIGH">
<DisplayString>High</DisplayString>
</Synthetic>
<Synthetic Name="Sync"
Condition="(flags &amp; IPC::Message::SYNC_BIT) != 0">
<DisplayString>true</DisplayString>
</Synthetic>
<Synthetic Name="Sync"
Condition="(flags &amp; IPC::Message::SYNC_BIT) == 0">
<DisplayString>false</DisplayString>
</Synthetic>
<Synthetic Name="Reply"
Condition="(flags &amp; IPC::Message::REPLY_BIT) != 0">
<DisplayString>true</DisplayString>
</Synthetic>
<Synthetic Name="Reply"
Condition="(flags &amp; IPC::Message::REPLY_BIT) == 0">
<DisplayString>false</DisplayString>
</Synthetic>
<Synthetic Name="ReplyError"
Condition="(flags &amp; IPC::Message::REPLY_ERROR_BIT) != 0">
<DisplayString>true</DisplayString>
</Synthetic>
<Synthetic Name="ReplyError"
Condition="(flags &amp; IPC::Message::REPLY_ERROR_BIT) == 0">
<DisplayString>false</DisplayString>
</Synthetic>
<Synthetic Name="Unblock"
Condition="(flags &amp; IPC::Message::UNBLOCK_BIT) != 0">
<DisplayString>true</DisplayString>
</Synthetic>
<Synthetic Name="Unblock"
Condition="(flags &amp; IPC::Message::UNBLOCK_BIT) == 0">
<DisplayString>false</DisplayString>
</Synthetic>
<Synthetic Name="PumpingMessages"
Condition="(flags &amp; IPC::Message::PUMPING_MSGS_BIT) != 0">
<DisplayString>true</DisplayString>
</Synthetic>
<Synthetic Name="PumpingMessages"
Condition="(flags &amp; IPC::Message::PUMPING_MSGS_BIT) == 0">
<DisplayString>false</DisplayString>
</Synthetic>
<Synthetic Name="HasSentTime"
Condition="(flags &amp; IPC::Message::HAS_SENT_TIME_BIT) != 0">
<DisplayString>true</DisplayString>
</Synthetic>
<Synthetic Name="HasSentTime"
Condition="(flags &amp; IPC::Message::HAS_SENT_TIME_BIT) == 0">
<DisplayString>false</DisplayString>
</Synthetic>
</Expand>
</Type>
<Type Name="IPC::Message">
<DisplayString>{{size = {header_size_+capacity_after_header_}}}</DisplayString>
<Expand>
<ExpandedItem>*((IPC::Message::Header*)header_),nd</ExpandedItem>
<Item Name="Payload">(void*)((char*)header_ + header_size_)</Item>
</Expand>
</Type>
<Type Name="base::TimeDelta">
<DisplayString>{delta_}</DisplayString>
<Expand>
<Synthetic Name="Days">
<DisplayString>{(int)(delta_ / {,,base.dll}base::Time::kMicrosecondsPerDay)}</DisplayString>
</Synthetic>
<Synthetic Name="Hours">
<DisplayString>{(int)(delta_ / {,,base.dll}base::Time::kMicrosecondsPerHour)}</DisplayString>
</Synthetic>
<Synthetic Name="Minutes">
<DisplayString>{(int)(delta_ / {,,base.dll}base::Time::kMicrosecondsPerMinute)}</DisplayString>
</Synthetic>
<Synthetic Name="Seconds">
<DisplayString>{(int)(delta_ / {,,base.dll}base::Time::kMicrosecondsPerSecond)}</DisplayString>
</Synthetic>
<Synthetic Name="Milliseconds">
<DisplayString>{(int)(delta_ / {,,base.dll}base::Time::kMicrosecondsPerMillisecond)}</DisplayString>
</Synthetic>
<Item Name="Microseconds">delta_</Item>
</Expand>
</Type>
<Type Name="GURL">
<DisplayString>{spec_}</DisplayString>
</Type>
<Type Name="base::ManualConstructor&lt;*&gt;">
<!-- $T1 expands to the first "*" in the name which is the template
type. Use that to cast to the correct value. -->
<DisplayString>{*($T1*)space_.data_}</DisplayString>
<Expand>
<ExpandedItem>*($T1*)space_.data_</ExpandedItem>
</Expand>
</Type>
<Type Name="base::internal::flat_tree&lt;*&gt;">
<AlternativeType Name="base::flat_set&lt;*&gt;"/>
<DisplayString>{impl_.body_}</DisplayString>
<Expand>
<ExpandedItem>impl_.body_</ExpandedItem>
</Expand>
</Type>
<Type Name="base::flat_map&lt;*&gt;">
<DisplayString>{impl_.body_}</DisplayString>
<Expand>
<ExpandedItem>impl_.body_</ExpandedItem>
</Expand>
</Type>
<Type Name="base::Value">
<DisplayString Condition="type_ == NONE">NONE</DisplayString>
<DisplayString Condition="type_ == BOOLEAN">BOOLEAN {bool_value_}</DisplayString>
<DisplayString Condition="type_ == INTEGER">INTEGER {int_value_}</DisplayString>
<DisplayString Condition="type_ == DOUBLE">DOUBLE {double_value_}</DisplayString>
<DisplayString Condition="type_ == STRING">STRING {string_value_}</DisplayString>
<DisplayString Condition="type_ == BINARY">BINARY {binary_value_}</DisplayString>
<DisplayString Condition="type_ == DICTIONARY">DICTIONARY {dict_}</DisplayString>
<DisplayString Condition="type_ == LIST">LIST {list_}</DisplayString>
<Expand>
<Item Name="[type]">type_</Item>
<Item Condition="type_ == BOOLEAN" Name="[boolean]">bool_value_</Item>
<Item Condition="type_ == INTEGER" Name="[integer]">int_value_</Item>
<Item Condition="type_ == DOUBLE" Name="[double]">double_value_</Item>
<Item Condition="type_ == STRING" Name="[string]">string_value_</Item>
<Item Condition="type_ == BINARY" Name="[binary]">binary_value_</Item>
<!-- Put the members for dictionary and list directly inline without
requiring a separate expansion to view. -->
<ExpandedItem Condition="type_ == DICTIONARY">dict_</ExpandedItem>
<ExpandedItem Condition="type_ == LIST">list_</ExpandedItem>
</Expand>
</Type>
</AutoVisualizer>

View File

@@ -0,0 +1,9 @@
<?xml version="1.0"?>
<ServiceConfiguration serviceName="MyDef" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
<Role name="My.Web">
<Instances count="1" />
<ConfigurationSettings>
<Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
</ConfigurationSettings>
</Role>
</ServiceConfiguration>

View File

@@ -0,0 +1,11 @@
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyDef" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
<WebRole name="My.Web">
<InputEndpoints>
<InputEndpoint name="HttpIn" protocol="http" port="80" />
</InputEndpoints>
<ConfigurationSettings>
<Setting name="DiagnosticsConnectionString" />
</ConfigurationSettings>
</WebRole>
</ServiceDefinition>

View File

@@ -0,0 +1,9 @@
<?xml version="1.0"?>
<ServiceConfiguration serviceName="MyDef" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
<Role name="My.Web">
<Instances count="1" />
<ConfigurationSettings>
<Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
</ConfigurationSettings>
</Role>
</ServiceConfiguration>

View File

@@ -0,0 +1,14 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Build">
<Import Project="$([MSBuild]::GetDirectoryNameOfFileAbove($(MSBuildThisFileDirectory), dir.props))\dir.props" />
<PropertyGroup>
<AssemblyVersion>3.9.0.0</AssemblyVersion>
<OutputType>Library</OutputType>
<PackageTargetFramework>dotnet5.1</PackageTargetFramework>
<NuGetTargetMoniker>.NETPlatform,Version=v5.1</NuGetTargetMoniker>
</PropertyGroup>
<ItemGroup>
<None Include="project.json" />
</ItemGroup>
<Import Project="$([MSBuild]::GetDirectoryNameOfFileAbove($(MSBuildThisFileDirectory), dir.targets))\dir.targets" />
</Project>

View File

@@ -0,0 +1,11 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup>
<ProjectGuid>{86244B26-C4AE-4F69-9315-B6148C0FE270}</ProjectGuid>
</PropertyGroup>
<Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" />
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\CodeSharing\Microsoft.CodeSharing.Common.Default.props" />
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\CodeSharing\Microsoft.CodeSharing.Common.props" />
<Import Project="SharedProject.projitems" Label="Shared" />
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\CodeSharing\Microsoft.CodeSharing.CSharp.targets" />
</Project>

View File

@@ -0,0 +1,38 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup>
<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
<ProductVersion>1.0.0</ProductVersion>
<ProjectGuid>{0beae469-c1c6-4648-a2e5-0ae0ea9efffa}</ProjectGuid>
<OutputType>Library</OutputType>
<AppDesignerFolder>Properties</AppDesignerFolder>
<RootNamespace>MyDef</RootNamespace>
<AssemblyName>MyDef</AssemblyName>
<StartDevelopmentStorage>True</StartDevelopmentStorage>
<Name>My</Name>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
<DebugSymbols>true</DebugSymbols>
<DebugType>full</DebugType>
<Optimize>false</Optimize>
<OutputPath>bin\Debug\</OutputPath>
<DefineConstants>DEBUG;TRACE</DefineConstants>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
</PropertyGroup>
<!-- Items for the project -->
<ItemGroup>
<ServiceDefinition Include="ServiceDefinition.csdef" />
<ServiceConfiguration Include="ServiceConfiguration.cscfg" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\My.Web\My.Web.csproj">
<Name>My.Web</Name>
<Project>{1515c2c3-0b57-422c-a6f9-0891b86fb7d3}</Project>
<Private>True</Private>
<RoleType>Web</RoleType>
<RoleName>My.Web</RoleName>
</ProjectReference>
</ItemGroup>
</Project>

View File

@@ -0,0 +1,85 @@
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets"/>
<UsingTask TaskName="Microsoft.Build.Tasks.XmlPeek" AssemblyName="Microsoft.Build.Tasks.v4.0, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
<UsingTask TaskName="Microsoft.Build.Tasks.XmlPoke" AssemblyName="Microsoft.Build.Tasks.v4.0, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
<PropertyGroup>
<SolutionRoot>$(MSBuildProjectDirectory)\..</SolutionRoot>
<ProjectRoot>$(SolutionRoot)\Src\Bowerbird.Website</ProjectRoot>
<ArtifactsDir>$(SolutionRoot)\Release</ArtifactsDir>
<CurrentBuildDateStamp>$([System.DateTime]::Now.ToString("yyyyMMdd"))</CurrentBuildDateStamp>
<CurrentBuildTimeStamp>$([System.DateTime]::Now.ToString("hhmm"))</CurrentBuildTimeStamp>
<CurrentBuildDir>$(ArtifactsDir)\$(CurrentBuildDateStamp)-$(Configuration)</CurrentBuildDir>
</PropertyGroup>
<PropertyGroup>
<VersionMajor>0</VersionMajor>
<VersionMinor>1</VersionMinor>
<VersionPatch>0</VersionPatch>
<VersionPreRelease></VersionPreRelease>
</PropertyGroup>
<PropertyGroup>
<WebConfig>$(CurrentBuildDir)\Web.config</WebConfig>
</PropertyGroup>
<ItemGroup>
<PackageFiles Include="$(ProjectRoot)\**\*.*"
Exclude="$(ProjectRoot)\bin\*.pdb;
$(ProjectRoot)\bin\*.xml;
$(ProjectRoot)\Logs\**\*.*;
$(ProjectRoot)\obj\**\*.*;
$(ProjectRoot)\test\**\*.*;
$(ProjectRoot)\media\**\*.*;
$(ProjectRoot)\**\*.orig;
$(ProjectRoot)\*.config;
$(ProjectRoot)\*.xml;
$(ProjectRoot)\**\*.csproj;
$(ProjectRoot)\*.csproj.user;">
</PackageFiles>
<ConfigFiles Include="$(ProjectRoot)\Web.config" >
</ConfigFiles>
</ItemGroup>
<Target Name="UpdateWebConfig" Condition=" '$(CurrentBuildDateStamp)' != '' ">
<XmlPoke Namespaces="&lt;Namespace Prefix='msb' Uri='http://schemas.microsoft.com/developer/msbuild/2003'/&gt;"
XmlInputPath="$(WebConfig)"
Query="//add[@key='staticContentIncrement']/@value"
Value="$(CurrentBuildDateStamp)-$(CurrentBuildTimeStamp)" />
</Target>
<Target Name="CreateOutputDir">
<Message Text="Creating Directory $(CurrentBuildDir)" />
<RemoveDir Directories="$(CurrentBuildDir)" />
<Delete Files="$(CurrentBuildDir)" />
<MakeDir Directories="$(CurrentBuildDir)" />
</Target>
<Target Name="BuildMediaDirectories">
<MakeDir Directories="$(CurrentBuildDir)\media" />
</Target>
<Target Name="ConfigSettingsMessages">
<Message Text="Configuration is $(Configuration)" />
<Message Text="BuildNumber is $(BuildNumber)" />
<Message Text="ProjectRoot is $(ProjectRoot)" />
<Message Text="CurrentBuildDir is $(CurrentBuildDir)" />
</Target>
<Target Name="BuildSolution">
<MSBuild Projects="$(SolutionRoot)\Bowerbird.sln" Targets="Build" Properties="Configuration=$(Configuration)" />
</Target>
<Target Name="CopyFilesToReleaseDir">
<Copy SourceFiles="@(PackageFiles)" DestinationFiles="@(PackageFiles->'$(CurrentBuildDir)\%(RecursiveDir)%(Filename)%(Extension)')" />
<Copy SourceFiles="@(ConfigFiles)" DestinationFiles="$(CurrentBuildDir)\web.config" />
</Target>
<Target Name="ZipUpReleaseFiles">
<ItemGroup>
<ZipFiles Include="$(CurrentBuildDir)\**\*.*" Exclude="*.zip" />
</ItemGroup>
<Zip Files="@(ZipFiles)" WorkingDirectory="$(CurrentBuildDir)\$(Configuration)\" ZipFileName="$(CurrentBuildDateStamp)-$(Configuration).zip" ZipLevel="9" />
</Target>
<Target Name="CopyZipToReleaseDir" DependsOnTargets="ZipUpReleaseFiles">
<Copy SourceFiles="$(MSBuildProjectDirectory)\$(CurrentBuildDateStamp)-$(Configuration).zip" DestinationFiles="$(ArtifactsDir)\$(CurrentBuildDateStamp)-$(Configuration).zip" />
<Delete Files="$(MSBuildProjectDirectory)\$(CurrentBuildDateStamp)-$(Configuration).zip" />
</Target>
<Target Name="Build" DependsOnTargets="CreateOutputDir">
<CallTarget Targets="BuildMediaDirectories"/>
<CallTarget Targets="ConfigSettingsMessages"/>
<CallTarget Targets="BuildSolution"/>
<CallTarget Targets="CopyFilesToReleaseDir"/>
<CallTarget Targets="UpdateWebConfig" />
<CallTarget Targets="CopyZipToReleaseDir"/>
</Target>
</Project>

View File

@@ -0,0 +1,30 @@
---
Checks: 'clang-diagnostic-*,clang-analyzer-*'
WarningsAsErrors: ''
HeaderFilterRegex: ''
AnalyzeTemporaryDtors: false
FormatStyle: none
User: linguist-user
CheckOptions:
- key: google-readability-braces-around-statements.ShortStatementLines
value: '1'
- key: google-readability-function-size.StatementThreshold
value: '800'
- key: google-readability-namespace-comments.ShortNamespaceLines
value: '10'
- key: google-readability-namespace-comments.SpacesBeforeComments
value: '2'
- key: modernize-loop-convert.MaxCopySize
value: '16'
- key: modernize-loop-convert.MinConfidence
value: reasonable
- key: modernize-loop-convert.NamingStyle
value: CamelCase
- key: modernize-pass-by-value.IncludeStyle
value: llvm
- key: modernize-replace-auto-ptr.IncludeStyle
value: llvm
- key: modernize-use-nullptr.NullMacros
value: 'NULL'
...

View File

@@ -0,0 +1,23 @@
rule OfExample2
{
strings:
$foo1 = "foo1"
$foo2 = "foo2"
$foo3 = "foo3"
condition:
2 of ($foo*) // equivalent to 2 of ($foo1,$foo2,$foo3)
}
rule OfExample3
{
strings:
$foo1 = "foo1"
$foo2 = "foo2"
$bar1 = "bar1"
$bar2 = "bar2"
condition:
3 of ($foo*,$bar1,$bar2)
}

13
samples/YARA/example.yara Normal file
View File

@@ -0,0 +1,13 @@
rule silent_banker : banker
{
meta:
description = "This is just an example"
thread_level = 3
in_the_wild = true
strings:
$a = {6A 40 68 00 30 00 00 6A 14 8D 91}
$b = {8D 4D B0 2B C1 83 C0 27 99 6A 4E 59 F7 F9}
$c = "UVODFRYSIHLNWPEJXQZAKCBGMT"
condition:
$a or $b or $c
}

1
samples/YARA/true.yar Normal file
View File

@@ -0,0 +1 @@
rule test { condition: true }

21
samples/wdl/hello.wdl Normal file
View File

@@ -0,0 +1,21 @@
# Sample originally from https://github.com/broadinstitute/centaur
task hello {
String addressee
command {
echo "Hello ${addressee}!"
}
output {
String salutation = read_string(stdout())
}
runtime {
docker: "ubuntu@sha256:71cd81252a3563a03ad8daee81047b62ab5d892ebbfbf71cf53415f29c130950"
}
}
workflow wf_hello {
call hello
output {
hello.salutation
}
}

View File

@@ -0,0 +1,44 @@
# Sample originally from https://github.com/broadinstitute/centaur
task validate_int {
Int i
command {
echo $(( ${i} % 2 ))
}
output {
Boolean validation = read_int(stdout()) == 1
}
runtime {
docker: "ubuntu:latest"
}
}
task mirror {
Int i
command {
echo ${i}
}
output {
Int out = read_int(stdout())
}
runtime {
docker: "ubuntu:latest"
}
}
workflow ifs_in_scatters {
Array[Int] numbers = range(5)
scatter (n in numbers) {
call validate_int { input: i = n }
if (validate_int.validation) {
Int incremented = n + 1
call mirror { input: i = incremented }
}
}
output {
Array[Int?] mirrors = mirror.out
}
}

View File

@@ -0,0 +1,42 @@
# Sample originally from https://github.com/broadinstitute/centaur
##
# Check that we can:
# - Create a file from a task and feed it into subsequent commands.
# - Create a file output by interpolating a file name
# - Use engine functions on an interpolated file name
##
task mkFile {
command {
echo "small file contents" > out.txt
}
output { File out = "out.txt" }
runtime { docker: "ubuntu:latest" }
}
task consumeFile {
File in_file
String out_name
command {
cat ${in_file} > ${out_name}
}
runtime {
docker: "ubuntu:latest"
}
output {
File out_interpolation = "${out_name}"
String contents = read_string("${out_name}")
String contentsAlt = read_string(out_interpolation)
}
}
workflow filepassing {
call mkFile
call consumeFile {input: in_file=mkFile.out, out_name = "myFileName.abc.txt" }
output {
consumeFile.contents
consumeFile.contentsAlt
}
}

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env ruby
require "optparse"
require "open3"
ROOT = File.expand_path("../../", __FILE__)
@@ -42,6 +43,17 @@ def log(msg)
puts msg if $verbose
end
def command(*args)
log "$ #{args.join(' ')}"
output, status = Open3.capture2e(*args)
if !status.success?
output.each_line do |line|
log " > #{line}"
end
warn "Command failed. Aborting."
exit 1
end
end
usage = """Usage:
#{$0} [-v|--verbose] [--replace grammar] url
@@ -51,12 +63,12 @@ Examples:
"""
$replace = nil
$verbose = false
$verbose = true
OptionParser.new do |opts|
opts.banner = usage
opts.on("-v", "--verbose", "Print verbose feedback to STDOUT") do
$verbose = true
opts.on("-q", "--quiet", "Do not print output unless there's a failure") do
$verbose = false
end
opts.on("-rSUBMODULE", "--replace=SUBMODDULE", "Replace an existing grammar submodule.") do |name|
$replace = name
@@ -82,23 +94,22 @@ Dir.chdir(ROOT)
if repo_old
log "Deregistering: #{repo_old}"
`git submodule deinit #{repo_old}`
`git rm -rf #{repo_old}`
`script/convert-grammars`
command('git', 'submodule', 'deinit', repo_old)
command('git', 'rm', '-rf', repo_old)
command('script/grammar-compiler', 'update', '-f')
end
log "Registering new submodule: #{repo_new}"
`git submodule add -f #{https} #{repo_new}`
exit 1 if $?.exitstatus > 0
`script/convert-grammars --add #{repo_new}`
command('git', 'submodule', 'add', '-f', https, repo_new)
command('script/grammar-compiler', 'add', repo_new)
log "Confirming license"
if repo_old
`script/licensed`
command('script/licensed')
else
`script/licensed --module "#{repo_new}"`
command('script/licensed', '--module', repo_new)
end
log "Updating grammar documentation in vendor/REAEDME.md"
`bundle exec rake samples`
`script/list-grammars`
log "Updating grammar documentation in vendor/README.md"
command('bundle', 'exec', 'rake', 'samples')
command('script/list-grammars')

View File

@@ -1,319 +0,0 @@
#!/usr/bin/env ruby
require 'bundler/setup'
require 'json'
require 'net/http'
require 'optparse'
require 'plist'
require 'set'
require 'thread'
require 'tmpdir'
require 'uri'
require 'yaml'
ROOT = File.expand_path("../..", __FILE__)
GRAMMARS_PATH = File.join(ROOT, "grammars")
SOURCES_FILE = File.join(ROOT, "grammars.yml")
CSONC = File.join(ROOT, "node_modules", ".bin", "csonc")
$options = {
:add => false,
:install => true,
:output => SOURCES_FILE,
:remote => true,
}
class SingleFile
def initialize(path)
@path = path
end
def url
@path
end
def fetch(tmp_dir)
[@path]
end
end
class DirectoryPackage
def self.fetch(dir)
Dir["#{dir}/**/*"].select do |path|
case File.extname(path.downcase)
when '.plist'
path.split('/')[-2] == 'Syntaxes'
when '.tmlanguage', '.yaml-tmlanguage'
true
when '.cson', '.json'
path.split('/')[-2] == 'grammars'
else
false
end
end
end
def initialize(directory)
@directory = directory
end
def url
@directory
end
def fetch(tmp_dir)
self.class.fetch(File.join(ROOT, @directory))
end
end
class TarballPackage
def self.fetch(tmp_dir, url)
`curl --silent --location --max-time 30 --output "#{tmp_dir}/archive" "#{url}"`
raise "Failed to fetch GH package: #{url} #{$?.to_s}" unless $?.success?
output = File.join(tmp_dir, 'extracted')
Dir.mkdir(output)
`tar -C "#{output}" -xf "#{tmp_dir}/archive"`
raise "Failed to uncompress tarball: #{tmp_dir}/archive (from #{url}) #{$?.to_s}" unless $?.success?
DirectoryPackage.fetch(output)
end
attr_reader :url
def initialize(url)
@url = url
end
def fetch(tmp_dir)
self.class.fetch(tmp_dir, url)
end
end
class SingleGrammar
attr_reader :url
def initialize(url)
@url = url
end
def fetch(tmp_dir)
filename = File.join(tmp_dir, File.basename(url))
`curl --silent --location --max-time 10 --output "#{filename}" "#{url}"`
raise "Failed to fetch grammar: #{url}: #{$?.to_s}" unless $?.success?
[filename]
end
end
class SVNPackage
attr_reader :url
def initialize(url)
@url = url
end
def fetch(tmp_dir)
`svn export -q "#{url}/Syntaxes" "#{tmp_dir}/Syntaxes"`
raise "Failed to export SVN repository: #{url}: #{$?.to_s}" unless $?.success?
Dir["#{tmp_dir}/Syntaxes/*.{plist,tmLanguage,tmlanguage,YAML-tmLanguage}"]
end
end
class GitHubPackage
def self.parse_url(url)
url, ref = url.split("@", 2)
path = URI.parse(url).path.split('/')
[path[1], path[2].chomp('.git'), ref || "master"]
end
attr_reader :user
attr_reader :repo
attr_reader :ref
def initialize(url)
@user, @repo, @ref = self.class.parse_url(url)
end
def url
suffix = "@#{ref}" unless ref == "master"
"https://github.com/#{user}/#{repo}#{suffix}"
end
def fetch(tmp_dir)
url = "https://github.com/#{user}/#{repo}/archive/#{ref}.tar.gz"
TarballPackage.fetch(tmp_dir, url)
end
end
def load_grammar(path)
case File.extname(path.downcase)
when '.plist', '.tmlanguage'
Plist::parse_xml(path)
when '.yaml-tmlanguage'
content = File.read(path)
# Attempt to parse YAML file even if it has a YAML 1.2 header
if content.lines[0] =~ /^%YAML[ :]1\.2/
content = content.lines[1..-1].join
end
begin
YAML.load(content)
rescue Psych::SyntaxError => e
$stderr.puts "Failed to parse YAML grammar '#{path}'"
end
when '.cson'
cson = `"#{CSONC}" "#{path}"`
raise "Failed to convert CSON grammar '#{path}': #{$?.to_s}" unless $?.success?
JSON.parse(cson)
when '.json'
JSON.parse(File.read(path))
else
raise "Invalid document type #{path}"
end
end
def load_grammars(tmp_dir, source, all_scopes)
is_url = source.start_with?("http:", "https:")
return [] if is_url && !$options[:remote]
return [] if !is_url && !File.exist?(source)
p = if !is_url
if File.directory?(source)
DirectoryPackage.new(source)
else
SingleFile.new(source)
end
elsif source.end_with?('.tmLanguage', '.plist', '.YAML-tmLanguage')
SingleGrammar.new(source)
elsif source.start_with?('https://github.com')
GitHubPackage.new(source)
elsif source.start_with?('http://svn.textmate.org')
SVNPackage.new(source)
elsif source.end_with?('.tar.gz')
TarballPackage.new(source)
else
nil
end
raise "Unsupported source: #{source}" unless p
p.fetch(tmp_dir).map do |path|
grammar = load_grammar(path)
scope = grammar['scopeName'] || grammar['scope']
if all_scopes.key?(scope)
unless all_scopes[scope] == p.url
$stderr.puts "WARN: Duplicated scope #{scope}\n" +
" Current package: #{p.url}\n" +
" Previous package: #{all_scopes[scope]}"
end
next
end
all_scopes[scope] = p.url
grammar
end.compact
end
def install_grammars(grammars, path)
installed = []
grammars.each do |grammar|
scope = grammar['scopeName'] || grammar['scope']
File.write(File.join(GRAMMARS_PATH, "#{scope}.json"), JSON.pretty_generate(grammar))
installed << scope
end
$stderr.puts("OK #{path} (#{installed.join(', ')})")
end
def run_thread(queue, all_scopes)
Dir.mktmpdir do |tmpdir|
loop do
source, index = begin
queue.pop(true)
rescue ThreadError
# The queue is empty.
break
end
dir = "#{tmpdir}/#{index}"
Dir.mkdir(dir)
grammars = load_grammars(dir, source, all_scopes)
install_grammars(grammars, source) if $options[:install]
end
end
end
def generate_yaml(all_scopes, base)
yaml = all_scopes.each_with_object(base) do |(key,value),out|
out[value] ||= []
out[value] << key
end
yaml = Hash[yaml.sort]
yaml.each { |k, v| v.sort! }
yaml
end
def main(sources)
begin
Dir.mkdir(GRAMMARS_PATH)
rescue Errno::EEXIST
end
`npm install`
all_scopes = {}
if source = $options[:add]
Dir.mktmpdir do |tmpdir|
grammars = load_grammars(tmpdir, source, all_scopes)
install_grammars(grammars, source) if $options[:install]
end
generate_yaml(all_scopes, sources)
else
queue = Queue.new
sources.each do |url, scopes|
queue.push([url, queue.length])
end
threads = 8.times.map do
Thread.new { run_thread(queue, all_scopes) }
end
threads.each(&:join)
generate_yaml(all_scopes, {})
end
end
OptionParser.new do |opts|
opts.banner = "Usage: #{$0} [options]"
opts.on("--add GRAMMAR", "Add a new grammar. GRAMMAR may be a file path or URL.") do |a|
$options[:add] = a
end
opts.on("--[no-]install", "Install grammars into grammars/ directory.") do |i|
$options[:install] = i
end
opts.on("--output FILE", "Write output to FILE. Use - for stdout.") do |o|
$options[:output] = o == "-" ? $stdout : o
end
opts.on("--[no-]remote", "Download remote grammars.") do |r|
$options[:remote] = r
end
end.parse!
sources = File.open(SOURCES_FILE) do |file|
YAML.load(file)
end
yaml = main(sources)
if $options[:output].is_a?(IO)
$options[:output].write(YAML.dump(yaml))
else
File.write($options[:output], YAML.dump(yaml))
end

12
script/grammar-compiler Executable file
View File

@@ -0,0 +1,12 @@
#!/bin/sh
set -e
cd "$(dirname "$0")/.."
image="linguist/grammar-compiler:latest"
mkdir -p grammars
exec docker run --rm \
-u $(id -u $USER):$(id -g $USER) \
-v $PWD:/src/linguist \
-w /src/linguist $image "$@"

View File

@@ -99,4 +99,8 @@ class GrammarList
end
list = GrammarList.new
list.update_readme()
if ARGV.include? "--print"
puts list.to_markdown
else
list.update_readme
end

View File

@@ -1,60 +0,0 @@
#!/usr/bin/env ruby
require "bundler/setup"
require "json"
require "linguist"
require "set"
require "yaml"
ROOT = File.expand_path("../../", __FILE__)
def find_includes(json)
case json
when Hash
result = []
if inc = json["include"]
result << inc.split("#", 2).first unless inc.start_with?("#", "$")
end
result + json.values.flat_map { |v| find_includes(v) }
when Array
json.flat_map { |v| find_includes(v) }
else
[]
end
end
def transitive_includes(scope, includes)
scopes = Set.new
queue = includes[scope] || []
while s = queue.shift
next if scopes.include?(s)
scopes << s
queue += includes[s] || []
end
scopes
end
includes = {}
Dir[File.join(ROOT, "grammars/*.json")].each do |path|
scope = File.basename(path).sub(/\.json/, '')
json = JSON.load(File.read(path))
incs = find_includes(json)
next if incs.empty?
includes[scope] ||= []
includes[scope] += incs
end
yaml = YAML.load(File.read(File.join(ROOT, "grammars.yml")))
language_scopes = Linguist::Language.all.map(&:tm_scope).to_set
# The set of used scopes is the scopes for each language, plus all the scopes
# they include, transitively.
used_scopes = language_scopes + language_scopes.flat_map { |s| transitive_includes(s, includes).to_a }.to_set
unused = yaml.reject { |repo, scopes| scopes.any? { |scope| used_scopes.include?(scope) } }
puts "Unused grammar repos"
puts unused.map { |repo, scopes| sprintf("%-100s %s", repo, scopes.join(", ")) }.sort.join("\n")
yaml.delete_if { |k| unused.key?(k) }
File.write(File.join(ROOT, "grammars.yml"), YAML.dump(yaml))

8
test/fixtures/Perl/Module.pm vendored Normal file
View File

@@ -0,0 +1,8 @@
use 5.006;
use strict;
=head1
module
=cut

View File

@@ -188,6 +188,17 @@ class TestFileBlob < Minitest::Test
assert fixture_blob("Binary/MainMenu.nib").generated?
assert !sample_blob("XML/project.pbxproj").generated?
# Cocoapods
assert sample_blob('Pods/blah').generated?
assert !sample_blob('My-Pods/blah').generated?
# Carthage
assert sample_blob('Carthage/Build/blah').generated?
assert !sample_blob('Carthage/blah').generated?
assert !sample_blob('Carthage/Checkout/blah').generated?
assert !sample_blob('My-Carthage/Build/blah').generated?
# Gemfile.lock is NOT generated
assert !sample_blob("Gemfile.lock").generated?
@@ -313,8 +324,6 @@ class TestFileBlob < Minitest::Test
assert sample_blob("deps/http_parser/http_parser.c").vendored?
assert sample_blob("deps/v8/src/v8.h").vendored?
assert sample_blob("tools/something/else.c").vendored?
# Chart.js
assert sample_blob("some/vendored/path/Chart.js").vendored?
assert !sample_blob("some/vendored/path/chart.js").vendored?
@@ -490,9 +499,9 @@ class TestFileBlob < Minitest::Test
# Carthage
assert sample_blob('Carthage/blah').vendored?
# Cocoapods
assert sample_blob('Pods/blah').vendored?
assert sample_blob('iOS/Carthage/blah').vendored?
assert !sample_blob('My-Carthage/blah').vendored?
assert !sample_blob('iOS/My-Carthage/blah').vendored?
# Html5shiv
assert sample_blob("Scripts/html5shiv.js").vendored?

View File

@@ -42,6 +42,24 @@ class TestGenerated < Minitest::Test
generated_sample_without_loading_data("Dummy/foo.xcworkspacedata")
generated_sample_without_loading_data("Dummy/foo.xcuserstate")
# Cocoapods
generated_sample_without_loading_data("Pods/Pods.xcodeproj")
generated_sample_without_loading_data("Pods/SwiftDependency/foo.swift")
generated_sample_without_loading_data("Pods/ObjCDependency/foo.h")
generated_sample_without_loading_data("Pods/ObjCDependency/foo.m")
generated_sample_without_loading_data("Dummy/Pods/Pods.xcodeproj")
generated_sample_without_loading_data("Dummy/Pods/SwiftDependency/foo.swift")
generated_sample_without_loading_data("Dummy/Pods/ObjCDependency/foo.h")
generated_sample_without_loading_data("Dummy/Pods/ObjCDependency/foo.m")
# Carthage
generated_sample_without_loading_data("Carthage/Build/.Dependency.version")
generated_sample_without_loading_data("Carthage/Build/iOS/Dependency.framework")
generated_sample_without_loading_data("Carthage/Build/Mac/Dependency.framework")
generated_sample_without_loading_data("src/Carthage/Build/.Dependency.version")
generated_sample_without_loading_data("src/Carthage/Build/iOS/Dependency.framework")
generated_sample_without_loading_data("src/Carthage/Build/Mac/Dependency.framework")
# Go-specific vendored paths
generated_sample_without_loading_data("go/vendor/github.com/foo.go")
generated_sample_without_loading_data("go/vendor/golang.org/src/foo.c")

View File

@@ -23,7 +23,6 @@ class TestGrammars < Minitest::Test
"8653305b358375d0fced85dc24793b99919b11ef", # language-shellscript
"9f0c0b0926a18f5038e455e8df60221125fc3111", # elixir-tmbundle
"a4dadb2374282098c5b8b14df308906f5347d79a", # mako-tmbundle
"b9b24778619dce325b651f0d77cbc72e7ae0b0a3", # Julia.tmbundle
"e06722add999e7428048abcc067cd85f1f7ca71c", # r.tmbundle
"50b14a0e3f03d7ca754dac42ffb33302b5882b78", # smalltalk-tmbundle
"eafbc4a2f283752858e6908907f3c0c90188785b", # gap-tmbundle
@@ -43,6 +42,8 @@ class TestGrammars < Minitest::Test
"82c356d6ecb143a8a20e1658b0d6a2d77ea8126f", # idl.tmbundle
"9dafd4e2a79cb13a6793b93877a254bc4d351e74", # sublime-text-ox
"8e111741d97ba2e27b3d18a309d426b4a37e604f", # sublime-varnish
"23d2538e33ce62d58abda2c039364b92f64ea6bc", # sublime-angelscript
"53714285caad3c480ebd248c490509695d10404b", # atom-language-julia
].freeze
# List of allowed SPDX license names
@@ -90,34 +91,27 @@ class TestGrammars < Minitest::Test
message << unlisted_submodules.sort.join("\n")
end
assert nonexistent_submodules.empty? && unlisted_submodules.empty?, message
assert nonexistent_submodules.empty? && unlisted_submodules.empty?, message.sub(/\.\Z/, "")
end
def test_local_scopes_are_in_sync
actual = YAML.load(`"#{File.join(ROOT, "script", "convert-grammars")}" --output - --no-install --no-remote`)
assert $?.success?, "script/convert-grammars failed"
# We're not checking remote grammars. That can take a long time and make CI
# flaky if network conditions are poor.
@grammars.delete_if { |k, v| k.start_with?("http:", "https:") }
@grammars.each do |k, v|
assert_equal v, actual[k], "The scopes listed for #{k} in grammars.yml don't match the scopes found in that repository"
end
def test_readme_file_is_in_sync
current_data = File.read("#{ROOT}/vendor/README.md").to_s.sub(/\A.+?<!--.+?-->\n/ms, "")
updated_data = `script/list-grammars --print`
assert_equal current_data, updated_data, "Grammar list is out-of-date. Run `script/list-grammars`"
end
def test_submodules_have_recognized_licenses
unrecognized = submodule_licenses.select { |k,v| v.nil? && Licensee::FSProject.new(k).license_file }
unrecognized.reject! { |k,v| PROJECT_WHITELIST.include?(k) }
message = "The following submodules have unrecognized licenses:\n* #{unrecognized.keys.join("\n* ")}\n"
message << "Please ensure that the project's LICENSE file contains the full text of the license."
message << "Please ensure that the project's LICENSE file contains the full text of the license"
assert_equal Hash.new, unrecognized, message
end
def test_submodules_have_licenses
unlicensed = submodule_licenses.select { |k,v| v.nil? }.reject { |k,v| PROJECT_WHITELIST.include?(k) }
message = "The following submodules don't have licenses:\n* #{unlicensed.keys.join("\n* ")}\n"
message << "Please ensure that the project has a LICENSE file, and that the LICENSE file contains the full text of the license."
message << "Please ensure that the project has a LICENSE file, and that the LICENSE file contains the full text of the license"
assert_equal Hash.new, unlicensed, message
end
@@ -127,14 +121,14 @@ class TestGrammars < Minitest::Test
HASH_WHITELIST.include?(v) }
.map { |k,v| "#{k}: #{v}"}
message = "The following submodules have unapproved licenses:\n* #{unapproved.join("\n* ")}\n"
message << "The license must be added to the LICENSE_WHITELIST in /test/test_grammars.rb once approved."
message << "The license must be added to the LICENSE_WHITELIST in /test/test_grammars.rb once approved"
assert_equal [], unapproved, message
end
def test_whitelisted_submodules_dont_have_licenses
licensed = submodule_licenses.reject { |k,v| v.nil? }.select { |k,v| PROJECT_WHITELIST.include?(k) }
message = "The following whitelisted submodules have a license:\n* #{licensed.keys.join("\n* ")}\n"
message << "Please remove them from the project whitelist."
message << "Please remove them from the project whitelist"
assert_equal Hash.new, licensed, message
end
@@ -142,7 +136,7 @@ class TestGrammars < Minitest::Test
used_hashes = submodule_licenses.values.reject { |v| v.nil? || LICENSE_WHITELIST.include?(v) }
unused_hashes = HASH_WHITELIST - used_hashes
message = "The following whitelisted license hashes are unused:\n* #{unused_hashes.join("\n* ")}\n"
message << "Please remove them from the hash whitelist."
message << "Please remove them from the hash whitelist"
assert_equal Array.new, unused_hashes, message
end

View File

@@ -1,6 +1,6 @@
require_relative "./helper"
class TestHeuristcs < Minitest::Test
class TestHeuristics < Minitest::Test
include Linguist
def fixture(name)
@@ -44,6 +44,13 @@ class TestHeuristcs < Minitest::Test
assert_equal Language["Objective-C"], match
end
def test_as_by_heuristics
assert_heuristics({
"ActionScript" => all_fixtures("ActionScript", "*.as"),
"AngelScript" => all_fixtures("AngelScript", "*.as")
})
end
# Candidate languages = ["AGS Script", "AsciiDoc", "Public Key"]
def test_asc_by_heuristics
assert_heuristics({
@@ -230,14 +237,6 @@ class TestHeuristcs < Minitest::Test
})
end
# Candidate languages = ["Pod", "Perl"]
def test_pod_by_heuristics
assert_heuristics({
"Perl" => all_fixtures("Perl", "*.pod"),
"Pod" => all_fixtures("Pod", "*.pod")
})
end
# Candidate languages = ["IDL", "Prolog", "QMake", "INI"]
def test_pro_by_heuristics
assert_heuristics({

View File

@@ -470,5 +470,7 @@ class TestLanguage < Minitest::Test
def test_non_crash_on_comma
assert_nil Language[',']
assert_nil Language.find_by_name(',')
assert_nil Language.find_by_alias(',')
end
end

1
tools/grammars/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
/vendor

35
tools/grammars/Dockerfile Normal file
View File

@@ -0,0 +1,35 @@
FROM golang:1.9.2
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y curl gnupg
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -y nodejs
RUN npm install -g season
RUN apt-get install -y cmake
RUN cd /tmp && git clone https://github.com/vmg/pcre
RUN mkdir -p /tmp/pcre/build && cd /tmp/pcre/build && \
cmake .. \
-DPCRE_SUPPORT_JIT=ON \
-DPCRE_SUPPORT_UTF=ON \
-DPCRE_SUPPORT_UNICODE_PROPERTIES=ON \
-DBUILD_SHARED_LIBS=OFF \
-DCMAKE_C_FLAGS="-fPIC $(EXTRA_PCRE_CFLAGS)" \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DPCRE_BUILD_PCRECPP=OFF \
-DPCRE_BUILD_PCREGREP=OFF \
-DPCRE_BUILD_TESTS=OFF \
-G "Unix Makefiles" && \
make && make install
RUN rm -rf /tmp/pcre
RUN go get -u github.com/golang/dep/cmd/dep
WORKDIR /go/src/github.com/github/linguist/tools/grammars
COPY . .
RUN dep ensure
RUN go install ./cmd/grammar-compiler
ENTRYPOINT ["grammar-compiler"]

51
tools/grammars/Gopkg.lock generated Normal file
View File

@@ -0,0 +1,51 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
branch = "master"
name = "github.com/golang/protobuf"
packages = ["proto"]
revision = "1e59b77b52bf8e4b449a57e6f79f21226d571845"
[[projects]]
branch = "master"
name = "github.com/groob/plist"
packages = ["."]
revision = "7b367e0aa692e62a223e823f3288c0c00f519a36"
[[projects]]
name = "github.com/mattn/go-runewidth"
packages = ["."]
revision = "9e777a8366cce605130a531d2cd6363d07ad7317"
version = "v0.0.2"
[[projects]]
branch = "master"
name = "github.com/mitchellh/mapstructure"
packages = ["."]
revision = "06020f85339e21b2478f756a78e295255ffa4d6a"
[[projects]]
name = "github.com/urfave/cli"
packages = ["."]
revision = "cfb38830724cc34fedffe9a2a29fb54fa9169cd1"
version = "v1.20.0"
[[projects]]
name = "gopkg.in/cheggaaa/pb.v1"
packages = ["."]
revision = "657164d0228d6bebe316fdf725c69f131a50fb10"
version = "v1.0.18"
[[projects]]
branch = "v2"
name = "gopkg.in/yaml.v2"
packages = ["."]
revision = "287cf08546ab5e7e37d55a84f7ed3fd1db036de5"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "ba2e3150d728692b49e3e2d652b6ea23db82777c340e0c432cd4af6f0eef9f55"
solver-name = "gps-cdcl"
solver-version = 1

23
tools/grammars/Gopkg.toml Normal file
View File

@@ -0,0 +1,23 @@
[[constraint]]
branch = "v2"
name = "gopkg.in/yaml.v2"
[[constraint]]
branch = "master"
name = "github.com/groob/plist"
[[constraint]]
branch = "master"
name = "github.com/golang/protobuf"
[[constraint]]
branch = "master"
name = "github.com/mitchellh/mapstructure"
[[constraint]]
name = "gopkg.in/cheggaaa/pb.v1"
version = "1.0.18"
[[constraint]]
name = "github.com/urfave/cli"
version = "1.20.0"

View File

@@ -0,0 +1,120 @@
package main
import (
"os"
"github.com/github/linguist/tools/grammars/compiler"
"github.com/urfave/cli"
)
func cwd() string {
cwd, _ := os.Getwd()
return cwd
}
func wrap(err error) error {
return cli.NewExitError(err, 255)
}
func main() {
app := cli.NewApp()
app.Name = "Linguist Grammars Compiler"
app.Usage = "Compile user-submitted grammars and check them for errors"
app.Flags = []cli.Flag{
cli.StringFlag{
Name: "linguist-path",
Value: cwd(),
Usage: "path to Linguist root",
},
}
app.Commands = []cli.Command{
{
Name: "add",
Usage: "add a new grammar source",
Flags: []cli.Flag{
cli.BoolFlag{
Name: "force, f",
Usage: "ignore compilation errors",
},
},
Action: func(c *cli.Context) error {
conv, err := compiler.NewConverter(c.String("linguist-path"))
if err != nil {
return wrap(err)
}
if err := conv.AddGrammar(c.Args().First()); err != nil {
if !c.Bool("force") {
return wrap(err)
}
}
if err := conv.WriteGrammarList(); err != nil {
return wrap(err)
}
return nil
},
},
{
Name: "update",
Usage: "update grammars.yml with the contents of the grammars library",
Flags: []cli.Flag{
cli.BoolFlag{
Name: "force, f",
Usage: "write grammars.yml even if grammars fail to compile",
},
},
Action: func(c *cli.Context) error {
conv, err := compiler.NewConverter(c.String("linguist-path"))
if err != nil {
return wrap(err)
}
if err := conv.ConvertGrammars(true); err != nil {
return wrap(err)
}
if err := conv.Report(); err != nil {
if !c.Bool("force") {
return wrap(err)
}
}
if err := conv.WriteGrammarList(); err != nil {
return wrap(err)
}
return nil
},
},
{
Name: "compile",
Usage: "convert the grammars from the library",
Flags: []cli.Flag{
cli.StringFlag{Name: "proto-out, P"},
cli.StringFlag{Name: "out, o"},
},
Action: func(c *cli.Context) error {
conv, err := compiler.NewConverter(c.String("linguist-path"))
if err != nil {
return cli.NewExitError(err, 1)
}
if err := conv.ConvertGrammars(false); err != nil {
return cli.NewExitError(err, 1)
}
if out := c.String("proto-out"); out != "" {
if err := conv.WriteProto(out); err != nil {
return cli.NewExitError(err, 1)
}
}
if out := c.String("out"); out != "" {
if err := conv.WriteJSON(out); err != nil {
return cli.NewExitError(err, 1)
}
}
if err := conv.Report(); err != nil {
return wrap(err)
}
return nil
},
},
}
app.Run(os.Args)
}

View File

@@ -0,0 +1,261 @@
package compiler
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path"
"runtime"
"sort"
"strings"
"sync"
grammar "github.com/github/linguist/tools/grammars/proto"
"github.com/golang/protobuf/proto"
pb "gopkg.in/cheggaaa/pb.v1"
yaml "gopkg.in/yaml.v2"
)
type Converter struct {
root string
modified bool
grammars map[string][]string
Loaded map[string]*Repository
progress *pb.ProgressBar
wg sync.WaitGroup
queue chan string
mu sync.Mutex
}
func (conv *Converter) Load(src string) *Repository {
if strings.HasPrefix(src, "http://") || strings.HasPrefix(src, "https://") {
return LoadFromURL(src)
}
return LoadFromFilesystem(conv.root, src)
}
func (conv *Converter) work() {
for source := range conv.queue {
repo := conv.Load(source)
conv.mu.Lock()
conv.Loaded[source] = repo
conv.mu.Unlock()
conv.progress.Increment()
}
conv.wg.Done()
}
func (conv *Converter) tmpScopes() map[string]bool {
scopes := make(map[string]bool)
for _, ary := range conv.grammars {
for _, s := range ary {
scopes[s] = true
}
}
return scopes
}
func (conv *Converter) AddGrammar(source string) error {
repo := conv.Load(source)
if len(repo.Files) == 0 {
return fmt.Errorf("source '%s' contains no grammar files", source)
}
conv.grammars[source] = repo.Scopes()
conv.modified = true
knownScopes := conv.tmpScopes()
repo.FixRules(knownScopes)
if len(repo.Errors) > 0 {
fmt.Fprintf(os.Stderr, "The new grammar %s contains %d errors:\n",
repo, len(repo.Errors))
for _, err := range repo.Errors {
fmt.Fprintf(os.Stderr, " - %s\n", err)
}
fmt.Fprintf(os.Stderr, "\n")
return fmt.Errorf("failed to compile the given grammar")
}
fmt.Printf("OK! added grammar source '%s'\n", source)
for scope := range repo.Files {
fmt.Printf("\tnew scope: %s\n", scope)
}
return nil
}
func (conv *Converter) AllScopes() map[string]bool {
// Map scope -> Repository first so duplicate scopes
// across repositories can be reported as errors
allScopes := make(map[string]*Repository)
for _, repo := range conv.Loaded {
for scope := range repo.Files {
if original := allScopes[scope]; original != nil {
repo.Fail(&DuplicateScopeError{original, scope})
} else {
allScopes[scope] = repo
}
}
}
// Convert to scope -> bool
scopes := make(map[string]bool)
for s := range allScopes {
scopes[s] = true
}
return scopes
}
func (conv *Converter) ConvertGrammars(update bool) error {
conv.Loaded = make(map[string]*Repository)
conv.queue = make(chan string, 128)
conv.progress = pb.New(len(conv.grammars))
conv.progress.Start()
for i := 0; i < runtime.NumCPU(); i++ {
conv.wg.Add(1)
go conv.work()
}
for src := range conv.grammars {
conv.queue <- src
}
close(conv.queue)
conv.wg.Wait()
done := fmt.Sprintf("done! processed %d grammars\n", len(conv.Loaded))
conv.progress.FinishPrint(done)
if update {
conv.grammars = make(map[string][]string)
conv.modified = true
}
knownScopes := conv.AllScopes()
for source, repo := range conv.Loaded {
repo.FixRules(knownScopes)
if update {
conv.grammars[source] = repo.Scopes()
} else {
expected := conv.grammars[source]
repo.CompareScopes(expected)
}
}
return nil
}
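// WriteProto serializes every converted grammar into a single protobuf
// library file at the given path.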
func (conv *Converter) WriteProto(path string) error {
library := grammar.Library{
Grammars: make(map[string]*grammar.Rule),
}
for _, repo := range conv.Loaded {
for scope, file := range repo.Files {
library.Grammars[scope] = file.Rule
}
}
data, err := proto.Marshal(&library)
if err != nil {
return err
}
return ioutil.WriteFile(path, data, 0666)
}
func (conv *Converter) writeJSONFile(path string, rule *grammar.Rule) error {
j, err := os.Create(path)
if err != nil {
return err
}
defer j.Close()
enc := json.NewEncoder(j)
enc.SetIndent("", " ")
return enc.Encode(rule)
}
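// WriteJSON writes one pretty-printed JSON rule file per scope into
// rulePath, creating the directory if necessary.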
func (conv *Converter) WriteJSON(rulePath string) error {
if err := os.MkdirAll(rulePath, os.ModePerm); err != nil {
return err
}
for _, repo := range conv.Loaded {
for scope, file := range repo.Files {
p := path.Join(rulePath, scope+".json")
if err := conv.writeJSONFile(p, file.Rule); err != nil {
return err
}
}
}
return nil
}
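// WriteGrammarList rewrites grammars.yml, but only if the list of sources
// was actually modified by this run.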
func (conv *Converter) WriteGrammarList() error {
if !conv.modified {
return nil
}
outyml, err := yaml.Marshal(conv.grammars)
if err != nil {
return err
}
ymlpath := path.Join(conv.root, "grammars.yml")
return ioutil.WriteFile(ymlpath, outyml, 0666)
}
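// Report prints every repository with errors to stderr as a Markdown task
// list and returns an error carrying the total count, so callers can fail
// the build.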
func (conv *Converter) Report() error {
var failed []*Repository
for _, repo := range conv.Loaded {
if len(repo.Errors) > 0 {
failed = append(failed, repo)
}
}
sort.Slice(failed, func(i, j int) bool {
return failed[i].Source < failed[j].Source
})
total := 0
for _, repo := range failed {
fmt.Fprintf(os.Stderr, "- [ ] %s (%d errors)\n", repo, len(repo.Errors))
for _, err := range repo.Errors {
fmt.Fprintf(os.Stderr, " - [ ] %s\n", err)
}
fmt.Fprintf(os.Stderr, "\n")
total += len(repo.Errors)
}
if total > 0 {
return fmt.Errorf("the grammar library contains %d errors", total)
}
return nil
}
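// NewConverter reads grammars.yml from the given Linguist checkout and
// returns a Converter ready to load the sources listed there.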
func NewConverter(root string) (*Converter, error) {
yml, err := ioutil.ReadFile(path.Join(root, "grammars.yml"))
if err != nil {
return nil, err
}
conv := &Converter{root: root}
if err := yaml.Unmarshal(yml, &conv.grammars); err != nil {
return nil, err
}
return conv, nil
}


@@ -0,0 +1,21 @@
package compiler
import (
"bytes"
"os/exec"
)
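// ConvertCSON converts a CSON grammar to JSON by piping it through the
// external csonc tool, which must be available on $PATH.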
func ConvertCSON(data []byte) ([]byte, error) {
stdin := bytes.NewBuffer(data)
stdout := &bytes.Buffer{}
cmd := exec.Command("csonc")
cmd.Stdin = stdin
cmd.Stdout = stdout
if err := cmd.Run(); err != nil {
return nil, err
}
return stdout.Bytes(), nil
}


@@ -0,0 +1,29 @@
package compiler
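// GrammarAliases maps scope names that grammars commonly include to the
// equivalent scope this library actually ships; an empty value causes the
// include to be dropped.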
var GrammarAliases = map[string]string{
"source.erb": "text.html.erb",
"source.cpp": "source.c++",
"source.less": "source.css.less",
"text.html.markdown": "source.gfm",
"text.md": "source.gfm",
"source.php": "text.html.php",
"text.plain": "",
"source.asciidoc": "text.html.asciidoc",
"source.perl6": "source.perl6fe",
"source.css.scss": "source.scss",
}
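// KnownFields lists grammar keys that are accepted but deliberately
// ignored during conversion; any other unrecognized key is reported as an
// UnknownKeysError.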
var KnownFields = map[string]bool{
"comment": true,
"uuid": true,
"author": true,
"comments": true,
"macros": true,
"fileTypes": true,
"firstLineMatch": true,
"keyEquivalent": true,
"foldingStopMarker": true,
"foldingStartMarker": true,
"foldingEndMarker": true,
"limitLineLength": true,
}


@@ -0,0 +1,85 @@
package compiler
import "fmt"
import "strings"
type ConversionError struct {
Path string
Err error
}
func (err *ConversionError) Error() string {
return fmt.Sprintf(
"Grammar conversion failed. File `%s` failed to parse: %s",
err.Path, err.Err)
}
type DuplicateScopeError struct {
Original *Repository
Duplicate string
}
func (err *DuplicateScopeError) Error() string {
return fmt.Sprintf(
"Duplicate scope in repository: scope `%s` was already defined in %s",
err.Duplicate, err.Original)
}
type MissingScopeError struct {
Scope string
}
func (err *MissingScopeError) Error() string {
return fmt.Sprintf(
"Missing scope in repository: `%s` is listed in grammars.yml but cannot be found",
err.Scope)
}
type UnexpectedScopeError struct {
File *LoadedFile
Scope string
}
func (err *UnexpectedScopeError) Error() string {
return fmt.Sprintf(
"Unexpected scope in repository: `%s` declared in %s was not listed in grammars.yml",
err.Scope, err.File)
}
type MissingIncludeError struct {
File *LoadedFile
Include string
}
func (err *MissingIncludeError) Error() string {
return fmt.Sprintf(
"Missing include in grammar: %s attempts to include `%s` but the scope cannot be found",
err.File, err.Include)
}
type UnknownKeysError struct {
File *LoadedFile
Keys []string
}
func (err *UnknownKeysError) Error() string {
var keys []string
for _, k := range err.Keys {
keys = append(keys, fmt.Sprintf("`%s`", k))
}
return fmt.Sprintf(
"Unknown keys in grammar: %s contains invalid keys (%s)",
err.File, strings.Join(keys, ", "))
}
type InvalidRegexError struct {
File *LoadedFile
Err error
}
func (err *InvalidRegexError) Error() string {
return fmt.Sprintf(
"Invalid regex in grammar: %s contains a malformed regex (%s)",
err.File, err.Err)
}


@@ -0,0 +1,124 @@
package compiler
import (
"fmt"
"os"
"path/filepath"
"sort"
"strings"
grammar "github.com/github/linguist/tools/grammars/proto"
)
type LoadedFile struct {
Path string
Rule *grammar.Rule
}
func (f *LoadedFile) String() string {
return fmt.Sprintf("`%s` (in `%s`)", f.Rule.ScopeName, f.Path)
}
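// Repository represents one grammar source (a local directory or a URL)
// together with the grammar files loaded from it and any errors found
// along the way.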
type Repository struct {
Source string
Upstream string
Files map[string]*LoadedFile
Errors []error
}
func newRepository(src string) *Repository {
return &Repository{
Source: src,
Files: make(map[string]*LoadedFile),
}
}
func (repo *Repository) String() string {
str := fmt.Sprintf("repository `%s`", repo.Source)
if repo.Upstream != "" {
str = str + fmt.Sprintf(" (from %s)", repo.Upstream)
}
return str
}
func (repo *Repository) Fail(err error) {
repo.Errors = append(repo.Errors, err)
}
func (repo *Repository) AddFile(path string, rule *grammar.Rule, uk []string) {
file := &LoadedFile{
Path: path,
Rule: rule,
}
repo.Files[rule.ScopeName] = file
if len(uk) > 0 {
repo.Fail(&UnknownKeysError{file, uk})
}
}
func toMap(slice []string) map[string]bool {
m := make(map[string]bool)
for _, s := range slice {
m[s] = true
}
return m
}
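// CompareScopes cross-checks the loaded files against the scope list from
// grammars.yml, reporting scopes that are unexpected or missing.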
func (repo *Repository) CompareScopes(scopes []string) {
expected := toMap(scopes)
for scope, file := range repo.Files {
if !expected[scope] {
repo.Fail(&UnexpectedScopeError{file, scope})
}
}
for scope := range expected {
if _, ok := repo.Files[scope]; !ok {
repo.Fail(&MissingScopeError{scope})
}
}
}
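// FixRules walks every rule of every loaded file, clearing includes of
// unknown scopes and validating regexes; the walker's errors are merged
// into the repository's.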
func (repo *Repository) FixRules(knownScopes map[string]bool) {
for _, file := range repo.Files {
w := walker{
File: file,
Known: knownScopes,
Missing: make(map[string]bool),
}
w.walk(file.Rule)
repo.Errors = append(repo.Errors, w.Errors...)
}
}
func (repo *Repository) Scopes() (scopes []string) {
for s := range repo.Files {
scopes = append(scopes, s)
}
sort.Strings(scopes)
return
}
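// isValidGrammar reports whether a path looks like a grammar definition,
// following each editor's conventions: .plist files under Syntaxes/,
// .cson and .json files under grammars/, and .tmLanguage variants
// anywhere.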
func isValidGrammar(path string, info os.FileInfo) bool {
if info.IsDir() {
return false
}
dir := filepath.Dir(path)
ext := filepath.Ext(path)
switch strings.ToLower(ext) {
case ".plist":
return strings.HasSuffix(dir, "/Syntaxes")
case ".tmlanguage", ".yaml-tmlanguage":
return true
case ".cson", ".json":
return strings.HasSuffix(dir, "/grammars")
default:
return false
}
}


@@ -0,0 +1,80 @@
package compiler
import (
"io/ioutil"
"os"
"os/exec"
"path"
"path/filepath"
"strings"
)
type fsLoader struct {
*Repository
abspath string
}
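// findGrammars walks the repository tree and collects every file that
// looks like a grammar definition; errors on individual entries are
// skipped.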
func (l *fsLoader) findGrammars() (files []string, err error) {
err = filepath.Walk(l.abspath,
func(path string, info os.FileInfo, err error) error {
if err == nil && isValidGrammar(path, info) {
files = append(files, path)
}
return nil
})
return
}
func (l *fsLoader) load() {
grammars, err := l.findGrammars()
if err != nil {
l.Fail(err)
return
}
for _, path := range grammars {
data, err := ioutil.ReadFile(path)
if err != nil {
l.Fail(err)
continue
}
if rel, err := filepath.Rel(l.abspath, path); err == nil {
path = rel
}
rule, unknown, err := ConvertProto(filepath.Ext(path), data)
if err != nil {
l.Fail(&ConversionError{path, err})
continue
}
if _, ok := l.Files[rule.ScopeName]; ok {
continue
}
l.AddFile(path, rule, unknown)
}
}
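// gitRemoteName returns the URL of the repository's origin remote, used to
// link error reports back to the upstream project.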
func gitRemoteName(path string) (string, error) {
remote, err := exec.Command("git", "-C", path, "remote", "get-url", "origin").Output()
if err != nil {
return "", err
}
return strings.TrimSpace(string(remote)), nil
}
func LoadFromFilesystem(root, src string) *Repository {
loader := fsLoader{
Repository: newRepository(src),
abspath: path.Join(root, src),
}
loader.load()
if ups, err := gitRemoteName(loader.abspath); err == nil {
loader.Repository.Upstream = ups
}
return loader.Repository
}


@@ -0,0 +1,93 @@
package compiler
import (
"archive/tar"
"compress/gzip"
"io"
"io/ioutil"
"net/http"
"path/filepath"
"strings"
)
type urlLoader struct {
*Repository
}
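// loadTarball streams a gzipped tarball, converting every grammar file it
// contains; entries whose scope was already seen are skipped.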
func (l *urlLoader) loadTarball(r io.Reader) {
gzf, err := gzip.NewReader(r)
if err != nil {
l.Fail(err)
return
}
defer gzf.Close()
tarReader := tar.NewReader(gzf)
for {
header, err := tarReader.Next()
if err != nil {
if err != io.EOF {
l.Fail(err)
}
return
}
if isValidGrammar(header.Name, header.FileInfo()) {
data, err := ioutil.ReadAll(tarReader)
if err != nil {
l.Fail(err)
return
}
ext := filepath.Ext(header.Name)
rule, unknown, err := ConvertProto(ext, data)
if err != nil {
l.Fail(&ConversionError{header.Name, err})
continue
}
if _, ok := l.Files[rule.ScopeName]; ok {
continue
}
l.AddFile(header.Name, rule, unknown)
}
}
}
func (l *urlLoader) load() {
res, err := http.Get(l.Source)
if err != nil {
l.Fail(err)
return
}
defer res.Body.Close()
if strings.HasSuffix(l.Source, ".tar.gz") {
l.loadTarball(res.Body)
return
}
data, err := ioutil.ReadAll(res.Body)
if err != nil {
l.Fail(err)
return
}
ext := filepath.Ext(l.Source)
filename := filepath.Base(l.Source)
rule, unknown, err := ConvertProto(ext, data)
if err != nil {
l.Fail(&ConversionError{filename, err})
return
}
l.AddFile(filename, rule, unknown)
}
func LoadFromURL(src string) *Repository {
loader := urlLoader{newRepository(src)}
loader.load()
return loader.Repository
}


@@ -0,0 +1,68 @@
package compiler
import (
"fmt"
"github.com/github/linguist/tools/grammars/pcre"
)
type replacement struct {
pos int
len int
val string
}
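// fixRegex rewrites the Oniguruma hex-digit escape \h, which PCRE would
// instead read as horizontal whitespace, into the portable [[:xdigit:]]
// class. It also reports whether the pattern uses numeric backreferences.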
func fixRegex(re string) (string, bool) {
var (
replace []replacement
escape = false
hasBackRefs = false
)
for i, ch := range re {
if escape {
if ch == 'h' {
replace = append(replace, replacement{i - 1, 2, "[[:xdigit:]]"})
}
if '0' <= ch && ch <= '9' {
hasBackRefs = true
}
}
escape = !escape && ch == '\\'
}
if len(replace) > 0 {
reb := []byte(re)
offset := 0
for _, repl := range replace {
reb = append(
reb[:offset+repl.pos],
append([]byte(repl.val), reb[offset+repl.pos+repl.len:]...)...)
offset += len(repl.val) - repl.len
}
return string(reb), hasBackRefs
}
return re, hasBackRefs
}
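// CheckPCRE normalizes a grammar regex via fixRegex and compiles it with
// PCRE to catch malformed patterns. Patterns with backreferences are not
// compiled, and definitions longer than 32KB are rejected outright.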
func CheckPCRE(re string) (string, error) {
hasBackRefs := false
if re == "" {
return "", nil
}
if len(re) > 32*1024 {
return "", fmt.Errorf(
"regex %s: definition too long (%d bytes)",
pcre.RegexPP(re), len(re))
}
re, hasBackRefs = fixRegex(re)
if !hasBackRefs {
if err := pcre.CheckRegexp(re, pcre.DefaultFlags); err != nil {
return "", err
}
}
return re, nil
}


@@ -0,0 +1,27 @@
package compiler
import (
"testing"
)
func Test_fixRegex(t *testing.T) {
tests := []struct {
re string
want string
}{
{"foobar", "foobar"},
{`testing\h`, "testing[[:xdigit:]]"},
{`\htest`, `[[:xdigit:]]test`},
{`abc\hdef`, `abc[[:xdigit:]]def`},
{`\\\htest`, `\\[[:xdigit:]]test`},
{`\\htest`, `\\htest`},
{`\h\h\h\h`, `[[:xdigit:]][[:xdigit:]][[:xdigit:]][[:xdigit:]]`},
{`abc\hdef\hghi\h`, `abc[[:xdigit:]]def[[:xdigit:]]ghi[[:xdigit:]]`},
}
for _, tt := range tests {
got, _ := fixRegex(tt.re)
if got != tt.want {
t.Errorf("fixRegex() got = %v, want %v", got, tt.want)
}
}
}


@@ -0,0 +1,96 @@
package compiler
import (
"encoding/json"
"fmt"
"reflect"
"strings"
grammar "github.com/github/linguist/tools/grammars/proto"
"github.com/groob/plist"
"github.com/mitchellh/mapstructure"
yaml "gopkg.in/yaml.v2"
)
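// looseDecoder is a mapstructure decode hook that coerces the loosely
// typed booleans found in real grammar files (numbers, or the strings "0"
// and "1") into proper Go bools.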
func looseDecoder(f reflect.Kind, t reflect.Kind, data interface{}) (interface{}, error) {
dataVal := reflect.ValueOf(data)
switch t {
case reflect.Bool:
switch f {
case reflect.Bool:
return dataVal.Bool(), nil
case reflect.Float32, reflect.Float64:
return (int(dataVal.Float()) != 0), nil
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return (dataVal.Int() != 0), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return (dataVal.Uint() != 0), nil
case reflect.String:
switch dataVal.String() {
case "1":
return true, nil
case "0":
return false, nil
}
}
}
return data, nil
}
func filterUnusedKeys(keys []string) (out []string) {
for _, k := range keys {
parts := strings.Split(k, ".")
field := parts[len(parts)-1]
if !KnownFields[field] {
out = append(out, k)
}
}
return
}
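// ConvertProto parses a grammar file in any of the supported formats
// (plist, YAML, CSON or JSON, selected by extension) into the protobuf
// Rule type, also returning any unrecognized keys it encountered.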
func ConvertProto(ext string, data []byte) (*grammar.Rule, []string, error) {
var (
raw map[string]interface{}
out grammar.Rule
err error
md mapstructure.Metadata
)
switch strings.ToLower(ext) {
case ".plist", ".tmlanguage":
err = plist.Unmarshal(data, &raw)
case ".yaml-tmlanguage":
err = yaml.Unmarshal(data, &raw)
case ".cson":
data, err = ConvertCSON(data)
if err == nil {
err = json.Unmarshal(data, &raw)
}
case ".json":
err = json.Unmarshal(data, &raw)
default:
err = fmt.Errorf("grammars: unsupported extension '%s'", ext)
}
if err != nil {
return nil, nil, err
}
config := mapstructure.DecoderConfig{
Result: &out,
Metadata: &md,
DecodeHook: looseDecoder,
}
decoder, err := mapstructure.NewDecoder(&config)
if err != nil {
return nil, nil, err
}
if err := decoder.Decode(raw); err != nil {
return nil, nil, err
}
return &out, filterUnusedKeys(md.Unused), nil
}


@@ -0,0 +1,79 @@
package compiler
import (
"strings"
grammar "github.com/github/linguist/tools/grammars/proto"
)
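// checkInclude validates a rule's include directive: local references
// (starting with '#' or '$') pass through untouched, aliased scopes are
// rewritten via GrammarAliases, and includes of unknown scopes are cleared
// and reported once per missing scope.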
func (w *walker) checkInclude(rule *grammar.Rule) {
include := rule.Include
if include == "" || include[0] == '#' || include[0] == '$' {
return
}
if alias, ok := GrammarAliases[include]; ok {
rule.Include = alias
return
}
include = strings.Split(include, "#")[0]
ok := w.Known[include]
if !ok {
if !w.Missing[include] {
w.Missing[include] = true
w.Errors = append(w.Errors, &MissingIncludeError{w.File, include})
}
rule.Include = ""
}
}
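// checkRegexps validates the four regex-bearing fields of a rule,
// replacing each with its normalized form.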
func (w *walker) checkRegexps(rule *grammar.Rule) {
check := func(re string) string {
re2, err := CheckPCRE(re)
if err != nil {
w.Errors = append(w.Errors, &InvalidRegexError{w.File, err})
}
return re2
}
rule.Match = check(rule.Match)
rule.Begin = check(rule.Begin)
rule.While = check(rule.While)
rule.End = check(rule.End)
}
func (w *walker) walk(rule *grammar.Rule) {
w.checkInclude(rule)
w.checkRegexps(rule)
for _, rule := range rule.Patterns {
w.walk(rule)
}
for _, rule := range rule.Captures {
w.walk(rule)
}
for _, rule := range rule.BeginCaptures {
w.walk(rule)
}
for _, rule := range rule.WhileCaptures {
w.walk(rule)
}
for _, rule := range rule.EndCaptures {
w.walk(rule)
}
for _, rule := range rule.Repository {
w.walk(rule)
}
for _, rule := range rule.Injections {
w.walk(rule)
}
}
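// walker carries the state for one file's traversal: the set of known
// scopes, the includes already reported missing, and the errors collected
// so far.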
type walker struct {
File *LoadedFile
Known map[string]bool
Missing map[string]bool
Errors []error
}

tools/grammars/docker/build Executable file

@@ -0,0 +1,11 @@
#!/bin/sh
set -ex
cd "$(dirname "$0")/.."
image=linguist/grammar-compiler
docker build -t $image .
if [ "$1" = "--push" ]; then
docker push $image
fi


@@ -0,0 +1,53 @@
package pcre
/*
#cgo LDFLAGS: -lpcre
#include <stdlib.h>
#include <pcre.h>
*/
import "C"
import (
"fmt"
"strings"
"unsafe"
)
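// RegexPP pretty-prints a regex for error messages: newlines are stripped
// and patterns longer than 32 characters are truncated.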
func RegexPP(re string) string {
if len(re) > 32 {
re = fmt.Sprintf("\"`%s`...\"", re[:32])
} else {
re = fmt.Sprintf("\"`%s`\"", re)
}
return strings.Replace(re, "\n", "", -1)
}
type CompileError struct {
Pattern string
Message string
Offset int
}
func (e *CompileError) Error() string {
return fmt.Sprintf("regex %s: %s (at offset %d)",
RegexPP(e.Pattern), e.Message, e.Offset)
}
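// DefaultFlags are the PCRE options used when validating grammar regexes:
// duplicate group names are allowed, input is UTF-8, and CR, LF or CRLF
// all count as newlines.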
const DefaultFlags = int(C.PCRE_DUPNAMES | C.PCRE_UTF8 | C.PCRE_NEWLINE_ANYCRLF)
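// CheckRegexp compiles the pattern with the PCRE C library, returning a
// CompileError describing the first problem found, or nil if it compiles.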
func CheckRegexp(pattern string, flags int) error {
pattern1 := C.CString(pattern)
defer C.free(unsafe.Pointer(pattern1))
var errptr *C.char
var erroffset C.int
ptr := C.pcre_compile(pattern1, C.int(flags), &errptr, &erroffset, nil)
if ptr == nil {
return &CompileError{
Pattern: pattern,
Message: C.GoString(errptr),
Offset: int(erroffset),
}
}
C.free(unsafe.Pointer(ptr))
return nil
}

Some files were not shown because too many files have changed in this diff.