Compare commits

..

5 Commits

Author SHA1 Message Date
Seppe Stas
8f86998866 Add some linguist-detectable attributes (#3806)
This allows testing a use case where Markdown should be reported
in the language stats (e.g. because the main purpose of the repo is to
hold prose in Markdown format) while HTML should be excluded from the
language stats (e.g. because it is example output).
2017-09-19 18:02:05 +01:00
Adam Roben
d4c8fb8a28 Add some linguist-documentation attributes 2015-02-12 10:11:53 -05:00
Arfon Smith
351c1cc8fd Test fixture 2014-09-29 16:25:50 -05:00
Arfon Smith
7ee006cbcb Update .gitattributes
Adding some additional attributes for testing.
2014-09-29 15:05:20 -05:00
The rugged tests are fragile
525304738e Fake Gitattributes for testing 2014-09-11 13:26:26 +02:00
1,961 changed files with 86,999 additions and 323,419 deletions

9
.gitattributes vendored

@@ -0,0 +1,9 @@
Gemfile linguist-vendored=true
lib/linguist.rb linguist-language=Java
test/*.rb linguist-language=Java
Rakefile linguist-generated
test/fixtures/* linguist-vendored=false
README.md linguist-documentation=false
samples/Arduino/* linguist-documentation
samples/Markdown/*.md linguist-detectable=true
samples/HTML/*.html linguist-detectable=false
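A quick way to confirm how these overrides resolve is `git check-attr`; a minimal sketch, assuming the attributes above are committed (the sample paths below are hypothetical):

    # Ask Git which linguist-* attributes apply to a few paths
    git check-attr linguist-detectable linguist-documentation linguist-vendored -- samples/Markdown/example.md samples/HTML/example.html Gemfile

Each output line has the form `<path>: <attribute>: <value>`, which is the value Linguist reads when applying these overrides.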


@@ -1,26 +0,0 @@
<!--- Provide a general summary of the issue in the Title above -->
## Preliminary Steps
Please confirm you have...
- [ ] reviewed [How Linguist Works](https://github.com/github/linguist#how-linguist-works),
- [ ] reviewed the [Troubleshooting](https://github.com/github/linguist#troubleshooting) docs,
- [ ] considered implementing an [override](https://github.com/github/linguist#overrides),
- [ ] verified an issue has not already been logged for your issue ([linguist issues](https://github.com/issues?utf8=%E2%9C%93&q=is%3Aissue+repo%3Agithub/linguist)).
<!-- Please review these preliminary steps before logging your issue. You may find the information referenced may answer or explain the behaviour you are seeing. It'll help us to know you've reviewed this information. -->
## Problem Description
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
### URL of the affected repository:
### Last modified on:
<!-- YYYY-MM-DD -->
### Expected language:
<!-- expected language -->
### Detected language:
<!-- detected language -->


@@ -1,46 +0,0 @@
<!--- Briefly describe what you're changing. -->
## Description
<!--- If necessary, go into depth of what this pull request is doing. -->
## Checklist:
<!--- Go over all the following points, and put an `x` in all the boxes that apply. -->
<!--- If you're unsure about any of these, don't hesitate to ask. We're here to help! -->
- [ ] **I am associating a language with a new file extension.**
- [ ] The new extension is used in hundreds of repositories on GitHub.com
- Search results for each extension:
<!-- Replace FOOBAR with the new extension, and KEYWORDS with keywords unique to the language. Repeat for each extension added. -->
- https://github.com/search?utf8=%E2%9C%93&type=Code&ref=searchresults&q=extension%3AFOOBAR+KEYWORDS+NOT+nothack
- [ ] I have included a real-world usage sample for all extensions added in this PR:
- Sample source(s):
- [URL to each sample source, if applicable]
- Sample license(s):
- [ ] I have included a change to the heuristics to distinguish my language from others using the same extension.
- [ ] **I am adding a new language.**
- [ ] The extension of the new language is used in hundreds of repositories on GitHub.com.
- Search results for each extension:
<!-- Replace FOOBAR with the new extension, and KEYWORDS with keywords unique to the language. Repeat for each extension added. -->
- https://github.com/search?utf8=%E2%9C%93&type=Code&ref=searchresults&q=extension%3AFOOBAR+KEYWORDS+NOT+nothack
- [ ] I have included a real-world usage sample for all extensions added in this PR:
- Sample source(s):
- [URL to each sample source, if applicable]
- Sample license(s):
- [ ] I have included a syntax highlighting grammar.
- [ ] I have included a change to the heuristics to distinguish my language from others using the same extension.
- [ ] **I am fixing a misclassified language**
- [ ] I have included a new sample for the misclassified language:
- Sample source(s):
- [URL to each sample source, if applicable]
- Sample license(s):
- [ ] I have included a change to the heuristics to distinguish my language from others using the same extension.
- [ ] **I am changing the source of a syntax highlighting grammar**
<!-- Update the Lightshow URLs below to show the new and old grammars in action. -->
- Old: https://github-lightshow.herokuapp.com/
- New: https://github-lightshow.herokuapp.com/
- [ ] **I am adding new or changing current functionality**
<!-- This includes modifying the vendor, documentation, and generated lists. -->
- [ ] I have added or updated the tests for the new or changed functionality.

14
.gitignore vendored

@@ -1,13 +1,3 @@
*.gem
/Gemfile.lock
Gemfile.lock
.bundle/
.idea
benchmark/
lib/linguist/samples.json
/grammars
/node_modules
test/fixtures/ace_modes.json
/vendor/gems/
/tmp
*.bundle
*.so
vendor/

900
.gitmodules vendored

@@ -1,900 +0,0 @@
[submodule "vendor/grammars/go-tmbundle"]
path = vendor/grammars/go-tmbundle
url = https://github.com/AlanQuatermain/go-tmbundle
[submodule "vendor/grammars/PHP-Twig.tmbundle"]
path = vendor/grammars/PHP-Twig.tmbundle
url = https://github.com/Anomareh/PHP-Twig.tmbundle
[submodule "vendor/grammars/sublime-cirru"]
path = vendor/grammars/sublime-cirru
url = https://github.com/Cirru/sublime-cirru
[submodule "vendor/grammars/SublimeBrainfuck"]
path = vendor/grammars/SublimeBrainfuck
url = https://github.com/Drako/SublimeBrainfuck
[submodule "vendor/grammars/awk-sublime"]
path = vendor/grammars/awk-sublime
url = https://github.com/github-linguist/awk-sublime
[submodule "vendor/grammars/Sublime-SQF-Language"]
path = vendor/grammars/Sublime-SQF-Language
url = https://github.com/JonBons/Sublime-SQF-Language
[submodule "vendor/grammars/SCSS.tmbundle"]
path = vendor/grammars/SCSS.tmbundle
url = https://github.com/MarioRicalde/SCSS.tmbundle
[submodule "vendor/grammars/Sublime-REBOL"]
path = vendor/grammars/Sublime-REBOL
url = https://github.com/Oldes/Sublime-REBOL
[submodule "vendor/grammars/language-viml"]
path = vendor/grammars/language-viml
url = https://github.com/Alhadis/language-viml
[submodule "vendor/grammars/ColdFusion"]
path = vendor/grammars/ColdFusion
url = https://github.com/SublimeText/ColdFusion
[submodule "vendor/grammars/NSIS"]
path = vendor/grammars/NSIS
url = https://github.com/github-linguist/NSIS
[submodule "vendor/grammars/NimLime"]
path = vendor/grammars/NimLime
url = https://github.com/Varriount/NimLime
[submodule "vendor/grammars/gradle.tmbundle"]
path = vendor/grammars/gradle.tmbundle
url = https://github.com/alkemist/gradle.tmbundle
[submodule "vendor/grammars/Sublime-Loom"]
path = vendor/grammars/Sublime-Loom
url = https://github.com/ambethia/Sublime-Loom
[submodule "vendor/grammars/VBDotNetSyntax"]
path = vendor/grammars/VBDotNetSyntax
url = https://github.com/angryant0007/VBDotNetSyntax
[submodule "vendor/grammars/cool-tmbundle"]
path = vendor/grammars/cool-tmbundle
url = https://github.com/anunayk/cool-tmbundle
[submodule "vendor/grammars/Docker.tmbundle"]
path = vendor/grammars/Docker.tmbundle
url = https://github.com/asbjornenge/Docker.tmbundle
[submodule "vendor/grammars/jasmin-sublime"]
path = vendor/grammars/jasmin-sublime
url = https://github.com/atmarksharp/jasmin-sublime
[submodule "vendor/grammars/language-clojure"]
path = vendor/grammars/language-clojure
url = https://github.com/atom/language-clojure
[submodule "vendor/grammars/language-coffee-script"]
path = vendor/grammars/language-coffee-script
url = https://github.com/atom/language-coffee-script
[submodule "vendor/grammars/language-csharp"]
path = vendor/grammars/language-csharp
url = https://github.com/atom/language-csharp
[submodule "vendor/grammars/language-gfm"]
path = vendor/grammars/language-gfm
url = https://github.com/atom/language-gfm
[submodule "vendor/grammars/language-javascript"]
path = vendor/grammars/language-javascript
url = https://github.com/atom/language-javascript
[submodule "vendor/grammars/language-shellscript"]
path = vendor/grammars/language-shellscript
url = https://github.com/atom/language-shellscript
[submodule "vendor/grammars/language-supercollider"]
path = vendor/grammars/language-supercollider
url = https://github.com/supercollider/language-supercollider
[submodule "vendor/grammars/language-yaml"]
path = vendor/grammars/language-yaml
url = https://github.com/atom/language-yaml
[submodule "vendor/grammars/Sublime-Lasso"]
path = vendor/grammars/Sublime-Lasso
url = https://github.com/bfad/Sublime-Lasso
[submodule "vendor/grammars/chapel-tmbundle"]
path = vendor/grammars/chapel-tmbundle
url = https://github.com/chapel-lang/chapel-tmbundle
[submodule "vendor/grammars/sublime-nginx"]
path = vendor/grammars/sublime-nginx
url = https://github.com/brandonwamboldt/sublime-nginx
[submodule "vendor/grammars/bro-sublime"]
path = vendor/grammars/bro-sublime
url = https://github.com/bro/bro-sublime
[submodule "vendor/grammars/sublime-MuPAD"]
path = vendor/grammars/sublime-MuPAD
url = https://github.com/ccreutzig/sublime-MuPAD
[submodule "vendor/grammars/haxe-sublime-bundle"]
path = vendor/grammars/haxe-sublime-bundle
url = https://github.com/clemos/haxe-sublime-bundle
[submodule "vendor/grammars/cucumber-tmbundle"]
path = vendor/grammars/cucumber-tmbundle
url = https://github.com/cucumber/cucumber-tmbundle
[submodule "vendor/grammars/powershell"]
path = vendor/grammars/powershell
url = https://github.com/SublimeText/PowerShell
[submodule "vendor/grammars/jade-tmbundle"]
path = vendor/grammars/jade-tmbundle
url = https://github.com/davidrios/jade-tmbundle
[submodule "vendor/grammars/elixir-tmbundle"]
path = vendor/grammars/elixir-tmbundle
url = https://github.com/elixir-lang/elixir-tmbundle
[submodule "vendor/grammars/sublime-glsl"]
path = vendor/grammars/sublime-glsl
url = https://github.com/euler0/sublime-glsl
[submodule "vendor/grammars/fancy-tmbundle"]
path = vendor/grammars/fancy-tmbundle
url = https://github.com/fancy-lang/fancy-tmbundle
[submodule "vendor/grammars/sublimetext-cuda-cpp"]
path = vendor/grammars/sublimetext-cuda-cpp
url = https://github.com/harrism/sublimetext-cuda-cpp
[submodule "vendor/grammars/pike-textmate"]
path = vendor/grammars/pike-textmate
url = https://github.com/hww3/pike-textmate
[submodule "vendor/grammars/ceylon-sublimetext"]
path = vendor/grammars/ceylon-sublimetext
url = https://github.com/jeancharles-roger/ceylon-sublimetext
[submodule "vendor/grammars/Sublime-Text-2-OpenEdge-ABL"]
path = vendor/grammars/Sublime-Text-2-OpenEdge-ABL
url = https://github.com/jfairbank/Sublime-Text-2-OpenEdge-ABL
[submodule "vendor/grammars/sublime-befunge"]
path = vendor/grammars/sublime-befunge
url = https://github.com/johanasplund/sublime-befunge
[submodule "vendor/grammars/RDoc.tmbundle"]
path = vendor/grammars/RDoc.tmbundle
url = https://github.com/joshaven/RDoc.tmbundle
[submodule "vendor/grammars/Textmate-Gosu-Bundle"]
path = vendor/grammars/Textmate-Gosu-Bundle
url = https://github.com/jpcamara/Textmate-Gosu-Bundle
[submodule "vendor/grammars/fish-tmbundle"]
path = vendor/grammars/fish-tmbundle
url = https://github.com/l15n/fish-tmbundle
[submodule "vendor/grammars/moonscript-tmbundle"]
path = vendor/grammars/moonscript-tmbundle
url = https://github.com/leafo/moonscript-tmbundle
[submodule "vendor/grammars/Isabelle.tmbundle"]
path = vendor/grammars/Isabelle.tmbundle
url = https://github.com/lsf37/Isabelle.tmbundle
[submodule "vendor/grammars/Alloy.tmbundle"]
path = vendor/grammars/Alloy.tmbundle
url = https://github.com/macekond/Alloy.tmbundle
[submodule "vendor/grammars/opa.tmbundle"]
path = vendor/grammars/opa.tmbundle
url = https://github.com/mads379/opa.tmbundle
[submodule "vendor/grammars/scala.tmbundle"]
path = vendor/grammars/scala.tmbundle
url = https://github.com/mads379/scala.tmbundle
[submodule "vendor/grammars/mako-tmbundle"]
path = vendor/grammars/mako-tmbundle
url = https://github.com/marconi/mako-tmbundle
[submodule "vendor/grammars/gnuplot-tmbundle"]
path = vendor/grammars/gnuplot-tmbundle
url = https://github.com/mattfoster/gnuplot-tmbundle
[submodule "vendor/grammars/idl.tmbundle"]
path = vendor/grammars/idl.tmbundle
url = https://github.com/mgalloy/idl.tmbundle
[submodule "vendor/grammars/protobuf-tmbundle"]
path = vendor/grammars/protobuf-tmbundle
url = https://github.com/michaeledgar/protobuf-tmbundle
[submodule "vendor/grammars/Sublime-Coq"]
path = vendor/grammars/Sublime-Coq
url = https://github.com/mkolosick/Sublime-Coq
[submodule "vendor/grammars/Agda.tmbundle"]
path = vendor/grammars/Agda.tmbundle
url = https://github.com/mokus0/Agda.tmbundle
[submodule "vendor/grammars/ooc.tmbundle"]
path = vendor/grammars/ooc.tmbundle
url = https://github.com/nilium/ooc.tmbundle
[submodule "vendor/grammars/LiveScript.tmbundle"]
path = vendor/grammars/LiveScript.tmbundle
url = https://github.com/paulmillr/LiveScript.tmbundle
[submodule "vendor/grammars/sublime-tea"]
path = vendor/grammars/sublime-tea
url = https://github.com/pferruggiaro/sublime-tea
[submodule "vendor/grammars/abap.tmbundle"]
path = vendor/grammars/abap.tmbundle
url = https://github.com/pvl/abap.tmbundle
[submodule "vendor/grammars/mercury-tmlanguage"]
path = vendor/grammars/mercury-tmlanguage
url = https://github.com/sebgod/mercury-tmlanguage
[submodule "vendor/grammars/mathematica-tmbundle"]
path = vendor/grammars/mathematica-tmbundle
url = https://github.com/shadanan/mathematica-tmbundle
[submodule "vendor/grammars/sublime-robot-plugin"]
path = vendor/grammars/sublime-robot-plugin
url = https://github.com/shellderp/sublime-robot-plugin
[submodule "vendor/grammars/Sublime-QML"]
path = vendor/grammars/Sublime-QML
url = https://github.com/skozlovf/Sublime-QML
[submodule "vendor/grammars/Slash.tmbundle"]
path = vendor/grammars/Slash.tmbundle
url = https://github.com/slash-lang/Slash.tmbundle
[submodule "vendor/grammars/factor"]
path = vendor/grammars/factor
url = https://github.com/slavapestov/factor
[submodule "vendor/grammars/ruby-slim.tmbundle"]
path = vendor/grammars/ruby-slim.tmbundle
url = https://github.com/slim-template/ruby-slim.tmbundle
[submodule "vendor/grammars/SublimeXtend"]
path = vendor/grammars/SublimeXtend
url = https://github.com/staltz/SublimeXtend
[submodule "vendor/grammars/Vala-TMBundle"]
path = vendor/grammars/Vala-TMBundle
url = https://github.com/technosophos/Vala-TMBundle
[submodule "vendor/grammars/ant.tmbundle"]
path = vendor/grammars/ant.tmbundle
url = https://github.com/textmate/ant.tmbundle
[submodule "vendor/grammars/antlr.tmbundle"]
path = vendor/grammars/antlr.tmbundle
url = https://github.com/textmate/antlr.tmbundle
[submodule "vendor/grammars/apache.tmbundle"]
path = vendor/grammars/apache.tmbundle
url = https://github.com/textmate/apache.tmbundle
[submodule "vendor/grammars/applescript.tmbundle"]
path = vendor/grammars/applescript.tmbundle
url = https://github.com/textmate/applescript.tmbundle
[submodule "vendor/grammars/asp.tmbundle"]
path = vendor/grammars/asp.tmbundle
url = https://github.com/textmate/asp.tmbundle
[submodule "vendor/grammars/bison.tmbundle"]
path = vendor/grammars/bison.tmbundle
url = https://github.com/textmate/bison.tmbundle
[submodule "vendor/grammars/capnproto.tmbundle"]
path = vendor/grammars/capnproto.tmbundle
url = https://github.com/textmate/capnproto.tmbundle
[submodule "vendor/grammars/cmake.tmbundle"]
path = vendor/grammars/cmake.tmbundle
url = https://github.com/textmate/cmake.tmbundle
[submodule "vendor/grammars/cpp-qt.tmbundle"]
path = vendor/grammars/cpp-qt.tmbundle
url = https://github.com/textmate/cpp-qt.tmbundle
[submodule "vendor/grammars/d.tmbundle"]
path = vendor/grammars/d.tmbundle
url = https://github.com/textmate/d.tmbundle
[submodule "vendor/grammars/diff.tmbundle"]
path = vendor/grammars/diff.tmbundle
url = https://github.com/textmate/diff.tmbundle
[submodule "vendor/grammars/dylan.tmbundle"]
path = vendor/grammars/dylan.tmbundle
url = https://github.com/textmate/dylan.tmbundle
[submodule "vendor/grammars/eiffel.tmbundle"]
path = vendor/grammars/eiffel.tmbundle
url = https://github.com/textmate/eiffel.tmbundle
[submodule "vendor/grammars/erlang.tmbundle"]
path = vendor/grammars/erlang.tmbundle
url = https://github.com/textmate/erlang.tmbundle
[submodule "vendor/grammars/fortran.tmbundle"]
path = vendor/grammars/fortran.tmbundle
url = https://github.com/textmate/fortran.tmbundle
[submodule "vendor/grammars/gettext.tmbundle"]
path = vendor/grammars/gettext.tmbundle
url = https://github.com/textmate/gettext.tmbundle
[submodule "vendor/grammars/graphviz.tmbundle"]
path = vendor/grammars/graphviz.tmbundle
url = https://github.com/textmate/graphviz.tmbundle
[submodule "vendor/grammars/groovy.tmbundle"]
path = vendor/grammars/groovy.tmbundle
url = https://github.com/textmate/groovy.tmbundle
[submodule "vendor/grammars/html.tmbundle"]
path = vendor/grammars/html.tmbundle
url = https://github.com/textmate/html.tmbundle
[submodule "vendor/grammars/ini.tmbundle"]
path = vendor/grammars/ini.tmbundle
url = https://github.com/textmate/ini.tmbundle
[submodule "vendor/grammars/desktop.tmbundle"]
path = vendor/grammars/desktop.tmbundle
url = https://github.com/Mailaender/desktop.tmbundle.git
[submodule "vendor/grammars/io.tmbundle"]
path = vendor/grammars/io.tmbundle
url = https://github.com/textmate/io.tmbundle
[submodule "vendor/grammars/java.tmbundle"]
path = vendor/grammars/java.tmbundle
url = https://github.com/textmate/java.tmbundle
[submodule "vendor/grammars/javascript-objective-j.tmbundle"]
path = vendor/grammars/javascript-objective-j.tmbundle
url = https://github.com/textmate/javascript-objective-j.tmbundle
[submodule "vendor/grammars/json.tmbundle"]
path = vendor/grammars/json.tmbundle
url = https://github.com/textmate/json.tmbundle
[submodule "vendor/grammars/latex.tmbundle"]
path = vendor/grammars/latex.tmbundle
url = https://github.com/textmate/latex.tmbundle
[submodule "vendor/grammars/lilypond.tmbundle"]
path = vendor/grammars/lilypond.tmbundle
url = https://github.com/textmate/lilypond.tmbundle
[submodule "vendor/grammars/lisp.tmbundle"]
path = vendor/grammars/lisp.tmbundle
url = https://github.com/textmate/lisp.tmbundle
[submodule "vendor/grammars/logtalk.tmbundle"]
path = vendor/grammars/logtalk.tmbundle
url = https://github.com/textmate/logtalk.tmbundle
[submodule "vendor/grammars/lua.tmbundle"]
path = vendor/grammars/lua.tmbundle
url = https://github.com/textmate/lua.tmbundle
[submodule "vendor/grammars/make.tmbundle"]
path = vendor/grammars/make.tmbundle
url = https://github.com/textmate/make.tmbundle
[submodule "vendor/grammars/maven.tmbundle"]
path = vendor/grammars/maven.tmbundle
url = https://github.com/textmate/maven.tmbundle
[submodule "vendor/grammars/nemerle.tmbundle"]
path = vendor/grammars/nemerle.tmbundle
url = https://github.com/textmate/nemerle.tmbundle
[submodule "vendor/grammars/objective-c.tmbundle"]
path = vendor/grammars/objective-c.tmbundle
url = https://github.com/textmate/objective-c.tmbundle
[submodule "vendor/grammars/ocaml.tmbundle"]
path = vendor/grammars/ocaml.tmbundle
url = https://github.com/textmate/ocaml.tmbundle
[submodule "vendor/grammars/pascal.tmbundle"]
path = vendor/grammars/pascal.tmbundle
url = https://github.com/textmate/pascal.tmbundle
[submodule "vendor/grammars/php-smarty.tmbundle"]
path = vendor/grammars/php-smarty.tmbundle
url = https://github.com/textmate/php-smarty.tmbundle
[submodule "vendor/grammars/php.tmbundle"]
path = vendor/grammars/php.tmbundle
url = https://github.com/textmate/php.tmbundle
[submodule "vendor/grammars/postscript.tmbundle"]
path = vendor/grammars/postscript.tmbundle
url = https://github.com/textmate/postscript.tmbundle
[submodule "vendor/grammars/processing.tmbundle"]
path = vendor/grammars/processing.tmbundle
url = https://github.com/textmate/processing.tmbundle
[submodule "vendor/grammars/python-django.tmbundle"]
path = vendor/grammars/python-django.tmbundle
url = https://github.com/textmate/python-django.tmbundle
[submodule "vendor/grammars/r.tmbundle"]
path = vendor/grammars/r.tmbundle
url = https://github.com/textmate/r.tmbundle
[submodule "vendor/grammars/scheme.tmbundle"]
path = vendor/grammars/scheme.tmbundle
url = https://github.com/textmate/scheme.tmbundle
[submodule "vendor/grammars/scilab.tmbundle"]
path = vendor/grammars/scilab.tmbundle
url = https://github.com/textmate/scilab.tmbundle
[submodule "vendor/grammars/sql.tmbundle"]
path = vendor/grammars/sql.tmbundle
url = https://github.com/textmate/sql.tmbundle
[submodule "vendor/grammars/standard-ml.tmbundle"]
path = vendor/grammars/standard-ml.tmbundle
url = https://github.com/textmate/standard-ml.tmbundle
[submodule "vendor/grammars/swift.tmbundle"]
path = vendor/grammars/swift.tmbundle
url = https://github.com/textmate/swift.tmbundle
[submodule "vendor/grammars/tcl.tmbundle"]
path = vendor/grammars/tcl.tmbundle
url = https://github.com/textmate/tcl.tmbundle
[submodule "vendor/grammars/thrift.tmbundle"]
path = vendor/grammars/thrift.tmbundle
url = https://github.com/textmate/thrift.tmbundle
[submodule "vendor/grammars/toml.tmbundle"]
path = vendor/grammars/toml.tmbundle
url = https://github.com/textmate/toml.tmbundle
[submodule "vendor/grammars/verilog.tmbundle"]
path = vendor/grammars/verilog.tmbundle
url = https://github.com/textmate/verilog.tmbundle
[submodule "vendor/grammars/xml.tmbundle"]
path = vendor/grammars/xml.tmbundle
url = https://github.com/textmate/xml.tmbundle
[submodule "vendor/grammars/smalltalk-tmbundle"]
path = vendor/grammars/smalltalk-tmbundle
url = https://github.com/tomas-stefano/smalltalk-tmbundle
[submodule "vendor/grammars/ioke-outdated"]
path = vendor/grammars/ioke-outdated
url = https://github.com/vic/ioke-outdated
[submodule "vendor/grammars/kotlin-sublime-package"]
path = vendor/grammars/kotlin-sublime-package
url = https://github.com/vkostyukov/kotlin-sublime-package
[submodule "vendor/grammars/c.tmbundle"]
path = vendor/grammars/c.tmbundle
url = https://github.com/textmate/c.tmbundle
[submodule "vendor/grammars/zephir-sublime"]
path = vendor/grammars/zephir-sublime
url = https://github.com/phalcon/zephir-sublime
[submodule "vendor/grammars/llvm.tmbundle"]
path = vendor/grammars/llvm.tmbundle
url = https://github.com/whitequark/llvm.tmbundle
[submodule "vendor/grammars/oz-tmbundle"]
path = vendor/grammars/oz-tmbundle
url = https://github.com/eregon/oz-tmbundle
[submodule "vendor/grammars/language-batchfile"]
path = vendor/grammars/language-batchfile
url = https://github.com/mmims/language-batchfile
[submodule "vendor/grammars/sublime-mask"]
path = vendor/grammars/sublime-mask
url = https://github.com/tenbits/sublime-mask
[submodule "vendor/grammars/sublime_cobol"]
path = vendor/grammars/sublime_cobol
url = https://bitbucket.org/bitlang/sublime_cobol
[submodule "vendor/grammars/IDL-Syntax"]
path = vendor/grammars/IDL-Syntax
url = https://github.com/andik/IDL-Syntax
[submodule "vendor/grammars/sas.tmbundle"]
path = vendor/grammars/sas.tmbundle
url = https://github.com/rpardee/sas.tmbundle
[submodule "vendor/grammars/atom-salt"]
path = vendor/grammars/atom-salt
url = https://github.com/saltstack/atom-salt
[submodule "vendor/grammars/Scalate.tmbundle"]
path = vendor/grammars/Scalate.tmbundle
url = https://github.com/scalate/Scalate.tmbundle
[submodule "vendor/grammars/sublime-bsv"]
path = vendor/grammars/sublime-bsv
url = https://github.com/thotypous/sublime-bsv
[submodule "vendor/grammars/sass-textmate-bundle"]
path = vendor/grammars/sass-textmate-bundle
url = https://github.com/nathos/sass-textmate-bundle
[submodule "vendor/grammars/carto-atom"]
path = vendor/grammars/carto-atom
url = https://github.com/yohanboniface/carto-atom
[submodule "vendor/grammars/Sublime-Nit"]
path = vendor/grammars/Sublime-Nit
url = https://github.com/R4PaSs/Sublime-Nit
[submodule "vendor/grammars/Racket"]
path = vendor/grammars/Racket
url = https://github.com/soegaard/racket-highlight-for-github
[submodule "vendor/grammars/turtle.tmbundle"]
path = vendor/grammars/turtle.tmbundle
url = https://github.com/peta/turtle.tmbundle
[submodule "vendor/grammars/liquid.tmbundle"]
path = vendor/grammars/liquid.tmbundle
url = https://github.com/bastilian/validcode-textmate-bundles
[submodule "vendor/grammars/Modelica"]
path = vendor/grammars/Modelica
url = https://github.com/BorisChumichev/modelicaSublimeTextPackage
[submodule "vendor/grammars/sublime-golo"]
path = vendor/grammars/sublime-golo
url = https://github.com/TypeUnsafe/sublime-golo
[submodule "vendor/grammars/TXL"]
path = vendor/grammars/TXL
url = https://github.com/MikeHoffert/Sublime-Text-TXL-syntax
[submodule "vendor/grammars/G-Code"]
path = vendor/grammars/G-Code
url = https://github.com/robotmaster/sublime-text-syntax-highlighting
[submodule "vendor/grammars/sublime-text-ox"]
path = vendor/grammars/sublime-text-ox
url = https://github.com/andreashetland/sublime-text-ox
[submodule "vendor/grammars/AutoHotkey"]
path = vendor/grammars/AutoHotkey
url = https://github.com/ahkscript/SublimeAutoHotkey
[submodule "vendor/grammars/ec.tmbundle"]
path = vendor/grammars/ec.tmbundle
url = https://github.com/ecere/ec.tmbundle
[submodule "vendor/grammars/gap-tmbundle"]
path = vendor/grammars/gap-tmbundle
url = https://github.com/dhowden/gap-tmbundle
[submodule "vendor/grammars/SublimePapyrus"]
path = vendor/grammars/SublimePapyrus
url = https://github.com/Kapiainen/SublimePapyrus
[submodule "vendor/grammars/sublime-spintools"]
path = vendor/grammars/sublime-spintools
url = https://github.com/bitbased/sublime-spintools
[submodule "vendor/grammars/PogoScript.tmbundle"]
path = vendor/grammars/PogoScript.tmbundle
url = https://github.com/featurist/PogoScript.tmbundle
[submodule "vendor/grammars/sublime-opal"]
path = vendor/grammars/sublime-opal
url = https://github.com/artifactz/sublime-opal
[submodule "vendor/grammars/mediawiki.tmbundle"]
path = vendor/grammars/mediawiki.tmbundle
url = https://github.com/textmate/mediawiki.tmbundle
[submodule "vendor/grammars/SublimeClarion"]
path = vendor/grammars/SublimeClarion
url = https://github.com/fushnisoft/SublimeClarion
[submodule "vendor/grammars/BrightScript.tmbundle"]
path = vendor/grammars/BrightScript.tmbundle
url = https://github.com/cmink/BrightScript.tmbundle
[submodule "vendor/grammars/Stylus"]
path = vendor/grammars/Stylus
url = https://github.com/billymoon/Stylus
[submodule "vendor/grammars/asciidoc.tmbundle"]
path = vendor/grammars/asciidoc.tmbundle
url = https://github.com/zuckschwerdt/asciidoc.tmbundle
[submodule "vendor/grammars/Lean.tmbundle"]
path = vendor/grammars/Lean.tmbundle
url = https://github.com/leanprover/Lean.tmbundle
[submodule "vendor/grammars/ampl"]
path = vendor/grammars/ampl
url = https://github.com/ampl/sublime-ampl
[submodule "vendor/grammars/sublime-varnish"]
path = vendor/grammars/sublime-varnish
url = https://github.com/brandonwamboldt/sublime-varnish
[submodule "vendor/grammars/xc.tmbundle"]
path = vendor/grammars/xc.tmbundle
url = https://github.com/graymalkin/xc.tmbundle
[submodule "vendor/grammars/perl.tmbundle"]
path = vendor/grammars/perl.tmbundle
url = https://github.com/textmate/perl.tmbundle
[submodule "vendor/grammars/sublime-netlinx"]
path = vendor/grammars/sublime-netlinx
url = https://github.com/amclain/sublime-netlinx
[submodule "vendor/grammars/Sublime-Red"]
path = vendor/grammars/Sublime-Red
url = https://github.com/Oldes/Sublime-Red
[submodule "vendor/grammars/jflex.tmbundle"]
path = vendor/grammars/jflex.tmbundle
url = https://github.com/jflex-de/jflex.tmbundle.git
[submodule "vendor/grammars/Sublime-Modula-2"]
path = vendor/grammars/Sublime-Modula-2
url = https://github.com/harogaston/Sublime-Modula-2
[submodule "vendor/grammars/ada.tmbundle"]
path = vendor/grammars/ada.tmbundle
url = https://github.com/textmate/ada.tmbundle
[submodule "vendor/grammars/api-blueprint-sublime-plugin"]
path = vendor/grammars/api-blueprint-sublime-plugin
url = https://github.com/apiaryio/api-blueprint-sublime-plugin
[submodule "vendor/grammars/Handlebars"]
path = vendor/grammars/Handlebars
url = https://github.com/daaain/Handlebars
[submodule "vendor/grammars/smali-sublime"]
path = vendor/grammars/smali-sublime
url = https://github.com/ShaneWilton/sublime-smali
[submodule "vendor/grammars/language-jsoniq"]
path = vendor/grammars/language-jsoniq
url = https://github.com/wcandillon/language-jsoniq
[submodule "vendor/grammars/atom-fsharp"]
path = vendor/grammars/atom-fsharp
url = https://github.com/fsprojects/atom-fsharp
[submodule "vendor/grammars/SMT.tmbundle"]
path = vendor/grammars/SMT.tmbundle
url = https://github.com/SRI-CSL/SMT.tmbundle.git
[submodule "vendor/grammars/language-crystal"]
path = vendor/grammars/language-crystal
url = https://github.com/atom-crystal/language-crystal
[submodule "vendor/grammars/language-xbase"]
path = vendor/grammars/language-xbase
url = https://github.com/hernad/atom-language-harbour
[submodule "vendor/grammars/language-ncl"]
path = vendor/grammars/language-ncl
url = https://github.com/rpavlick/language-ncl.git
[submodule "vendor/grammars/pawn-sublime-language"]
path = vendor/grammars/pawn-sublime-language
url = https://github.com/Southclaw/pawn-sublime-language.git
[submodule "vendor/grammars/atom-language-purescript"]
path = vendor/grammars/atom-language-purescript
url = https://github.com/purescript-contrib/atom-language-purescript
[submodule "vendor/grammars/vue-syntax-highlight"]
path = vendor/grammars/vue-syntax-highlight
url = https://github.com/vuejs/vue-syntax-highlight
[submodule "vendor/grammars/st2-zonefile"]
path = vendor/grammars/st2-zonefile
url = https://github.com/sixty4k/st2-zonefile
[submodule "vendor/grammars/sublimeprolog"]
path = vendor/grammars/sublimeprolog
url = https://github.com/alnkpa/sublimeprolog
[submodule "vendor/grammars/sublime-aspectj"]
path = vendor/grammars/sublime-aspectj
url = https://github.com/pchaigno/sublime-aspectj
[submodule "vendor/grammars/sublime-pony"]
path = vendor/grammars/sublime-pony
url = https://github.com/CausalityLtd/sublime-pony
[submodule "vendor/grammars/X10"]
path = vendor/grammars/X10
url = https://github.com/x10-lang/x10-highlighting
[submodule "vendor/grammars/UrWeb-Language-Definition"]
path = vendor/grammars/UrWeb-Language-Definition
url = https://github.com/gwalborn/UrWeb-Language-Definition.git
[submodule "vendor/grammars/Stata.tmbundle"]
path = vendor/grammars/Stata.tmbundle
url = https://github.com/pschumm/Stata.tmbundle
[submodule "vendor/grammars/FreeMarker.tmbundle"]
path = vendor/grammars/FreeMarker.tmbundle
url = https://github.com/freemarker/FreeMarker.tmbundle
[submodule "vendor/grammars/MagicPython"]
path = vendor/grammars/MagicPython
url = https://github.com/MagicStack/MagicPython
[submodule "vendor/grammars/language-click"]
path = vendor/grammars/language-click
url = https://github.com/stenverbois/language-click.git
[submodule "vendor/grammars/language-maxscript"]
path = vendor/grammars/language-maxscript
url = https://github.com/Alhadis/language-maxscript
[submodule "vendor/grammars/language-renpy"]
path = vendor/grammars/language-renpy
url = https://github.com/williamd1k0/language-renpy.git
[submodule "vendor/grammars/language-inform7"]
path = vendor/grammars/language-inform7
url = https://github.com/erkyrath/language-inform7
[submodule "vendor/grammars/atom-language-stan"]
path = vendor/grammars/atom-language-stan
url = https://github.com/jrnold/atom-language-stan
[submodule "vendor/grammars/language-yang"]
path = vendor/grammars/language-yang
url = https://github.com/DzonyKalafut/language-yang.git
[submodule "vendor/grammars/language-less"]
path = vendor/grammars/language-less
url = https://github.com/atom/language-less.git
[submodule "vendor/grammars/language-povray"]
path = vendor/grammars/language-povray
url = https://github.com/c-lipka/language-povray
[submodule "vendor/grammars/sublime-terra"]
path = vendor/grammars/sublime-terra
url = https://github.com/pyk/sublime-terra
[submodule "vendor/grammars/SublimePuppet"]
path = vendor/grammars/SublimePuppet
url = https://github.com/russCloak/SublimePuppet
[submodule "vendor/grammars/sublimeassembly"]
path = vendor/grammars/sublimeassembly
url = https://github.com/Nessphoro/sublimeassembly
[submodule "vendor/grammars/monkey"]
path = vendor/grammars/monkey
url = https://github.com/gingerbeardman/monkey.tmbundle
[submodule "vendor/grammars/assembly"]
path = vendor/grammars/assembly
url = https://github.com/nanoant/assembly.tmbundle
[submodule "vendor/grammars/boo"]
path = vendor/grammars/boo
url = https://github.com/Shammah/boo-sublime
[submodule "vendor/grammars/logos"]
path = vendor/grammars/logos
url = https://github.com/Cykey/Sublime-Logos
[submodule "vendor/grammars/pig-latin"]
path = vendor/grammars/pig-latin
url = https://github.com/goblindegook/sublime-text-pig-latin
[submodule "vendor/grammars/sourcepawn"]
path = vendor/grammars/sourcepawn
url = https://github.com/github-linguist/sublime-sourcepawn
[submodule "vendor/grammars/gdscript"]
path = vendor/grammars/gdscript
url = https://github.com/beefsack/GDScript-sublime
[submodule "vendor/grammars/nesC"]
path = vendor/grammars/nesC
url = https://github.com/cdwilson/nesC.tmbundle
[submodule "vendor/grammars/ats"]
path = vendor/grammars/ats
url = https://github.com/steinwaywhw/ats-mode-sublimetext
[submodule "vendor/grammars/grace"]
path = vendor/grammars/grace
url = https://github.com/zmthy/grace-tmbundle
[submodule "vendor/grammars/ejs-tmbundle"]
path = vendor/grammars/ejs-tmbundle
url = https://github.com/gregory-m/ejs-tmbundle
[submodule "vendor/grammars/nix"]
path = vendor/grammars/nix
url = https://github.com/wmertens/sublime-nix
[submodule "vendor/grammars/idris"]
path = vendor/grammars/idris
url = https://github.com/idris-hackers/idris-sublime.git
[submodule "vendor/grammars/atomic-dreams"]
path = vendor/grammars/atomic-dreams
url = https://github.com/PJB3005/atomic-dreams
[submodule "vendor/grammars/language-apl"]
path = vendor/grammars/language-apl
url = https://github.com/Alhadis/language-apl.git
[submodule "vendor/grammars/language-graphql"]
path = vendor/grammars/language-graphql
url = https://github.com/rmosolgo/language-graphql
[submodule "vendor/grammars/language-toc-wow"]
path = vendor/grammars/language-toc-wow
url = https://github.com/nebularg/language-toc-wow
[submodule "vendor/grammars/sublime-autoit"]
path = vendor/grammars/sublime-autoit
url = https://github.com/AutoIt/SublimeAutoItScript
[submodule "vendor/grammars/TLA"]
path = vendor/grammars/TLA
url = https://github.com/agentultra/TLAGrammar
[submodule "vendor/grammars/sublime-clips"]
path = vendor/grammars/sublime-clips
url = https://github.com/psicomante/CLIPS-sublime
[submodule "vendor/grammars/creole"]
path = vendor/grammars/creole
url = https://github.com/Siddley/Creole
[submodule "vendor/grammars/language-csound"]
path = vendor/grammars/language-csound
url = https://github.com/nwhetsell/language-csound
[submodule "vendor/grammars/language-wavefront"]
path = vendor/grammars/language-wavefront
url = https://github.com/Alhadis/language-wavefront
[submodule "vendor/grammars/nu.tmbundle"]
path = vendor/grammars/nu.tmbundle
url = https://github.com/jsallis/nu.tmbundle
[submodule "vendor/grammars/Elm"]
path = vendor/grammars/Elm
url = https://github.com/elm-community/Elm.tmLanguage
[submodule "vendor/grammars/language-restructuredtext"]
path = vendor/grammars/language-restructuredtext
url = https://github.com/Lukasa/language-restructuredtext
[submodule "vendor/grammars/atom-language-clean"]
path = vendor/grammars/atom-language-clean
url = https://github.com/timjs/atom-language-clean.git
[submodule "vendor/grammars/language-turing"]
path = vendor/grammars/language-turing
url = https://github.com/Alhadis/language-turing
[submodule "vendor/grammars/atom-language-srt"]
path = vendor/grammars/atom-language-srt
url = https://github.com/314eter/atom-language-srt
[submodule "vendor/grammars/language-agc"]
path = vendor/grammars/language-agc
url = https://github.com/Alhadis/language-agc
[submodule "vendor/grammars/language-blade"]
path = vendor/grammars/language-blade
url = https://github.com/jawee/language-blade
[submodule "vendor/grammars/SublimeGDB"]
path = vendor/grammars/SublimeGDB
url = https://github.com/quarnster/SublimeGDB
[submodule "vendor/grammars/language-roff"]
path = vendor/grammars/language-roff
url = https://github.com/Alhadis/language-roff
[submodule "vendor/grammars/language-haskell"]
path = vendor/grammars/language-haskell
url = https://github.com/atom-haskell/language-haskell
[submodule "vendor/grammars/language-asn1"]
path = vendor/grammars/language-asn1
url = https://github.com/ajLangley12/language-asn1
[submodule "vendor/grammars/atom-language-1c-bsl"]
path = vendor/grammars/atom-language-1c-bsl
url = https://github.com/xDrivenDevelopment/atom-language-1c-bsl.git
[submodule "vendor/grammars/sublime-rexx"]
path = vendor/grammars/sublime-rexx
url = https://github.com/mblocker/rexx-sublime
[submodule "vendor/grammars/blitzmax"]
path = vendor/grammars/blitzmax
url = https://github.com/textmate/blitzmax.tmbundle
[submodule "vendor/grammars/cython"]
path = vendor/grammars/cython
url = https://github.com/textmate/cython.tmbundle
[submodule "vendor/grammars/forth"]
path = vendor/grammars/forth
url = https://github.com/textmate/forth.tmbundle
[submodule "vendor/grammars/parrot"]
path = vendor/grammars/parrot
url = https://github.com/textmate/parrot.tmbundle
[submodule "vendor/grammars/secondlife-lsl"]
path = vendor/grammars/secondlife-lsl
url = https://github.com/textmate/secondlife-lsl.tmbundle
[submodule "vendor/grammars/vhdl"]
path = vendor/grammars/vhdl
url = https://github.com/textmate/vhdl.tmbundle
[submodule "vendor/grammars/language-rpm-spec"]
path = vendor/grammars/language-rpm-spec
url = https://github.com/waveclaw/language-rpm-spec
[submodule "vendor/grammars/language-emacs-lisp"]
path = vendor/grammars/language-emacs-lisp
url = https://github.com/Alhadis/language-emacs-lisp
[submodule "vendor/grammars/language-babel"]
path = vendor/grammars/language-babel
url = https://github.com/github-linguist/language-babel
[submodule "vendor/CodeMirror"]
path = vendor/CodeMirror
url = https://github.com/codemirror/CodeMirror
[submodule "vendor/grammars/MQL5-sublime"]
path = vendor/grammars/MQL5-sublime
url = https://github.com/mqsoft/MQL5-sublime
[submodule "vendor/grammars/actionscript3-tmbundle"]
path = vendor/grammars/actionscript3-tmbundle
url = https://github.com/simongregory/actionscript3-tmbundle
[submodule "vendor/grammars/ABNF.tmbundle"]
path = vendor/grammars/ABNF.tmbundle
url = https://github.com/sanssecours/ABNF.tmbundle
[submodule "vendor/grammars/EBNF.tmbundle"]
path = vendor/grammars/EBNF.tmbundle
url = https://github.com/sanssecours/EBNF.tmbundle
[submodule "vendor/grammars/language-haml"]
path = vendor/grammars/language-haml
url = https://github.com/ezekg/language-haml
[submodule "vendor/grammars/language-ninja"]
path = vendor/grammars/language-ninja
url = https://github.com/khyo/language-ninja
[submodule "vendor/grammars/language-fontforge"]
path = vendor/grammars/language-fontforge
url = https://github.com/Alhadis/language-fontforge
[submodule "vendor/grammars/language-gn"]
path = vendor/grammars/language-gn
url = https://github.com/devoncarew/language-gn
[submodule "vendor/grammars/rascal-syntax-highlighting"]
path = vendor/grammars/rascal-syntax-highlighting
url = https://github.com/usethesource/rascal-syntax-highlighting
[submodule "vendor/grammars/atom-language-perl6"]
path = vendor/grammars/atom-language-perl6
url = https://github.com/perl6/atom-language-perl6
[submodule "vendor/grammars/language-xcompose"]
path = vendor/grammars/language-xcompose
url = https://github.com/samcv/language-xcompose
[submodule "vendor/grammars/SublimeEthereum"]
path = vendor/grammars/SublimeEthereum
url = https://github.com/davidhq/SublimeEthereum.git
[submodule "vendor/grammars/atom-language-rust"]
path = vendor/grammars/atom-language-rust
url = https://github.com/zargony/atom-language-rust
[submodule "vendor/grammars/language-css"]
path = vendor/grammars/language-css
url = https://github.com/atom/language-css
[submodule "vendor/grammars/language-regexp"]
path = vendor/grammars/language-regexp
url = https://github.com/Alhadis/language-regexp
[submodule "vendor/grammars/Terraform.tmLanguage"]
path = vendor/grammars/Terraform.tmLanguage
url = https://github.com/alexlouden/Terraform.tmLanguage
[submodule "vendor/grammars/shaders-tmLanguage"]
path = vendor/grammars/shaders-tmLanguage
url = https://github.com/tgjones/shaders-tmLanguage
[submodule "vendor/grammars/language-meson"]
path = vendor/grammars/language-meson
url = https://github.com/TingPing/language-meson
[submodule "vendor/grammars/atom-language-p4"]
path = vendor/grammars/atom-language-p4
url = https://github.com/TakeshiTseng/atom-language-p4
[submodule "vendor/grammars/language-jison"]
path = vendor/grammars/language-jison
url = https://github.com/cdibbs/language-jison
[submodule "vendor/grammars/openscad.tmbundle"]
path = vendor/grammars/openscad.tmbundle
url = https://github.com/tbuser/openscad.tmbundle
[submodule "vendor/grammars/marko-tmbundle"]
path = vendor/grammars/marko-tmbundle
url = https://github.com/marko-js/marko-tmbundle
[submodule "vendor/grammars/language-jolie"]
path = vendor/grammars/language-jolie
url = https://github.com/fmontesi/language-jolie
[submodule "vendor/grammars/language-typelanguage"]
path = vendor/grammars/language-typelanguage
url = https://github.com/goodmind/language-typelanguage
[submodule "vendor/grammars/sublime-shen"]
path = vendor/grammars/sublime-shen
url = https://github.com/rkoeninger/sublime-shen
[submodule "vendor/grammars/Sublime-Pep8"]
path = vendor/grammars/Sublime-Pep8
url = https://github.com/R4PaSs/Sublime-Pep8
[submodule "vendor/grammars/dartlang"]
path = vendor/grammars/dartlang
url = https://github.com/dart-atom/dartlang
[submodule "vendor/grammars/language-closure-templates"]
path = vendor/grammars/language-closure-templates
url = https://github.com/mthadley/language-closure-templates
[submodule "vendor/grammars/language-webassembly"]
path = vendor/grammars/language-webassembly
url = https://github.com/Alhadis/language-webassembly
[submodule "vendor/grammars/language-ring"]
path = vendor/grammars/language-ring
url = https://github.com/MahmoudFayed/atom-language-ring
[submodule "vendor/grammars/sublime-fantom"]
path = vendor/grammars/sublime-fantom
url = https://github.com/rkoeninger/sublime-fantom
[submodule "vendor/grammars/language-pan"]
path = vendor/grammars/language-pan
url = https://github.com/quattor/language-pan
[submodule "vendor/grammars/language-pcb"]
path = vendor/grammars/language-pcb
url = https://github.com/Alhadis/language-pcb
[submodule "vendor/grammars/language-reason"]
path = vendor/grammars/language-reason
url = https://github.com/reasonml-editor/language-reason
[submodule "vendor/grammars/sublime-nearley"]
path = vendor/grammars/sublime-nearley
url = https://github.com/Hardmath123/sublime-nearley
[submodule "vendor/grammars/data-weave-tmLanguage"]
path = vendor/grammars/data-weave-tmLanguage
url = https://github.com/mulesoft-labs/data-weave-tmLanguage
[submodule "vendor/grammars/squirrel-language"]
path = vendor/grammars/squirrel-language
url = https://github.com/mathewmariani/squirrel-language
[submodule "vendor/grammars/language-ballerina"]
path = vendor/grammars/language-ballerina
url = https://github.com/ballerinalang/plugin-vscode
[submodule "vendor/grammars/language-yara"]
path = vendor/grammars/language-yara
url = https://github.com/blacktop/language-yara
[submodule "vendor/grammars/language-ruby"]
path = vendor/grammars/language-ruby
url = https://github.com/atom/language-ruby
[submodule "vendor/grammars/sublime-angelscript"]
path = vendor/grammars/sublime-angelscript
url = https://github.com/wronex/sublime-angelscript
[submodule "vendor/grammars/TypeScript-TmLanguage"]
path = vendor/grammars/TypeScript-TmLanguage
url = https://github.com/Microsoft/TypeScript-TmLanguage
[submodule "vendor/grammars/wdl-sublime-syntax-highlighter"]
path = vendor/grammars/wdl-sublime-syntax-highlighter
url = https://github.com/broadinstitute/wdl-sublime-syntax-highlighter
[submodule "vendor/grammars/atom-language-julia"]
path = vendor/grammars/atom-language-julia
url = https://github.com/JuliaEditorSupport/atom-language-julia
[submodule "vendor/grammars/language-cwl"]
path = vendor/grammars/language-cwl
url = https://github.com/manabuishii/language-cwl
[submodule "vendor/grammars/Syntax-highlighting-for-PostCSS"]
path = vendor/grammars/Syntax-highlighting-for-PostCSS
url = https://github.com/hudochenkov/Syntax-highlighting-for-PostCSS
[submodule "vendor/grammars/MATLAB-Language-grammar"]
path = vendor/grammars/MATLAB-Language-grammar
url = https://github.com/mathworks/MATLAB-Language-grammar
[submodule "vendor/grammars/javadoc.tmbundle"]
path = vendor/grammars/javadoc.tmbundle
url = https://github.com/textmate/javadoc.tmbundle
[submodule "vendor/grammars/JSyntax"]
path = vendor/grammars/JSyntax
url = https://github.com/tikkanz/JSyntax
[submodule "vendor/grammars/Sublime-HTTP"]
path = vendor/grammars/Sublime-HTTP
url = https://github.com/samsalisbury/Sublime-HTTP
[submodule "vendor/grammars/atom-language-nextflow"]
path = vendor/grammars/atom-language-nextflow
url = https://github.com/nextflow-io/atom-language-nextflow


@@ -1,32 +1,11 @@
language: ruby
sudo: false
addons:
apt:
packages:
- libicu-dev
- libicu48
before_install: script/travis/before_install
script:
- bundle exec rake
- script/licensed verify
before_install:
- git fetch origin master:master
- git fetch origin v2.0.0:v2.0.0
- sudo apt-get install libicu-dev -y
- gem update --system 2.1.11
rvm:
- 2.1
- 2.2
- 2.3.3
- 2.4.0
- 1.9.3
- 2.0.0
- 2.1.1
notifications:
disabled: true
git:
submodules: false
depth: 3
cache: bundler
dist: precise
bundler_args: --without debug


@@ -1,163 +0,0 @@
# Contributing
Hi there! We're thrilled that you'd like to contribute to this project. Your help is essential for keeping it great. This project adheres to the [Contributor Covenant Code of Conduct](http://contributor-covenant.org/). By participating, you are expected to uphold this code.
The majority of contributions won't need to touch any Ruby code at all.
## Getting started
Before you can start contributing to Linguist, you'll need to set up your environment first. Clone the repo and run `script/bootstrap` to install its dependencies.
git clone https://github.com/github/linguist.git
cd linguist/
script/bootstrap
To run Linguist from the cloned repository, you will need to generate the code samples first:
bundle exec rake samples
Run this command each time a [sample][samples] has been modified.
To run Linguist from the cloned repository:
bundle exec bin/linguist --breakdown
### Dependencies
Linguist uses the [`charlock_holmes`](https://github.com/brianmario/charlock_holmes) character encoding detection library, which in turn uses [ICU](http://site.icu-project.org/), and the libgit2 bindings for Ruby provided by [`rugged`](https://github.com/libgit2/rugged). These components have their own dependencies (`icu4c` for `charlock_holmes`; `cmake` and `pkg-config` for `rugged`) which you may need to install before you can install Linguist.
For example, on macOS with [Homebrew](http://brew.sh/): `brew install cmake pkg-config icu4c` and on Ubuntu: `apt-get install cmake pkg-config libicu-dev`.
## Adding an extension to a language
We try only to add new extensions once they have some usage on GitHub. In most cases we prefer that extensions be in use in hundreds of repositories before supporting them in Linguist.
To add support for a new extension:
1. Add your extension to the language entry in [`languages.yml`][languages], keeping the extensions in alphabetical, case-sensitive (uppercase before lowercase) order, with the exception of the primary extension, which should be listed first.
1. Add at least one sample for your extension to the [samples directory][samples] in the correct subdirectory. We'd prefer examples of real-world code showing common usage. The more representative of the structure of the language, the better.
1. Open a pull request, linking to a [GitHub search result](https://github.com/search?utf8=%E2%9C%93&q=extension%3Aboot+NOT+nothack&type=Code&ref=searchresults) showing in-the-wild usage.
If you are adding a sample, please state clearly the license covering the code in the sample, and if possible, link to the original source of the sample.
Additionally, if this extension is already listed in [`languages.yml`][languages] and associated with another language, then sometimes a few more steps will need to be taken:
1. Make sure that example `.yourextension` files are present in the [samples directory][samples] for each language that uses `.yourextension`.
1. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.yourextension` files (ping **@lildude** to help with this) to ensure we're not misclassifying files.
1. If the Bayesian classifier does a bad job with the sample `.yourextension` files then a [heuristic](https://github.com/github/linguist/blob/master/lib/linguist/heuristics.rb) may need to be written to help.
## Adding a language
We try only to add languages once they have some usage on GitHub. In most cases we prefer that each new file extension be in use in hundreds of repositories before supporting them in Linguist.
To add support for a new language:
1. Add an entry for your language to [`languages.yml`][languages], omitting the `language_id` field for now (see the sketch at the end of this section).
1. Add a syntax-highlighting grammar for your language using: `script/add-grammar https://github.com/JaneSmith/MyGrammar`
This command will analyze the grammar and, if no problems are found, add it to the repository. If problems are found, please report them to the grammar maintainer, as the grammar cannot be added until they are resolved.
**Please only add grammars that have [one of these licenses][licenses].**
1. Add samples for your language to the [samples directory][samples] in the correct subdirectory.
1. Add a `language_id` for your language using `script/set-language-ids`.
**You should only ever need to run `script/set-language-ids --update`. Anything other than this risks breaking GitHub search :cry:**
1. Open a pull request, linking to a [GitHub search result](https://github.com/search?utf8=%E2%9C%93&q=extension%3Aboot+NOT+nothack&type=Code&ref=searchresults) showing in-the-wild usage.
Please state clearly the license covering the code in the samples. Link directly to the original source if possible.
In addition, if your new language defines an extension that's already listed in [`languages.yml`][languages] (such as `.foo`) then sometimes a few more steps will need to be taken:
1. Make sure that example `.foo` files are present in the [samples directory][samples] for each language that uses `.foo`.
1. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.foo` files (ping **@lildude** to help with this) to ensure we're not misclassifying files.
1. If the Bayesian classifier does a bad job with the sample `.foo` files then a [heuristic](https://github.com/github/linguist/blob/master/lib/linguist/heuristics.rb) may need to be written to help.
Remember, the goal here is to try and avoid false positives!
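For reference, a hypothetical `languages.yml` entry might look like the sketch below. The language name, colour, extensions, and scope are made up, `language_id` is omitted as described in step 1, and [`languages.yml`][languages] itself documents the full set of available fields:

    FooLang:
      type: programming
      color: "#ff6600"
      extensions:
      - ".foo"
      - ".fooml"
      tm_scope: source.foo
      ace_mode: text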
## Fixing a misclassified language
Most languages are detected by their file extension defined in [`languages.yml`][languages]. For disambiguating between files with common extensions, Linguist applies some [heuristics](/lib/linguist/heuristics.rb) and a [statistical classifier](lib/linguist/classifier.rb). This process can help differentiate between, for example, `.h` files which could be either C, C++, or Obj-C.
Misclassifications can often be solved by either adding a new filename or extension for the language or adding more [samples][samples] to make the classifier smarter.
## Fixing syntax highlighting
Syntax highlighting in GitHub is performed using TextMate-compatible grammars. These are the same grammars that TextMate, Sublime Text and Atom use. Every language in [`languages.yml`][languages] is mapped to its corresponding TextMate `scopeName`. This scope name will be used when picking up a grammar for highlighting.
Assuming your code is being detected as the right language, in most cases syntax highlighting problems are due to a bug in the language grammar rather than a bug in Linguist. [`vendor/README.md`][grammars] lists all the grammars we use for syntax highlighting on GitHub.com. Find the one corresponding to your code's programming language and submit a bug report upstream. If you can, try to reproduce the highlighting problem in the text editor that the grammar is designed for (TextMate, Sublime Text, or Atom) and include that information in your bug report.
You can also try to fix the bug yourself and submit a Pull Request. [TextMate's documentation](https://manual.macromates.com/en/language_grammars) offers a good introduction on how to work with TextMate-compatible grammars. You can test grammars using [Lightshow](https://github-lightshow.herokuapp.com).
Once the bug has been fixed upstream, we'll pick it up for GitHub in the next release of Linguist.
## Changing the source of a syntax highlighting grammar
We'd like to ensure Linguist and GitHub.com are using the latest and greatest grammars that are consistent with current usage, but we understand that sometimes a grammar can lag behind the evolution of a language or even stop being developed. This often results in someone seizing the opportunity to create a newer, better, and more actively maintained grammar, and we'd love to use it and pass its functionality on to our users.
Switching the source of a grammar is really easy:
script/add-grammar --replace MyGrammar https://github.com/PeterPan/MyGrammar
This command will analyze the grammar and, if no problems are found, add it to the repository. If problems are found, please report them to the grammar maintainer, as the grammar cannot be added until they are resolved.
**Please only add grammars that have [one of these licenses][licenses].**
Please then open a pull request for the updated grammar.
## Testing
You can run the tests locally with:
bundle exec rake test
Sometimes getting the tests running can be too much work, especially if you don't have much Ruby experience. It's okay: be lazy and let our build bot [Travis](https://travis-ci.org/#!/github/linguist) run the tests for you. Just open a pull request and the bot will start cranking away.
Here's our current build status: [![Build Status](https://api.travis-ci.org/github/linguist.svg?branch=master)](https://travis-ci.org/github/linguist)
## Maintainers
Linguist is maintained with :heart: by:
- **@Alhadis**
- **@BenEddy** (GitHub staff)
- **@Caged** (GitHub staff)
- **@grantr** (GitHub staff)
- **@kivikakk** (GitHub staff)
- **@larsbrinkhoff**
- **@lildude** (GitHub staff)
- **@pchaigno**
- **@rafer** (GitHub staff)
- **@shreyasjoshis** (GitHub staff)
As Linguist is a production dependency for GitHub we have a couple of workflow restrictions:
- Anyone with commit rights can merge Pull Requests provided that there is a :+1: from a GitHub staff member.
- Releases are performed by GitHub staff so we can ensure GitHub.com always stays up to date with the latest release of Linguist and there are no regressions in production.
### Releasing
If you are the current maintainer of this gem:
1. Create a branch for the release: `git checkout -b cut-release-vxx.xx.xx`
1. Make sure your local dependencies are up to date: `script/bootstrap`
1. If grammar submodules have not been updated recently, update them: `git submodule update --remote && git commit -a`
1. Ensure that samples are updated: `bundle exec rake samples`
1. Ensure that tests are green: `bundle exec rake test`
1. Bump gem version in `lib/linguist/version.rb`, [like this](https://github.com/github/linguist/commit/8d2ea90a5ba3b2fe6e1508b7155aa4632eea2985).
1. Make a PR to github/linguist, [like this](https://github.com/github/linguist/pull/1238).
1. Build a local gem: `bundle exec rake build_gem`
1. Test the gem:
1. Bump the Gemfile and Gemfile.lock versions for an app which relies on this gem
1. Install the new gem locally
1. Test behavior locally, branch deploy, whatever needs to happen
1. Merge github/linguist PR
1. Tag and push: `git tag vx.xx.xx; git push --tags`
1. Create a GitHub release with the pushed tag (https://github.com/github/linguist/releases/new)
1. Build a grammars tarball (`./script/build-grammars-tarball`) and attach it to the GitHub release
1. Push to rubygems.org -- `gem push github-linguist-3.0.0.gem`
[grammars]: /vendor/README.md
[languages]: /lib/linguist/languages.yml
[licenses]: https://github.com/github/linguist/blob/257425141d4e2a5232786bf0b13c901ada075f93/vendor/licenses/config.yml#L2-L11
[samples]: /samples
[new-issue]: https://github.com/github/linguist/issues/new

6
Gemfile vendored
View File

@@ -1,6 +1,2 @@
source 'https://rubygems.org'
gemspec :name => "github-linguist"
group :debug do
gem 'byebug' if RUBY_VERSION >= '2.2'
end
gemspec

View File

@@ -1,4 +1,4 @@
Copyright (c) 2017 GitHub, Inc.
Copyright (c) 2011-2014 GitHub, Inc.
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation

278
README.md
View File

@@ -1,211 +1,155 @@
# Linguist
[![Build Status](https://travis-ci.org/github/linguist.svg?branch=master)](https://travis-ci.org/github/linguist)
We use this library at GitHub to detect blob languages, highlight code, ignore binary files, suppress generated files in diffs, and generate language breakdown graphs.
[issues]: https://github.com/github/linguist/issues
[new-issue]: https://github.com/github/linguist/issues/new
## Features
This library is used on GitHub.com to detect blob languages, ignore binary or vendored files, suppress generated files in diffs, and generate language breakdown graphs.
### Language detection
See [Troubleshooting](#troubleshooting) and [`CONTRIBUTING.md`](CONTRIBUTING.md) before filing an issue or creating a pull request.
Linguist defines a list of all languages known to GitHub in a [yaml file](https://github.com/github/linguist/blob/master/lib/linguist/languages.yml). In order for a file to be highlighted, a language and a lexer must be defined there.
## How Linguist works
Linguist takes the list of languages it knows from [`languages.yml`](/lib/linguist/languages.yml) and uses a number of methods to try and determine the language used by each file, and the overall repository breakdown.
Linguist starts by going through all the files in a repository and excludes all files that it determines to be binary data, [vendored code](#vendored-code), [generated code](#generated-code), [documentation](#documentation), or are defined as `data` (e.g. SQL) or `prose` (e.g. Markdown) languages, whilst taking into account any [overrides](#overrides).
If an [explicit language override](#using-gitattributes) has been used, that language is used for the matching files. The language of each remaining file is then determined using the following strategies, in order, with each step either identifying the precise language or reducing the number of likely languages passed down to the next strategy:
- Vim or Emacs modeline,
- commonly used filename,
- shell shebang,
- file extension,
- heuristics,
- naïve Bayesian classification
The result of this analysis is used to produce the language stats bar, which displays the language percentages for the files in the repository. The percentages are calculated based on the bytes of code for each language as reported by the [List Languages](https://developer.github.com/v3/repos/#list-languages) API.
![language stats bar](https://cloud.githubusercontent.com/assets/173/5562290/48e24654-8ddf-11e4-8fe7-735b0ce3a0d3.png)
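To make the exclusion step above concrete, here is a minimal sketch using Linguist's public Ruby API (the same `FileBlob` interface shown later in this README). The file paths are hypothetical placeholders, and the `documentation?` check is assumed to behave like the `vendored?` and `generated?` checks; substitute real files from your own repository before running it.

```ruby
require 'linguist'

# Hypothetical paths -- replace with files that exist in your repository.
["lib/app.rb", "vendor/jquery.js", "docs/index.md"].each do |path|
  blob = Linguist::FileBlob.new(path, Dir.pwd)

  # Binary, vendored, generated and documentation files are skipped before
  # any language strategy runs. (documentation? is assumed to mirror
  # vendored?/generated?; adjust if your Linguist version differs.)
  next if blob.binary? || blob.vendored? || blob.generated? || blob.documentation?

  # The remaining strategies (modelines, filenames, shebangs, extensions,
  # heuristics, classifier) are applied when .language is called.
  language = blob.language
  puts "#{path}: #{language && language.name}"
end
```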
### How Linguist works on GitHub.com
When you push changes to a repository on GitHub.com, a low priority background job is enqueued to analyze your repository as explained above. The results of this analysis are cached for the lifetime of your repository and are only updated when the repository is updated. As this analysis is performed by a low priority background job, it can take a while, particularly during busy periods, for your language statistics bar to reflect your changes.
## Usage
Install the gem:
$ gem install github-linguist
#### Dependencies
Linguist uses the [`charlock_holmes`](https://github.com/brianmario/charlock_holmes) character encoding detection library which in turn uses [ICU](http://site.icu-project.org/), and the libgit2 bindings for Ruby provided by [`rugged`](https://github.com/libgit2/rugged). These components have their own dependencies - `icu4c`, and `cmake` and `pkg-config` respectively - which you may need to install before you can install Linguist.
For example, on macOS with [Homebrew](http://brew.sh/): `brew install cmake pkg-config icu4c` and on Ubuntu: `apt-get install cmake pkg-config libicu-dev`.
### Application usage
Linguist can be used in your application as follows:
Most languages are detected by their file extension. For disambiguating between files with common extensions, we first apply some common-sense heuristics to pick out obvious languages. After that, we use a
[statistical
classifier](https://github.com/github/linguist/blob/master/lib/linguist/classifier.rb).
This process can help us tell the difference between, for example, `.h` files, which could be C, C++, or Objective-C.
```ruby
require 'rugged'
require 'linguist'
repo = Rugged::Repository.new('.')
project = Linguist::Repository.new(repo, repo.head.target_id)
project.language #=> "Ruby"
project.languages #=> { "Ruby" => 119387 }
Linguist::FileBlob.new("lib/linguist.rb").language.name #=> "Ruby"
Linguist::FileBlob.new("bin/linguist").language.name #=> "Ruby"
```
### Command line usage
See [lib/linguist/language.rb](https://github.com/github/linguist/blob/master/lib/linguist/language.rb) and [lib/linguist/languages.yml](https://github.com/github/linguist/blob/master/lib/linguist/languages.yml).
A repository's language stats can also be assessed from the command line using the `linguist` executable. Without any options, `linguist` will output the breakdown that corresponds to what is shown in the language stats bar. The `--breakdown` flag will additionally show the breakdown of files by language.
### Syntax Highlighting
You can try running `linguist` on the root directory in this repository itself:
The actual syntax highlighting is handled by our Pygments wrapper, [pygments.rb](https://github.com/tmm1/pygments.rb). It also provides a [Lexer abstraction](https://github.com/tmm1/pygments.rb/blob/master/lib/pygments/lexer.rb) that determines which highlighter should be used on a file.
```console
$ bundle exec bin/linguist --breakdown
68.57% Ruby
22.90% C
6.93% Go
1.21% Lex
0.39% Shell
### Stats
Ruby:
Gemfile
Rakefile
bin/git-linguist
bin/linguist
ext/linguist/extconf.rb
github-linguist.gemspec
lib/linguist.rb
The Language stats bar that you see on every repository is built by aggregating the languages of each file in that repository. The top language in the graph determines the project's primary language.
The repository stats API, accessed through `#languages`, can be used on a directory:
```ruby
project = Linguist::Repository.from_directory(".")
project.language.name #=> "Ruby"
project.languages #=> { "Ruby" => 0.98, "Shell" => 0.02 }
```
These stats are also printed out by the `linguist` binary. You can use the
`--breakdown` flag, and the binary will also output the breakdown of files by language.
## Troubleshooting
You can try running `linguist` on the `lib/` directory in this repository itself:
### My repository is detected as the wrong language
$ bundle exec linguist lib/ --breakdown
If the language stats bar is reporting a language that you don't expect:
100.00% Ruby
1. Click on the name of the language in the stats bar to see a list of the files that are identified as that language.
Keep in mind this performs a search, so the [code search restrictions](https://help.github.com/articles/searching-code/#considerations-for-code-search) may mean that some files counted in the language statistics do not appear in the search results. [Installing Linguist locally](#usage) and running it from the [command line](#command-line-usage) will give you accurate results.
1. If you see files that you didn't write in the search results, consider moving the files into one of the [paths for vendored code](/lib/linguist/vendor.yml), or use the [manual overrides](#overrides) feature to ignore them.
1. If the files are misclassified, search for [open issues][issues] to see if anyone else has already reported the issue. Any information you can add, especially links to public repositories, is helpful. You can also use the [manual overrides](#overrides) feature to correctly classify them in your repository.
1. If there are no reported issues of this misclassification, [open an issue][new-issue] and include a link to the repository or a sample of the code that is being misclassified.
Ruby:
linguist/blob_helper.rb
linguist/classifier.rb
linguist/file_blob.rb
linguist/generated.rb
linguist/heuristics.rb
linguist/language.rb
linguist/md5.rb
linguist/repository.rb
linguist/samples.rb
linguist/tokenizer.rb
linguist.rb
Keep in mind that the repository language stats are only [updated when you push changes](#how-linguist-works-on-github-com), and the results are cached for the lifetime of your repository. If you have not made any changes to your repository in a while, you may find pushing another change will correct the stats.
#### Ignore vendored files
### My repository isn't showing my language
Checking other code into your git repo is a common practice. But this often inflates your project's language stats and may even cause your project to be labeled as another language. We are able to identify some of these files and directories and exclude them.
Linguist does not consider [vendored code](#vendored-code), [generated code](#generated-code), [documentation](#documentation), or `data` (e.g. SQL) or `prose` (e.g. Markdown) languages (as defined by the `type` attribute in [`languages.yml`](/lib/linguist/languages.yml)) when calculating the repository language statistics.
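As an illustrative sketch (assuming the `github-linguist` gem is installed), you can inspect a language's `type` directly from the Ruby API; `SQL` and `Markdown` are used here purely as familiar examples of `data` and `prose` languages, and exact values come from `languages.yml`.

```ruby
require 'linguist'

Linguist::Language['Ruby'].type     #=> :programming  (counted in the stats bar)
Linguist::Language['SQL'].type      #=> :data         (excluded by default)
Linguist::Language['Markdown'].type #=> :prose        (excluded by default)
```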
If the language statistics bar is not showing your language at all, it could be for a few reasons:
1. Linguist doesn't know about your language.
1. The extension you have chosen is not associated with your language in [`languages.yml`](/lib/linguist/languages.yml).
1. All the files in your repository fall into one of the categories listed above that Linguist excludes by default.
If Linguist doesn't know about the language or the extension you're using, consider [contributing](CONTRIBUTING.md) to Linguist by opening a pull request to add support for your language or extension. For everything else, you can use the [manual overrides](#overrides) feature to tell Linguist to include your files in the language statistics.
### There's a problem with the syntax highlighting of a file
Linguist detects the language of a file but the actual syntax-highlighting is powered by a set of language grammars which are included in this project as a set of submodules [as listed here](/vendor/README.md).
If you experience an issue with the syntax-highlighting on GitHub, **please report the issue to the upstream grammar repository, not here.** Grammars are updated every time we build the Linguist gem so upstream bug fixes are automatically incorporated as they are fixed.
## Overrides
Linguist supports a number of different custom override strategies for language definitions and file paths.
### Using gitattributes
Add a `.gitattributes` file to your project and use standard git-style path matchers for the files you want to override using the `linguist-documentation`, `linguist-language`, `linguist-vendored`, `linguist-generated` and `linguist-detectable` attributes. `.gitattributes` will be used both to determine language statistics and to syntax-highlight files. You can also manually set syntax highlighting using [Vim or Emacs modelines](#using-emacs-or-vim-modelines).
```
$ cat .gitattributes
*.rb linguist-language=Java
```ruby
Linguist::FileBlob.new("vendor/plugins/foo.rb").vendored? # => true
```
#### Vendored code
See [Linguist::BlobHelper#vendored?](https://github.com/github/linguist/blob/master/lib/linguist/blob_helper.rb) and [lib/linguist/vendor.yml](https://github.com/github/linguist/blob/master/lib/linguist/vendor.yml).
Checking code you didn't write, such as JavaScript libraries, into your git repo is a common practice, but this often inflates your project's language stats and may even cause your project to be labeled as another language. By default, Linguist treats all of the paths defined in [`vendor.yml`](/lib/linguist/vendor.yml) as vendored and therefore doesn't include them in the language statistics for a repository.
#### Generated file detection
Use the `linguist-vendored` attribute to vendor or un-vendor paths.
Not all plain text files are true source files. Generated files like minified js and compiled CoffeeScript can be detected and excluded from language stats. As an extra bonus, these files are suppressed in diffs.
```
$ cat .gitattributes
special-vendored-path/* linguist-vendored
jquery.js linguist-vendored=false
```ruby
Linguist::FileBlob.new("underscore.min.js").generated? # => true
```
#### Documentation
See [Linguist::Generated#generated?](https://github.com/github/linguist/blob/master/lib/linguist/generated.rb).
Just like vendored files, Linguist excludes documentation files from your project's language stats. [`documentation.yml`](/lib/linguist/documentation.yml) lists common documentation paths and excludes them from the language statistics for your repository.
## Installation
Use the `linguist-documentation` attribute to mark or unmark paths as documentation.
github.com is usually running the latest version of the `github-linguist` gem that is released on [RubyGems.org](http://rubygems.org/gems/github-linguist).
```
$ cat .gitattributes
project-docs/* linguist-documentation
docs/formatter.rb linguist-documentation=false
```
But for development you are going to want to check out the source. To get it, clone the repo and run [Bundler](http://gembundler.com/) to install its dependencies.
#### Generated code
git clone https://github.com/github/linguist.git
cd linguist/
bundle install
Not all plain text files are true source files. Generated files like minified JavaScript and compiled CoffeeScript can be detected and excluded from language stats. As an added bonus, unlike vendored and documentation files, these files are suppressed in diffs. [`generated.rb`](/lib/linguist/generated.rb) lists common generated paths and excludes them from the language statistics of your repository.
Use the `linguist-generated` attribute to mark or unmark paths as generated.
```
$ cat .gitattributes
Api.elm linguist-generated=true
```
#### Detectable
Only programming languages are included in the language statistics. Languages of a different type (as defined in [`languages.yml`](/lib/linguist/languages.yml)) are not "detectable" and are therefore not included in the language statistics.
Use the `linguist-detectable` attribute to mark or unmark paths as detectable.
```
$ cat .gitattributes
*.kicad_pcb linguist-detectable=true
*.sch linguist-detectable=true
tools/export_bom.py linguist-detectable=false
```
### Using Emacs or Vim modelines
If you do not want to use `.gitattributes` to override the syntax highlighting used on GitHub.com, you can use Vim or Emacs style modelines to set the language for a single file. Modelines can be placed anywhere within a file and are respected when determining how to syntax-highlight a file on GitHub.com.
##### Vim
```
# Some examples of various styles:
vim: syntax=java
vim: set syntax=ruby:
vim: set filetype=prolog:
vim: set ft=cpp:
```
##### Emacs
```
-*- mode: php;-*-
```
To run the tests:
bundle exec rake test
## Contributing
Please check out our [contributing guidelines](CONTRIBUTING.md).
The majority of contributions won't need to touch any Ruby code at all. The [master language list](https://github.com/github/linguist/blob/master/lib/linguist/languages.yml) is just a YAML configuration file.
We try to only add languages once they have some usage on GitHub, so please note in-the-wild usage examples in your pull request.
## License
Almost all bug fixes or new language additions should come with some additional code samples. Just drop them under [`samples/`](https://github.com/github/linguist/tree/master/samples) in the correct subdirectory and our test suite will automatically test them. In most cases you shouldn't need to add any new assertions.
The language grammars included in this gem are covered by their repositories'
respective licenses. [`vendor/README.md`](/vendor/README.md) lists the repository for each grammar.
To update the `samples.json` after adding new files to [`samples/`](https://github.com/github/linguist/tree/master/samples):
All other files are covered by the MIT license, see `LICENSE`.
bundle exec rake samples
### A note on language extensions
Linguist has a number of methods available for identifying the language of a particular file. The initial lookup is based upon the file's extension; possible file extensions are defined in an array called `extensions`. Take a look at this example for `Perl`:
```
Perl:
type: programming
ace_mode: perl
color: "#0298c3"
extensions:
- .pl
- .PL
- .perl
- .ph
- .plx
- .pm
- .pod
- .psgi
interpreters:
- perl
```
Any of the extensions defined is valid, but the first entry in this array should be the most popular.
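As a rough sketch of how that array is consumed (assuming the gem is installed), the extensions for a language can be inspected from the Ruby API; exact return values may differ between Linguist versions.

```ruby
require 'linguist'

perl = Linguist::Language['Perl']
perl.extensions.first  #=> ".pl"  (the primary, most popular extension)
perl.extensions        #=> [".pl", ".PL", ".perl", ".ph", ...]

# Ambiguous extensions (e.g. ".pl" is also used by Prolog) fall through to
# the heuristics and classifier described under "How Linguist works".
```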
### Testing
Sometimes getting the tests running can be too much work, especially if you don't have much Ruby experience. It's okay: be lazy and let our build bot [Travis](http://travis-ci.org/#!/github/linguist) run the tests for you. Just open a pull request and the bot will start cranking away.
Here's our current build status, which is hopefully green: [![Build Status](https://secure.travis-ci.org/github/linguist.png?branch=master)](http://travis-ci.org/github/linguist)
### Releasing
If you are the current maintainer of this gem:
0. Create a branch for the release: `git checkout -b cut-release-vxx.xx.xx`
0. Make sure your local dependencies are up to date: `bundle install`
0. Ensure that samples are updated: `bundle exec rake samples`
0. Ensure that tests are green: `bundle exec rake test`
0. Bump gem version in `lib/linguist/version.rb`. For example, [like this](https://github.com/github/linguist/commit/8d2ea90a5ba3b2fe6e1508b7155aa4632eea2985).
0. Make a PR to github/linguist. For example, [#1238](https://github.com/github/linguist/pull/1238).
0. Build a local gem: `gem build github-linguist.gemspec`
0. Testing:
0. Bump the Gemfile and Gemfile.lock versions for an app which relies on this gem
0. Install the new gem locally
0. Test behavior locally, branch deploy, whatever needs to happen
0. Merge github/linguist PR
0. Tag and push: `git tag vx.xx.xx; git push --tags`
0. Push to rubygems.org -- `gem push github-linguist-3.0.0.gem`

131
Rakefile generated
View File

@@ -1,129 +1,27 @@
require 'bundler/setup'
require 'json'
require 'rake/clean'
require 'rake/testtask'
require 'rake/extensiontask'
require 'yaml'
require 'yajl'
require 'open-uri'
require 'json'
task :default => :test
Rake::TestTask.new
gem_spec = Gem::Specification.load('github-linguist.gemspec')
Rake::ExtensionTask.new('linguist', gem_spec) do |ext|
ext.lib_dir = File.join('lib', 'linguist')
end
# Extend test task to check for samples and fetch latest Ace modes
task :test => [:compile, :check_samples, :fetch_ace_modes]
desc "Check that we have samples.json generated"
task :check_samples do
unless File.exist?('lib/linguist/samples.json')
Rake::Task[:samples].invoke
end
end
desc "Fetch the latest Ace modes from its GitHub repository"
task :fetch_ace_modes do
ACE_FIXTURE_PATH = File.join('test', 'fixtures', 'ace_modes.json')
File.delete(ACE_FIXTURE_PATH) if File.exist?(ACE_FIXTURE_PATH)
begin
ace_github_modes = open("https://api.github.com/repos/ajaxorg/ace/contents/lib/ace/mode").read
File.write(ACE_FIXTURE_PATH, ace_github_modes)
rescue OpenURI::HTTPError, SocketError
# no internet? no problem.
end
end
task :samples => :compile do
task :samples do
require 'linguist/samples'
json = Yajl.dump(Linguist::Samples.data, :pretty => true)
File.write 'lib/linguist/samples.json', json
require 'yajl'
data = Linguist::Samples.data
json = Yajl::Encoder.encode(data, :pretty => true)
File.open('lib/linguist/samples.json', 'w') { |io| io.write json }
end
task :flex do
if `flex -V` !~ /^flex \d+\.\d+\.\d+/
fail "flex not detected"
end
system "cd ext/linguist && flex tokenizer.l"
end
task :build_gem => :samples do
rm_rf "grammars"
sh "script/grammar-compiler compile -o grammars || true"
task :build_gem do
languages = YAML.load_file("lib/linguist/languages.yml")
File.write("lib/linguist/languages.json", Yajl.dump(languages))
File.write("lib/linguist/languages.json", JSON.dump(languages))
`gem build github-linguist.gemspec`
File.delete("lib/linguist/languages.json")
end
namespace :benchmark do
benchmark_path = "benchmark/results"
# $ bundle exec rake benchmark:generate CORPUS=path/to/samples
desc "Generate results for"
task :generate do
ref = `git rev-parse HEAD`.strip[0,8]
corpus = File.expand_path(ENV["CORPUS"] || "samples")
require 'linguist'
results = Hash.new
Dir.glob("#{corpus}/**/*").each do |file|
next unless File.file?(file)
filename = file.gsub("#{corpus}/", "")
results[filename] = Linguist::FileBlob.new(file).language
end
# Ensure results directory exists
FileUtils.mkdir_p("benchmark/results")
# Write results
if `git status`.include?('working directory clean')
result_filename = "benchmark/results/#{File.basename(corpus)}-#{ref}.json"
else
result_filename = "benchmark/results/#{File.basename(corpus)}-#{ref}-unstaged.json"
end
File.write(result_filename, results.to_json)
puts "wrote #{result_filename}"
end
# $ bundle exec rake benchmark:compare REFERENCE=path/to/reference.json CANDIDATE=path/to/candidate.json
desc "Compare results"
task :compare do
reference_file = ENV["REFERENCE"]
candidate_file = ENV["CANDIDATE"]
reference = Yajl.load(File.read(reference_file))
reference_counts = Hash.new(0)
reference.each { |filename, language| reference_counts[language] += 1 }
candidate = Yajl.load(File.read(candidate_file))
candidate_counts = Hash.new(0)
candidate.each { |filename, language| candidate_counts[language] += 1 }
changes = diff(reference_counts, candidate_counts)
if changes.any?
changes.each do |language, (before, after)|
before_percent = 100 * before / reference.size.to_f
after_percent = 100 * after / candidate.size.to_f
puts "%s changed from %.1f%% to %.1f%%" % [language || 'unknown', before_percent, after_percent]
end
else
puts "No changes"
end
end
end
namespace :classifier do
LIMIT = 1_000
@@ -139,7 +37,7 @@ namespace :classifier do
next if file_language.nil? || file_language == 'Text'
begin
data = open(file_url).read
guessed_language, score = Linguist::Classifier.classify(Linguist::Samples.cache, data).first
guessed_language, score = Linguist::Classifier.classify(Linguist::Samples::DATA, data).first
total += 1
guessed_language == file_language ? correct += 1 : incorrect += 1
@@ -156,12 +54,14 @@ namespace :classifier do
def each_public_gist
require 'open-uri'
require 'json'
url = "https://api.github.com/gists/public"
loop do
resp = open(url)
url = resp.meta['link'][/<([^>]+)>; rel="next"/, 1]
gists = Yajl.load(resp.read)
gists = JSON.parse(resp.read)
for gist in gists
for filename, attrs in gist['files']
@@ -171,10 +71,3 @@ namespace :classifier do
end
end
end
def diff(a, b)
(a.keys | b.keys).each_with_object({}) do |key, diff|
diff[key] = [a[key], b[key]] unless a[key] == b[key]
end
end

View File

@@ -1,149 +0,0 @@
#!/usr/bin/env ruby
$LOAD_PATH[0, 0] = File.join(File.dirname(__FILE__), '..', 'lib')
require 'linguist'
require 'rugged'
require 'optparse'
require 'json'
require 'tmpdir'
require 'zlib'
class GitLinguist
def initialize(path, commit_oid, incremental = true)
@repo_path = path
@commit_oid = commit_oid
@incremental = incremental
end
def linguist
if @commit_oid.nil?
raise "git-linguist must be called with a specific commit OID to perform language computation"
end
repo = Linguist::Repository.new(rugged, @commit_oid)
if @incremental && stats = load_language_stats
old_commit_oid, old_stats = stats
# A cache with NULL oid means that we want to freeze
# these language stats in place and stop computing
# them (for performance reasons)
return old_stats if old_commit_oid == NULL_OID
repo.load_existing_stats(old_commit_oid, old_stats)
end
result = yield repo
save_language_stats(@commit_oid, repo.cache)
result
end
def load_language_stats
version, oid, stats = load_cache
if version == LANGUAGE_STATS_CACHE_VERSION && oid && stats
[oid, stats]
end
end
def save_language_stats(oid, stats)
cache = [LANGUAGE_STATS_CACHE_VERSION, oid, stats]
write_cache(cache)
end
def clear_language_stats
File.unlink(cache_file)
rescue Errno::ENOENT
end
def disable_language_stats
save_language_stats(NULL_OID, {})
end
protected
NULL_OID = ("0" * 40).freeze
LANGUAGE_STATS_CACHE = 'language-stats.cache'
LANGUAGE_STATS_CACHE_VERSION = "v3:#{Linguist::VERSION}"
def rugged
@rugged ||= Rugged::Repository.bare(@repo_path)
end
def cache_file
File.join(@repo_path, LANGUAGE_STATS_CACHE)
end
def write_cache(object)
return unless File.directory? @repo_path
begin
tmp_path = Dir::Tmpname.make_tmpname(cache_file, nil)
File.open(tmp_path, "wb") do |f|
marshal = Marshal.dump(object)
f.write(Zlib::Deflate.deflate(marshal))
end
File.rename(tmp_path, cache_file)
rescue => e
(File.unlink(tmp_path) rescue nil)
raise e
end
end
def load_cache
marshal = File.open(cache_file, "rb") { |f| Zlib::Inflate.inflate(f.read) }
Marshal.load(marshal)
rescue SystemCallError, ::Zlib::DataError, ::Zlib::BufError, TypeError
nil
end
end
def git_linguist(args)
incremental = true
commit = nil
parser = OptionParser.new do |opts|
opts.banner = <<-HELP
Linguist v#{Linguist::VERSION}
Detect language type and determine language breakdown for a given Git repository.
Usage:
git-linguist [OPTIONS] stats|breakdown|dump-cache|clear|disable"
HELP
opts.on("-f", "--force", "Force a full rescan") { incremental = false }
opts.on("-c", "--commit=COMMIT", "Commit to index") { |v| commit = v}
end
parser.parse!(args)
git_dir = `git rev-parse --git-dir`.strip
raise "git-linguist must be run in a Git repository" unless $?.success?
wrapper = GitLinguist.new(git_dir, commit, incremental)
case args.pop
when "stats"
wrapper.linguist do |linguist|
puts JSON.dump(linguist.languages)
end
when "breakdown"
wrapper.linguist do |linguist|
puts JSON.dump(linguist.breakdown_by_file)
end
when "dump-cache"
puts JSON.dump(wrapper.load_language_stats)
when "clear"
wrapper.clear_language_stats
when "disable"
wrapper.disable_language_stats
else
$stderr.print(parser.help)
exit 1
end
rescue Exception => e
$stderr.puts e.message
$stderr.puts e.backtrace
exit 1
end
git_linguist(ARGV)

View File

@@ -1,37 +1,31 @@
#!/usr/bin/env ruby
$LOAD_PATH[0, 0] = File.join(File.dirname(__FILE__), '..', 'lib')
# linguist — detect language type for a file, or, given a directory, determine language breakdown
# usage: linguist <path> [<--breakdown>]
require 'linguist'
require 'linguist/file_blob'
require 'linguist/language'
require 'linguist/repository'
require 'rugged'
require 'json'
require 'optparse'
path = ARGV[0] || Dir.pwd
# special case if not given a directory
# but still given the --breakdown or --json options/
# special case if not given a directory but still given the --breakdown option
if path == "--breakdown"
path = Dir.pwd
breakdown = true
elsif path == "--json"
path = Dir.pwd
json_breakdown = true
end
ARGV.shift
breakdown = true if ARGV[0] == "--breakdown"
json_breakdown = true if ARGV[0] == "--json"
if File.directory?(path)
rugged = Rugged::Repository.new(path)
repo = Linguist::Repository.new(rugged, rugged.head.target_id)
if !json_breakdown
repo.languages.sort_by { |_, size| size }.reverse.each do |language, size|
percentage = ((size / repo.size.to_f) * 100)
percentage = sprintf '%.2f' % percentage
puts "%-7s %s" % ["#{percentage}%", language]
end
repo.languages.sort_by { |_, size| size }.reverse.each do |language, size|
percentage = ((size / repo.size.to_f) * 100)
percentage = sprintf '%.2f' % percentage
puts "%-7s %s" % ["#{percentage}%", language]
end
if breakdown
puts
@@ -43,8 +37,6 @@ if File.directory?(path)
end
puts
end
elsif json_breakdown
puts JSON.dump(repo.breakdown_by_file)
end
elsif File.file?(path)
blob = Linguist::FileBlob.new(path, Dir.pwd)
@@ -73,12 +65,5 @@ elsif File.file?(path)
puts " appears to be a vendored file"
end
else
abort <<-HELP
Linguist v#{Linguist::VERSION}
Detect language type for a file, or, given a repository, determine language breakdown.
Usage: linguist <path>
linguist <path> [--breakdown] [--json]
linguist [--breakdown] [--json]
HELP
abort "usage: linguist <path>"
end

View File

@@ -1,3 +0,0 @@
require 'mkmf'
dir_config('linguist')
create_makefile('linguist/linguist')

File diff suppressed because it is too large.

View File

@@ -1,336 +0,0 @@
#ifndef linguist_yyHEADER_H
#define linguist_yyHEADER_H 1
#define linguist_yyIN_HEADER 1
#line 6 "lex.linguist_yy.h"
#define YY_INT_ALIGNED short int
/* A lexical scanner generated by flex */
#define FLEX_SCANNER
#define YY_FLEX_MAJOR_VERSION 2
#define YY_FLEX_MINOR_VERSION 5
#define YY_FLEX_SUBMINOR_VERSION 35
#if YY_FLEX_SUBMINOR_VERSION > 0
#define FLEX_BETA
#endif
/* First, we deal with platform-specific or compiler-specific issues. */
/* begin standard C headers. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <stdlib.h>
/* end standard C headers. */
/* flex integer type definitions */
#ifndef FLEXINT_H
#define FLEXINT_H
/* C99 systems have <inttypes.h>. Non-C99 systems may or may not. */
#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
/* C99 says to define __STDC_LIMIT_MACROS before including stdint.h,
* if you want the limit (max/min) macros for int types.
*/
#ifndef __STDC_LIMIT_MACROS
#define __STDC_LIMIT_MACROS 1
#endif
#include <inttypes.h>
typedef int8_t flex_int8_t;
typedef uint8_t flex_uint8_t;
typedef int16_t flex_int16_t;
typedef uint16_t flex_uint16_t;
typedef int32_t flex_int32_t;
typedef uint32_t flex_uint32_t;
typedef uint64_t flex_uint64_t;
#else
typedef signed char flex_int8_t;
typedef short int flex_int16_t;
typedef int flex_int32_t;
typedef unsigned char flex_uint8_t;
typedef unsigned short int flex_uint16_t;
typedef unsigned int flex_uint32_t;
#endif /* ! C99 */
/* Limits of integral types. */
#ifndef INT8_MIN
#define INT8_MIN (-128)
#endif
#ifndef INT16_MIN
#define INT16_MIN (-32767-1)
#endif
#ifndef INT32_MIN
#define INT32_MIN (-2147483647-1)
#endif
#ifndef INT8_MAX
#define INT8_MAX (127)
#endif
#ifndef INT16_MAX
#define INT16_MAX (32767)
#endif
#ifndef INT32_MAX
#define INT32_MAX (2147483647)
#endif
#ifndef UINT8_MAX
#define UINT8_MAX (255U)
#endif
#ifndef UINT16_MAX
#define UINT16_MAX (65535U)
#endif
#ifndef UINT32_MAX
#define UINT32_MAX (4294967295U)
#endif
#endif /* ! FLEXINT_H */
#ifdef __cplusplus
/* The "const" storage-class-modifier is valid. */
#define YY_USE_CONST
#else /* ! __cplusplus */
/* C99 requires __STDC__ to be defined as 1. */
#if defined (__STDC__)
#define YY_USE_CONST
#endif /* defined (__STDC__) */
#endif /* ! __cplusplus */
#ifdef YY_USE_CONST
#define yyconst const
#else
#define yyconst
#endif
/* An opaque pointer. */
#ifndef YY_TYPEDEF_YY_SCANNER_T
#define YY_TYPEDEF_YY_SCANNER_T
typedef void* yyscan_t;
#endif
/* For convenience, these vars (plus the bison vars far below)
are macros in the reentrant scanner. */
#define yyin yyg->yyin_r
#define yyout yyg->yyout_r
#define yyextra yyg->yyextra_r
#define yyleng yyg->yyleng_r
#define yytext yyg->yytext_r
#define yylineno (YY_CURRENT_BUFFER_LVALUE->yy_bs_lineno)
#define yycolumn (YY_CURRENT_BUFFER_LVALUE->yy_bs_column)
#define yy_flex_debug yyg->yy_flex_debug_r
/* Size of default input buffer. */
#ifndef YY_BUF_SIZE
#define YY_BUF_SIZE 16384
#endif
#ifndef YY_TYPEDEF_YY_BUFFER_STATE
#define YY_TYPEDEF_YY_BUFFER_STATE
typedef struct yy_buffer_state *YY_BUFFER_STATE;
#endif
#ifndef YY_TYPEDEF_YY_SIZE_T
#define YY_TYPEDEF_YY_SIZE_T
typedef size_t yy_size_t;
#endif
#ifndef YY_STRUCT_YY_BUFFER_STATE
#define YY_STRUCT_YY_BUFFER_STATE
struct yy_buffer_state
{
FILE *yy_input_file;
char *yy_ch_buf; /* input buffer */
char *yy_buf_pos; /* current position in input buffer */
/* Size of input buffer in bytes, not including room for EOB
* characters.
*/
yy_size_t yy_buf_size;
/* Number of characters read into yy_ch_buf, not including EOB
* characters.
*/
yy_size_t yy_n_chars;
/* Whether we "own" the buffer - i.e., we know we created it,
* and can realloc() it to grow it, and should free() it to
* delete it.
*/
int yy_is_our_buffer;
/* Whether this is an "interactive" input source; if so, and
* if we're using stdio for input, then we want to use getc()
* instead of fread(), to make sure we stop fetching input after
* each newline.
*/
int yy_is_interactive;
/* Whether we're considered to be at the beginning of a line.
* If so, '^' rules will be active on the next match, otherwise
* not.
*/
int yy_at_bol;
int yy_bs_lineno; /**< The line count. */
int yy_bs_column; /**< The column count. */
/* Whether to try to fill the input buffer when we reach the
* end of it.
*/
int yy_fill_buffer;
int yy_buffer_status;
};
#endif /* !YY_STRUCT_YY_BUFFER_STATE */
void linguist_yyrestart (FILE *input_file ,yyscan_t yyscanner );
void linguist_yy_switch_to_buffer (YY_BUFFER_STATE new_buffer ,yyscan_t yyscanner );
YY_BUFFER_STATE linguist_yy_create_buffer (FILE *file,int size ,yyscan_t yyscanner );
void linguist_yy_delete_buffer (YY_BUFFER_STATE b ,yyscan_t yyscanner );
void linguist_yy_flush_buffer (YY_BUFFER_STATE b ,yyscan_t yyscanner );
void linguist_yypush_buffer_state (YY_BUFFER_STATE new_buffer ,yyscan_t yyscanner );
void linguist_yypop_buffer_state (yyscan_t yyscanner );
YY_BUFFER_STATE linguist_yy_scan_buffer (char *base,yy_size_t size ,yyscan_t yyscanner );
YY_BUFFER_STATE linguist_yy_scan_string (yyconst char *yy_str ,yyscan_t yyscanner );
YY_BUFFER_STATE linguist_yy_scan_bytes (yyconst char *bytes,yy_size_t len ,yyscan_t yyscanner );
void *linguist_yyalloc (yy_size_t ,yyscan_t yyscanner );
void *linguist_yyrealloc (void *,yy_size_t ,yyscan_t yyscanner );
void linguist_yyfree (void * ,yyscan_t yyscanner );
/* Begin user sect3 */
#define yytext_ptr yytext_r
#ifdef YY_HEADER_EXPORT_START_CONDITIONS
#define INITIAL 0
#define sgml 1
#define c_comment 2
#define xml_comment 3
#define haskell_comment 4
#define ocaml_comment 5
#define python_dcomment 6
#define python_scomment 7
#endif
#ifndef YY_NO_UNISTD_H
/* Special case for "unistd.h", since it is non-ANSI. We include it way
* down here because we want the user's section 1 to have been scanned first.
* The user has a chance to override it with an option.
*/
#include <unistd.h>
#endif
#define YY_EXTRA_TYPE struct tokenizer_extra *
int linguist_yylex_init (yyscan_t* scanner);
int linguist_yylex_init_extra (YY_EXTRA_TYPE user_defined,yyscan_t* scanner);
/* Accessor methods to globals.
These are made visible to non-reentrant scanners for convenience. */
int linguist_yylex_destroy (yyscan_t yyscanner );
int linguist_yyget_debug (yyscan_t yyscanner );
void linguist_yyset_debug (int debug_flag ,yyscan_t yyscanner );
YY_EXTRA_TYPE linguist_yyget_extra (yyscan_t yyscanner );
void linguist_yyset_extra (YY_EXTRA_TYPE user_defined ,yyscan_t yyscanner );
FILE *linguist_yyget_in (yyscan_t yyscanner );
void linguist_yyset_in (FILE * in_str ,yyscan_t yyscanner );
FILE *linguist_yyget_out (yyscan_t yyscanner );
void linguist_yyset_out (FILE * out_str ,yyscan_t yyscanner );
yy_size_t linguist_yyget_leng (yyscan_t yyscanner );
char *linguist_yyget_text (yyscan_t yyscanner );
int linguist_yyget_lineno (yyscan_t yyscanner );
void linguist_yyset_lineno (int line_number ,yyscan_t yyscanner );
/* Macros after this point can all be overridden by user definitions in
* section 1.
*/
#ifndef YY_SKIP_YYWRAP
#ifdef __cplusplus
extern "C" int linguist_yywrap (yyscan_t yyscanner );
#else
extern int linguist_yywrap (yyscan_t yyscanner );
#endif
#endif
#ifndef yytext_ptr
static void yy_flex_strncpy (char *,yyconst char *,int ,yyscan_t yyscanner);
#endif
#ifdef YY_NEED_STRLEN
static int yy_flex_strlen (yyconst char * ,yyscan_t yyscanner);
#endif
#ifndef YY_NO_INPUT
#endif
/* Amount of stuff to slurp up with each read. */
#ifndef YY_READ_BUF_SIZE
#define YY_READ_BUF_SIZE 8192
#endif
/* Number of entries by which start-condition stack grows. */
#ifndef YY_START_STACK_INCR
#define YY_START_STACK_INCR 25
#endif
/* Default declaration of generated scanner - a define so the user can
* easily add parameters.
*/
#ifndef YY_DECL
#define YY_DECL_IS_OURS 1
extern int linguist_yylex (yyscan_t yyscanner);
#define YY_DECL int linguist_yylex (yyscan_t yyscanner)
#endif /* !YY_DECL */
/* yy_get_previous_state - get the state just before the EOB char was reached */
#undef YY_NEW_FILE
#undef YY_FLUSH_BUFFER
#undef yy_set_bol
#undef yy_new_buffer
#undef yy_set_interactive
#undef YY_DO_BEFORE_ACTION
#ifdef YY_DECL_IS_OURS
#undef YY_DECL_IS_OURS
#undef YY_DECL
#endif
#line 118 "tokenizer.l"
#line 335 "lex.linguist_yy.h"
#undef linguist_yyIN_HEADER
#endif /* linguist_yyHEADER_H */

View File

@@ -1,75 +0,0 @@
#include "ruby.h"
#include "linguist.h"
#include "lex.linguist_yy.h"
// Anything longer is unlikely to be useful.
#define MAX_TOKEN_LEN 32
int linguist_yywrap(yyscan_t yyscanner) {
return 1;
}
static VALUE rb_tokenizer_extract_tokens(VALUE self, VALUE rb_data) {
YY_BUFFER_STATE buf;
yyscan_t scanner;
struct tokenizer_extra extra;
VALUE ary, s;
long len;
int r;
Check_Type(rb_data, T_STRING);
len = RSTRING_LEN(rb_data);
if (len > 100000)
len = 100000;
linguist_yylex_init_extra(&extra, &scanner);
buf = linguist_yy_scan_bytes(RSTRING_PTR(rb_data), (int) len, scanner);
ary = rb_ary_new();
do {
extra.type = NO_ACTION;
extra.token = NULL;
r = linguist_yylex(scanner);
switch (extra.type) {
case NO_ACTION:
break;
case REGULAR_TOKEN:
len = strlen(extra.token);
if (len <= MAX_TOKEN_LEN)
rb_ary_push(ary, rb_str_new(extra.token, len));
free(extra.token);
break;
case SHEBANG_TOKEN:
len = strlen(extra.token);
if (len <= MAX_TOKEN_LEN) {
s = rb_str_new2("SHEBANG#!");
rb_str_cat(s, extra.token, len);
rb_ary_push(ary, s);
}
free(extra.token);
break;
case SGML_TOKEN:
len = strlen(extra.token);
if (len <= MAX_TOKEN_LEN) {
s = rb_str_new(extra.token, len);
rb_str_cat2(s, ">");
rb_ary_push(ary, s);
}
free(extra.token);
break;
}
} while (r);
linguist_yy_delete_buffer(buf, scanner);
linguist_yylex_destroy(scanner);
return ary;
}
__attribute__((visibility("default"))) void Init_linguist() {
VALUE rb_mLinguist = rb_define_module("Linguist");
VALUE rb_cTokenizer = rb_define_class_under(rb_mLinguist, "Tokenizer", rb_cObject);
rb_define_method(rb_cTokenizer, "extract_tokens", rb_tokenizer_extract_tokens, 1);
}

View File

@@ -1,11 +0,0 @@
enum tokenizer_type {
NO_ACTION,
REGULAR_TOKEN,
SHEBANG_TOKEN,
SGML_TOKEN,
};
struct tokenizer_extra {
char *token;
enum tokenizer_type type;
};

View File

@@ -1,119 +0,0 @@
%{
#include "linguist.h"
#define feed_token(tok, typ) do { \
yyextra->token = (tok); \
yyextra->type = (typ); \
} while (0)
#define eat_until_eol() do { \
int c; \
while ((c = input(yyscanner)) != '\n' && c != EOF && c); \
if (c == EOF || !c) \
return 0; \
} while (0)
#define eat_until_unescaped(q) do { \
int c; \
while ((c = input(yyscanner)) != EOF && c) { \
if (c == '\n') \
break; \
if (c == '\\') { \
c = input(yyscanner); \
if (c == EOF || !c) \
return 0; \
} else if (c == q) \
break; \
} \
if (c == EOF || !c) \
return 0; \
} while (0)
%}
%option never-interactive yywrap reentrant nounput warn nodefault header-file="lex.linguist_yy.h" extra-type="struct tokenizer_extra *" prefix="linguist_yy"
%x sgml c_comment xml_comment haskell_comment ocaml_comment python_dcomment python_scomment
%%
^#![ \t]*([[:alnum:]_\/]*\/)?env([ \t]+([^ \t=]*=[^ \t]*))*[ \t]+[[:alpha:]_]+ {
const char *off = strrchr(yytext, ' ');
if (!off)
off = yytext;
else
++off;
feed_token(strdup(off), SHEBANG_TOKEN);
eat_until_eol();
return 1;
}
^#![ \t]*[[:alpha:]_\/]+ {
const char *off = strrchr(yytext, '/');
if (!off)
off = yytext;
else
++off;
if (strcmp(off, "env") == 0) {
eat_until_eol();
} else {
feed_token(strdup(off), SHEBANG_TOKEN);
eat_until_eol();
return 1;
}
}
^[ \t]*(\/\/|--|\#|%|\")" ".* { /* nothing */ }
"/*" { BEGIN(c_comment); }
/* See below for xml_comment start. */
"{-" { BEGIN(haskell_comment); }
"(*" { BEGIN(ocaml_comment); }
"\"\"\"" { BEGIN(python_dcomment); }
"'''" { BEGIN(python_scomment); }
<c_comment,xml_comment,haskell_comment,ocaml_comment,python_dcomment,python_scomment>.|\n { /* nothing */ }
<c_comment>"*/" { BEGIN(INITIAL); }
<xml_comment>"-->" { BEGIN(INITIAL); }
<haskell_comment>"-}" { BEGIN(INITIAL); }
<ocaml_comment>"*)" { BEGIN(INITIAL); }
<python_dcomment>"\"\"\"" { BEGIN(INITIAL); }
<python_scomment>"'''" { BEGIN(INITIAL); }
\"\"|'' { /* nothing */ }
\" { eat_until_unescaped('"'); }
' { eat_until_unescaped('\''); }
(0x[0-9a-fA-F]([0-9a-fA-F]|\.)*|[0-9]([0-9]|\.)*)([uU][lL]{0,2}|([eE][-+][0-9]*)?[fFlL]*) { /* nothing */ }
\<[[:alnum:]_!./?-]+ {
if (strcmp(yytext, "<!--") == 0) {
BEGIN(xml_comment);
} else {
feed_token(strdup(yytext), SGML_TOKEN);
BEGIN(sgml);
return 1;
}
}
<sgml>[[:alnum:]_]+=\" { feed_token(strndup(yytext, strlen(yytext) - 1), REGULAR_TOKEN); eat_until_unescaped('"'); return 1; }
<sgml>[[:alnum:]_]+=' { feed_token(strndup(yytext, strlen(yytext) - 1), REGULAR_TOKEN); eat_until_unescaped('\''); return 1; }
<sgml>[[:alnum:]_]+=[[:alnum:]_]* { feed_token(strdup(yytext), REGULAR_TOKEN); *(strchr(yyextra->token, '=') + 1) = 0; return 1; }
<sgml>[[:alnum:]_]+ { feed_token(strdup(yytext), REGULAR_TOKEN); return 1; }
<sgml>\> { BEGIN(INITIAL); }
<sgml>.|\n { /* nothing */ }
;|\{|\}|\(|\)|\[|\] { feed_token(strdup(yytext), REGULAR_TOKEN); return 1; }
[[:alnum:]_.@#/*]+ {
if (strncmp(yytext, "/*", 2) == 0) {
if (strlen(yytext) >= 4 && strcmp(yytext + strlen(yytext) - 2, "*/") == 0) {
/* nothing */
} else {
BEGIN(c_comment);
}
} else {
feed_token(strdup(yytext), REGULAR_TOKEN);
return 1;
}
}
\<\<?|\+|\-|\*|\/|%|&&?|\|\|? { feed_token(strdup(yytext), REGULAR_TOKEN); return 1; }
.|\n { /* nothing */ }
%%

View File

@@ -2,7 +2,7 @@ require File.expand_path('../lib/linguist/version', __FILE__)
Gem::Specification.new do |s|
s.name = 'github-linguist'
s.version = ENV['GEM_VERSION'] || Linguist::VERSION
s.version = Linguist::VERSION
s.summary = "GitHub Language detection"
s.description = 'We use this library at GitHub to detect blob languages, highlight code, ignore binary files, suppress generated files in diffs, and generate language breakdown graphs.'
@@ -10,23 +10,17 @@ Gem::Specification.new do |s|
s.homepage = "https://github.com/github/linguist"
s.license = "MIT"
s.files = Dir['lib/**/*'] + Dir['ext/**/*'] + Dir['grammars/*'] + ['LICENSE']
s.executables = ['linguist', 'git-linguist']
s.extensions = ['ext/linguist/extconf.rb']
s.files = Dir['lib/**/*']
s.executables << 'linguist'
s.add_dependency 'charlock_holmes', '~> 0.7.5'
s.add_dependency 'escape_utils', '~> 1.2.0'
s.add_dependency 'mime-types', '>= 1.19'
s.add_dependency 'rugged', '>= 0.25.1'
s.add_dependency 'charlock_holmes', '~> 0.7.3'
s.add_dependency 'escape_utils', '~> 1.0.1'
s.add_dependency 'mime-types', '~> 1.19'
s.add_dependency 'pygments.rb', '~> 0.6.0'
s.add_dependency 'rugged', '~> 0.21.0'
s.add_development_dependency 'minitest', '>= 5.0'
s.add_development_dependency 'rake-compiler', '~> 0.9'
s.add_development_dependency 'json'
s.add_development_dependency 'mocha'
s.add_development_dependency 'plist', '~>3.1'
s.add_development_dependency 'pry'
s.add_development_dependency 'rake'
s.add_development_dependency 'yajl-ruby'
s.add_development_dependency 'color-proximity', '~> 0.2.1'
s.add_development_dependency 'licensed'
s.add_development_dependency 'licensee', '~> 8.8.0'
end

View File

@@ -1,741 +0,0 @@
https://bitbucket.org/Clams/sublimesystemverilog/get/default.tar.gz:
- source.systemverilog
- source.ucfconstraints
https://svn.edgewall.org/repos/genshi/contrib/textmate/Genshi.tmbundle/Syntaxes/Markup%20Template%20%28XML%29.tmLanguage:
- text.xml.genshi
vendor/grammars/ABNF.tmbundle:
- source.abnf
vendor/grammars/Agda.tmbundle:
- source.agda
vendor/grammars/Alloy.tmbundle:
- source.alloy
vendor/grammars/AutoHotkey:
- source.ahk
vendor/grammars/BrightScript.tmbundle:
- source.brightauthorproject
- source.brightscript
vendor/grammars/ColdFusion:
- source.cfscript
- source.cfscript.cfc
- text.cfml.basic
- text.html.cfm
vendor/grammars/Docker.tmbundle:
- source.dockerfile
vendor/grammars/EBNF.tmbundle:
- source.ebnf
vendor/grammars/Elm/Syntaxes:
- source.elm
- text.html.mediawiki.elm-build-output
- text.html.mediawiki.elm-documentation
vendor/grammars/FreeMarker.tmbundle:
- text.html.ftl
vendor/grammars/G-Code:
- source.LS
- source.MCPOST
- source.MOD
- source.apt
- source.gcode
vendor/grammars/Handlebars:
- text.html.handlebars
vendor/grammars/IDL-Syntax:
- source.webidl
vendor/grammars/Isabelle.tmbundle:
- source.isabelle.root
- source.isabelle.theory
vendor/grammars/JSyntax:
- source.j
vendor/grammars/Lean.tmbundle:
- source.lean
vendor/grammars/LiveScript.tmbundle:
- source.livescript
vendor/grammars/MATLAB-Language-grammar:
- source.matlab
vendor/grammars/MQL5-sublime:
- source.mql5
vendor/grammars/MagicPython:
- source.python
- source.regexp.python
- text.python.console
- text.python.traceback
vendor/grammars/Modelica:
- source.modelica
vendor/grammars/NSIS:
- source.nsis
vendor/grammars/NimLime:
- source.nim
- source.nim_filter
- source.nimcfg
vendor/grammars/PHP-Twig.tmbundle:
- text.html.twig
vendor/grammars/PogoScript.tmbundle:
- source.pogoscript
vendor/grammars/RDoc.tmbundle:
- text.rdoc
vendor/grammars/Racket:
- source.racket
vendor/grammars/SCSS.tmbundle:
- source.scss
vendor/grammars/SMT.tmbundle:
- source.smt
vendor/grammars/Scalate.tmbundle:
- source.scaml
- text.html.ssp
vendor/grammars/Slash.tmbundle:
- text.html.slash
vendor/grammars/Stata.tmbundle:
- source.mata
- source.stata
vendor/grammars/Stylus:
- source.stylus
vendor/grammars/Sublime-Coq:
- source.coq
vendor/grammars/Sublime-HTTP:
- source.httpspec
vendor/grammars/Sublime-Lasso:
- file.lasso
vendor/grammars/Sublime-Loom:
- source.loomscript
vendor/grammars/Sublime-Modula-2:
- source.modula2
vendor/grammars/Sublime-Nit:
- source.nit
vendor/grammars/Sublime-Pep8/:
- source.pep8
vendor/grammars/Sublime-QML:
- source.qml
vendor/grammars/Sublime-REBOL:
- source.rebol
vendor/grammars/Sublime-Red:
- source.red
vendor/grammars/Sublime-SQF-Language:
- source.sqf
vendor/grammars/Sublime-Text-2-OpenEdge-ABL:
- source.abl
- text.html.abl
vendor/grammars/SublimeBrainfuck:
- source.bf
vendor/grammars/SublimeClarion:
- source.clarion
vendor/grammars/SublimeEthereum:
- source.solidity
vendor/grammars/SublimeGDB/:
- source.disasm
- source.gdb
- source.gdb.session
- source.gdbregs
vendor/grammars/SublimePapyrus:
- source.papyrus.skyrim
vendor/grammars/SublimePuppet:
- source.puppet
vendor/grammars/SublimeXtend:
- source.xtend
vendor/grammars/Syntax-highlighting-for-PostCSS:
- source.css.postcss.sugarss
- source.postcss
vendor/grammars/TLA:
- source.tla
vendor/grammars/TXL:
- source.txl
vendor/grammars/Terraform.tmLanguage:
- source.terraform
vendor/grammars/Textmate-Gosu-Bundle:
- source.gosu.2
vendor/grammars/TypeScript-TmLanguage:
- source.ts
- source.tsx
- text.error-list
- text.find-refs
vendor/grammars/UrWeb-Language-Definition:
- source.ur
vendor/grammars/VBDotNetSyntax:
- source.vbnet
vendor/grammars/Vala-TMBundle:
- source.vala
vendor/grammars/X10:
- source.x10
vendor/grammars/abap.tmbundle:
- source.abap
vendor/grammars/actionscript3-tmbundle/:
- source.actionscript.3
- text.html.asdoc
- text.xml.flex-config
vendor/grammars/ada.tmbundle:
- source.ada
vendor/grammars/ampl:
- source.ampl
vendor/grammars/ant.tmbundle:
- text.xml.ant
vendor/grammars/antlr.tmbundle:
- source.antlr
vendor/grammars/apache.tmbundle:
- source.apache-config
- source.apache-config.mod_perl
vendor/grammars/api-blueprint-sublime-plugin:
- text.html.markdown.source.gfm.apib
- text.html.markdown.source.gfm.mson
vendor/grammars/applescript.tmbundle:
- source.applescript
vendor/grammars/asciidoc.tmbundle:
- text.html.asciidoc
vendor/grammars/asp.tmbundle:
- source.asp
- text.html.asp
vendor/grammars/assembly:
- objdump.x86asm
- source.x86asm
vendor/grammars/atom-fsharp:
- source.fsharp
- source.fsharp.fsi
- source.fsharp.fsl
- source.fsharp.fsx
vendor/grammars/atom-language-1c-bsl:
- source.bsl
- source.sdbl
vendor/grammars/atom-language-clean:
- source.clean
- text.restructuredtext.clean
vendor/grammars/atom-language-julia:
- source.julia
- source.julia.console
vendor/grammars/atom-language-nextflow:
- source.nextflow
- source.nextflow-groovy
vendor/grammars/atom-language-p4:
- source.p4
vendor/grammars/atom-language-perl6:
- source.meta-info
- source.perl6fe
- source.quoting.perl6fe
- source.regexp.perl6fe
vendor/grammars/atom-language-purescript:
- source.purescript
vendor/grammars/atom-language-rust:
- source.rust
vendor/grammars/atom-language-srt:
- text.srt
vendor/grammars/atom-language-stan:
- source.stan
vendor/grammars/atom-salt:
- source.python.salt
- source.yaml.salt
vendor/grammars/atomic-dreams:
- source.dm
- source.dmf
vendor/grammars/ats:
- source.ats
vendor/grammars/awk-sublime:
- source.awk
vendor/grammars/bison.tmbundle:
- source.bison
vendor/grammars/blitzmax:
- source.blitzmax
vendor/grammars/boo:
- source.boo
vendor/grammars/bro-sublime:
- source.bro
vendor/grammars/c.tmbundle:
- source.c
- source.c++
- source.c.platform
vendor/grammars/capnproto.tmbundle:
- source.capnp
vendor/grammars/carto-atom:
- source.css.mss
vendor/grammars/ceylon-sublimetext:
- source.ceylon
vendor/grammars/chapel-tmbundle:
- source.chapel
vendor/grammars/cmake.tmbundle:
- source.cache.cmake
- source.cmake
vendor/grammars/cool-tmbundle:
- source.cool
vendor/grammars/cpp-qt.tmbundle:
- source.c++.qt
- source.qmake
vendor/grammars/creole:
- text.html.creole
vendor/grammars/cucumber-tmbundle:
- source.ruby.rspec.cucumber.steps
- text.gherkin.feature
vendor/grammars/cython:
- source.cython
vendor/grammars/d.tmbundle:
- source.d
vendor/grammars/dartlang:
- source.dart
- source.yaml-ext
vendor/grammars/data-weave-tmLanguage:
- source.data-weave
vendor/grammars/desktop.tmbundle:
- source.desktop
vendor/grammars/diff.tmbundle:
- source.diff
vendor/grammars/dylan.tmbundle:
- source.dylan
- source.lid
- source.makegen
vendor/grammars/ec.tmbundle:
- source.c.ec
vendor/grammars/eiffel.tmbundle:
- source.eiffel
vendor/grammars/ejs-tmbundle:
- text.html.js
vendor/grammars/elixir-tmbundle:
- source.elixir
- text.elixir
- text.html.elixir
vendor/grammars/erlang.tmbundle:
- source.erlang
- text.html.erlang.yaws
vendor/grammars/factor:
- source.factor
- text.html.factor
vendor/grammars/fancy-tmbundle:
- source.fancy
vendor/grammars/fish-tmbundle:
- source.fish
vendor/grammars/forth:
- source.forth
vendor/grammars/fortran.tmbundle:
- source.fortran
- source.fortran.modern
vendor/grammars/gap-tmbundle:
- source.gap
vendor/grammars/gdscript:
- source.gdscript
vendor/grammars/gettext.tmbundle:
- source.po
vendor/grammars/gnuplot-tmbundle:
- source.gnuplot
vendor/grammars/go-tmbundle:
- source.go
vendor/grammars/grace:
- source.grace
vendor/grammars/gradle.tmbundle:
- source.groovy.gradle
vendor/grammars/graphviz.tmbundle:
- source.dot
vendor/grammars/groovy.tmbundle:
- source.groovy
vendor/grammars/haxe-sublime-bundle:
- source.erazor
- source.haxe.2
- source.hss.1
- source.hxml
- source.nmml
vendor/grammars/html.tmbundle:
- text.html.basic
vendor/grammars/idl.tmbundle:
- source.idl
- source.idl-dlm
- text.idl-idldoc
vendor/grammars/idris:
- source.idris
vendor/grammars/ini.tmbundle:
- source.ini
vendor/grammars/io.tmbundle:
- source.io
vendor/grammars/ioke-outdated:
- source.ioke
vendor/grammars/jade-tmbundle:
- source.pyjade
- text.jade
vendor/grammars/jasmin-sublime:
- source.jasmin
vendor/grammars/java.tmbundle:
- source.java
- source.java-properties
- text.html.jsp
- text.junit-test-report
vendor/grammars/javadoc.tmbundle:
- text.html.javadoc
vendor/grammars/javascript-objective-j.tmbundle:
- source.js.objj
vendor/grammars/jflex.tmbundle:
- source.jflex
vendor/grammars/json.tmbundle:
- source.json
vendor/grammars/kotlin-sublime-package:
- source.Kotlin
vendor/grammars/language-agc:
- source.agc
vendor/grammars/language-apl:
- source.apl
vendor/grammars/language-asn1:
- source.asn
vendor/grammars/language-babel:
- source.js.jsx
- source.regexp.babel
vendor/grammars/language-ballerina:
- source.ballerina
vendor/grammars/language-batchfile:
- source.batchfile
vendor/grammars/language-blade:
- text.html.php.blade
vendor/grammars/language-click:
- source.click
vendor/grammars/language-clojure:
- source.clojure
vendor/grammars/language-closure-templates:
- text.html.soy
vendor/grammars/language-coffee-script:
- source.coffee
- source.litcoffee
vendor/grammars/language-crystal:
- source.crystal
- text.html.ecr
vendor/grammars/language-csharp:
- source.cake
- source.cs
- source.csx
- source.nant-build
vendor/grammars/language-csound:
- source.csound
- source.csound-document
- source.csound-score
vendor/grammars/language-css:
- source.css
vendor/grammars/language-cwl:
- source.cwl
vendor/grammars/language-emacs-lisp:
- source.emacs.lisp
vendor/grammars/language-fontforge:
- source.afm
- source.fontforge
- source.opentype
- text.sfd
vendor/grammars/language-gfm:
- source.gfm
vendor/grammars/language-gn:
- source.gn
vendor/grammars/language-graphql:
- source.graphql
vendor/grammars/language-haml:
- text.haml
- text.hamlc
vendor/grammars/language-haskell:
- annotation.liquidhaskell.haskell
- hint.haskell
- hint.message.haskell
- hint.type.haskell
- source.c2hs
- source.cabal
- source.haskell
- source.hsc2hs
- source.hsig
- text.tex.latex.haskell
vendor/grammars/language-inform7:
- source.inform7
vendor/grammars/language-javascript:
- source.js
- source.js.regexp
- source.js.regexp.replacement
- source.jsdoc
vendor/grammars/language-jison:
- source.jison
- source.jisonlex
- source.jisonlex-injection
vendor/grammars/language-jolie:
- source.jolie
vendor/grammars/language-jsoniq:
- source.jq
- source.xq
vendor/grammars/language-less:
- source.css.less
vendor/grammars/language-maxscript:
- source.maxscript
vendor/grammars/language-meson:
- source.meson
vendor/grammars/language-ncl:
- source.ncl
vendor/grammars/language-ninja:
- source.ninja
vendor/grammars/language-pan:
- source.pan
vendor/grammars/language-pcb:
- source.gerber
- source.pcb.board
- source.pcb.schematic
- source.pcb.sexp
vendor/grammars/language-povray:
- source.pov-ray sdl
vendor/grammars/language-reason:
- source.reason
- source.reason.hover.type
vendor/grammars/language-regexp:
- source.regexp
- source.regexp.extended
vendor/grammars/language-renpy:
- source.renpy
vendor/grammars/language-restructuredtext:
- text.restructuredtext
vendor/grammars/language-ring:
- source.ring
vendor/grammars/language-roff:
- source.ditroff
- source.ditroff.desc
- source.ideal
- source.pic
- text.roff
- text.runoff
vendor/grammars/language-rpm-spec:
- source.changelogs.rpm-spec
- source.rpm-spec
vendor/grammars/language-ruby:
- source.ruby
- source.ruby.gemfile
- text.html.erb
vendor/grammars/language-shellscript:
- source.shell
- text.shell-session
vendor/grammars/language-supercollider:
- source.supercollider
vendor/grammars/language-toc-wow:
- source.toc
vendor/grammars/language-turing:
- source.turing
vendor/grammars/language-typelanguage:
- source.tl
vendor/grammars/language-viml:
- source.viml
vendor/grammars/language-wavefront:
- source.wavefront.mtl
- source.wavefront.obj
vendor/grammars/language-webassembly:
- source.webassembly
vendor/grammars/language-xbase:
- source.harbour
vendor/grammars/language-xcompose:
- config.xcompose
vendor/grammars/language-yaml:
- source.yaml
vendor/grammars/language-yang:
- source.yang
vendor/grammars/language-yara:
- source.yara
vendor/grammars/latex.tmbundle:
- text.bibtex
- text.log.latex
- text.tex
- text.tex.latex
- text.tex.latex.beamer
- text.tex.latex.memoir
vendor/grammars/lilypond.tmbundle:
- source.lilypond
vendor/grammars/liquid.tmbundle:
- text.html.liquid
vendor/grammars/lisp.tmbundle:
- source.lisp
vendor/grammars/llvm.tmbundle:
- source.llvm
vendor/grammars/logos:
- source.logos
vendor/grammars/logtalk.tmbundle:
- source.logtalk
vendor/grammars/lua.tmbundle:
- source.lua
vendor/grammars/make.tmbundle:
- source.makefile
vendor/grammars/mako-tmbundle:
- text.html.mako
vendor/grammars/marko-tmbundle:
- text.marko
vendor/grammars/mathematica-tmbundle:
- source.mathematica
vendor/grammars/maven.tmbundle:
- text.xml.pom
vendor/grammars/mediawiki.tmbundle:
- text.html.mediawiki
vendor/grammars/mercury-tmlanguage:
- source.mercury
vendor/grammars/monkey:
- source.monkey
vendor/grammars/moonscript-tmbundle:
- source.moonscript
vendor/grammars/nemerle.tmbundle:
- source.nemerle
vendor/grammars/nesC:
- source.nesc
vendor/grammars/nix:
- source.nix
vendor/grammars/nu.tmbundle:
- source.nu
vendor/grammars/objective-c.tmbundle:
- source.objc
- source.objc++
- source.objc.platform
- source.strings
vendor/grammars/ocaml.tmbundle:
- source.camlp4.ocaml
- source.ocaml
- source.ocamllex
- source.ocamlyacc
vendor/grammars/ooc.tmbundle:
- source.ooc
vendor/grammars/opa.tmbundle:
- source.opa
vendor/grammars/openscad.tmbundle:
- source.scad
vendor/grammars/oz-tmbundle:
- source.oz
vendor/grammars/parrot:
- source.parrot.pir
vendor/grammars/pascal.tmbundle:
- source.pascal
vendor/grammars/pawn-sublime-language:
- source.pawn
vendor/grammars/perl.tmbundle:
- source.perl
- source.perl.6
vendor/grammars/php-smarty.tmbundle:
- text.html.smarty
vendor/grammars/php.tmbundle:
- text.html.php
vendor/grammars/pig-latin:
- source.pig_latin
vendor/grammars/pike-textmate:
- source.pike
vendor/grammars/postscript.tmbundle:
- source.postscript
vendor/grammars/powershell:
- source.powershell
vendor/grammars/processing.tmbundle:
- source.processing
vendor/grammars/protobuf-tmbundle:
- source.protobuf
vendor/grammars/python-django.tmbundle:
- source.python.django
- text.html.django
vendor/grammars/r.tmbundle:
- source.r
- text.tex.latex.rd
vendor/grammars/rascal-syntax-highlighting:
- source.rascal
vendor/grammars/ruby-slim.tmbundle:
- text.slim
vendor/grammars/sas.tmbundle:
- source.SASLog
- source.sas
vendor/grammars/sass-textmate-bundle:
- source.sass
vendor/grammars/scala.tmbundle:
- source.sbt
- source.scala
vendor/grammars/scheme.tmbundle:
- source.scheme
vendor/grammars/scilab.tmbundle:
- source.scilab
vendor/grammars/secondlife-lsl:
- source.lsl
vendor/grammars/shaders-tmLanguage:
- source.hlsl
- source.shaderlab
vendor/grammars/smali-sublime:
- source.smali
vendor/grammars/smalltalk-tmbundle:
- source.smalltalk
vendor/grammars/sourcepawn:
- source.sp
vendor/grammars/sql.tmbundle:
- source.sql
vendor/grammars/squirrel-language:
- source.nut
vendor/grammars/st2-zonefile:
- text.zone_file
vendor/grammars/standard-ml.tmbundle:
- source.cm
- source.ml
vendor/grammars/sublime-MuPAD:
- source.mupad
vendor/grammars/sublime-angelscript:
- source.angelscript
vendor/grammars/sublime-aspectj:
- source.aspectj
vendor/grammars/sublime-autoit:
- source.autoit
vendor/grammars/sublime-befunge:
- source.befunge
vendor/grammars/sublime-bsv:
- source.bsv
vendor/grammars/sublime-cirru:
- source.cirru
vendor/grammars/sublime-clips:
- source.clips
vendor/grammars/sublime-fantom:
- source.fan
vendor/grammars/sublime-glsl:
- source.essl
- source.glsl
vendor/grammars/sublime-golo:
- source.golo
vendor/grammars/sublime-mask:
- source.mask
vendor/grammars/sublime-nearley:
- source.ne
vendor/grammars/sublime-netlinx:
- source.netlinx
- source.netlinx.erb
vendor/grammars/sublime-nginx:
- source.nginx
vendor/grammars/sublime-opal:
- source.opal
- source.opalsysdefs
vendor/grammars/sublime-pony:
- source.pony
vendor/grammars/sublime-rexx:
- source.rexx
vendor/grammars/sublime-robot-plugin:
- text.robot
vendor/grammars/sublime-shen:
- source.shen
vendor/grammars/sublime-spintools:
- source.regexp.spin
- source.spin
vendor/grammars/sublime-tea:
- source.tea
vendor/grammars/sublime-terra:
- source.terra
vendor/grammars/sublime-text-ox:
- source.ox
vendor/grammars/sublime-varnish:
- source.varnish.vcl
vendor/grammars/sublime_cobol:
- source.acucobol
- source.cobol
- source.jcl
- source.opencobol
vendor/grammars/sublimeassembly:
- source.assembly
vendor/grammars/sublimeprolog:
- source.prolog
- source.prolog.eclipse
vendor/grammars/sublimetext-cuda-cpp:
- source.cuda-c++
vendor/grammars/swift.tmbundle:
- source.swift
vendor/grammars/tcl.tmbundle:
- source.tcl
- text.html.tcl
vendor/grammars/thrift.tmbundle:
- source.thrift
vendor/grammars/toml.tmbundle:
- source.toml
vendor/grammars/turtle.tmbundle:
- source.sparql
- source.turtle
vendor/grammars/verilog.tmbundle:
- source.verilog
vendor/grammars/vhdl:
- source.vhdl
vendor/grammars/vue-syntax-highlight:
- text.html.vue
vendor/grammars/wdl-sublime-syntax-highlighter:
- source.wdl
vendor/grammars/xc.tmbundle:
- source.xc
vendor/grammars/xml.tmbundle:
- text.xml
- text.xml.xsl
vendor/grammars/zephir-sublime:
- source.php.zephir

View File

@@ -1,100 +1,7 @@
require 'linguist/blob_helper'
require 'linguist/generated'
require 'linguist/grammars'
require 'linguist/heuristics'
require 'linguist/language'
require 'linguist/repository'
require 'linguist/samples'
require 'linguist/shebang'
require 'linguist/version'
class << Linguist
# Public: Detects the Language of the blob.
#
# blob - an object that includes the Linguist `BlobHelper` interface;
# see Linguist::LazyBlob and Linguist::FileBlob for examples
#
# Returns Language or nil.
def detect(blob, allow_empty: false)
# Bail early if the blob is binary or empty.
return nil if blob.likely_binary? || blob.binary? || (!allow_empty && blob.empty?)
Linguist.instrument("linguist.detection", :blob => blob) do
# Call each strategy until one candidate is returned.
languages = []
returning_strategy = nil
STRATEGIES.each do |strategy|
returning_strategy = strategy
candidates = Linguist.instrument("linguist.strategy", :blob => blob, :strategy => strategy, :candidates => languages) do
strategy.call(blob, languages)
end
if candidates.size == 1
languages = candidates
break
elsif candidates.size > 1
# More than one candidate was found, pass them to the next strategy.
languages = candidates
else
# No candidates, try the next strategy
end
end
Linguist.instrument("linguist.detected", :blob => blob, :strategy => returning_strategy, :language => languages.first)
languages.first
end
end
# Internal: The strategies used to detect the language of a file.
#
# A strategy is an object that has a `.call` method that takes two arguments:
#
# blob - An object that quacks like a blob.
# languages - An Array of candidate Language objects that were returned by the
#              previous strategy.
#
# A strategy should return an Array of Language candidates.
#
# Strategies are called in turn until a single Language is returned.
STRATEGIES = [
Linguist::Strategy::Modeline,
Linguist::Strategy::Filename,
Linguist::Shebang,
Linguist::Strategy::Extension,
Linguist::Heuristics,
Linguist::Classifier
]
# Public: Set an instrumenter.
#
# class CustomInstrumenter
# def instrument(name, payload = {})
# warn "Instrumenting #{name}: #{payload[:blob]}"
# end
# end
#
# Linguist.instrumenter = CustomInstrumenter.new
#
# The instrumenter must conform to the `ActiveSupport::Notifications`
# interface, which defines `#instrument` and accepts:
#
# name - the String name of the event (e.g. "linguist.detected")
# payload - a Hash of the exception context.
attr_accessor :instrumenter
# Internal: Perform instrumentation on a block
#
# Linguist.instrument("linguist.dosomething", :blob => blob) do
# # logic to instrument here.
# end
#
def instrument(*args, &bk)
if instrumenter
instrumenter.instrument(*args, &bk)
elsif block_given?
yield
end
end
end
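For orientation, a minimal sketch of driving the strategy chain above through the removed Linguist.detect API (the path is hypothetical):

require 'linguist'

blob = Linguist::FileBlob.new("lib/linguist.rb")   # hypothetical path on disk
language = Linguist.detect(blob)
puts language ? language.name : "unknown"          # strategies run until one candidate remains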

View File

@@ -1,82 +0,0 @@
require 'linguist/blob_helper'
module Linguist
# A Blob is a wrapper around the content of a file to make it quack
# like a Grit::Blob. It provides the basic interface: `name`,
# `data`, `path` and `size`.
class Blob
include BlobHelper
# Public: Initialize a new Blob.
#
# path    - A path String (does not necessarily exist on the file system).
# content - Content of the file.
# symlink - Whether the file is a symlink.
#
# Returns a Blob.
def initialize(path, content, symlink: false)
@path = path
@content = content
@symlink = symlink
end
# Public: Filename
#
# Examples
#
# Blob.new("/path/to/linguist/lib/linguist.rb", "").path
# # => "/path/to/linguist/lib/linguist.rb"
#
# Returns a String
attr_reader :path
# Public: File name
#
# Returns a String
def name
File.basename(@path)
end
# Public: File contents.
#
# Returns a String.
def data
@content
end
# Public: Get byte size
#
# Returns an Integer.
def size
@content.bytesize
end
# Public: Get file extension.
#
# Returns a String.
def extension
extensions.last || ""
end
# Public: Return an array of the file extensions
#
# >> Linguist::Blob.new("app/views/things/index.html.erb").extensions
# => [".html.erb", ".erb"]
#
# Returns an Array
def extensions
_, *segments = name.downcase.split(".", -1)
segments.map.with_index do |segment, index|
"." + segments[index..-1].join(".")
end
end
# Public: Is this a symlink?
#
# Returns true or false.
def symlink?
@symlink
end
end
end
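A short usage sketch of the in-memory Blob wrapper above; the path need not exist on disk because the content is supplied directly:

blob = Linguist::Blob.new("app/views/things/index.html.erb", "<h1><%= @title %></h1>")
blob.name        # => "index.html.erb"
blob.extensions  # => [".html.erb", ".erb"]
blob.size        # => 22 (byte size of the content)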

View File

@@ -2,11 +2,12 @@ require 'linguist/generated'
require 'charlock_holmes'
require 'escape_utils'
require 'mime/types'
require 'pygments'
require 'yaml'
module Linguist
# DEPRECATED Avoid mixing into Blob classes. Prefer functional interfaces
# like `Linguist.detect` over `Blob#language`. Functions are much easier to
# like `Language.detect` over `Blob#language`. Functions are much easier to
# cache and compose.
#
# Avoid adding additional bloat to this module.
@@ -99,7 +100,7 @@ module Linguist
elsif name.nil?
"attachment"
else
"attachment; filename=#{EscapeUtils.escape_url(name)}"
"attachment; filename=#{EscapeUtils.escape_url(File.basename(name))}"
end
end
@@ -146,13 +147,6 @@ module Linguist
end
end
# Public: Is the blob empty?
#
# Return true or false
def empty?
data.nil? || data == ""
end
# Public: Is the blob text?
#
# Return true or false
@@ -199,6 +193,10 @@ module Linguist
# Public: Is the blob safe to colorize?
#
# We use Pygments for syntax highlighting blobs. Pygments
# can be too slow for very large blobs or for certain
# corner-case blobs.
#
# Return true or false
def safe_to_colorize?
!large? && text? && !high_ratio_of_long_lines?
@@ -206,6 +204,9 @@ module Linguist
# Internal: Does the blob have a ratio of long lines?
#
# These types of files are usually going to make Pygments.rb
# angry if we try to colorize them.
#
# Return true or false
def high_ratio_of_long_lines?
return false if loc == 0
@@ -233,22 +234,7 @@ module Linguist
#
# Return true or false
def vendored?
path =~ VendoredRegexp ? true : false
end
documentation_paths = YAML.load_file(File.expand_path("../documentation.yml", __FILE__))
DocumentationRegexp = Regexp.new(documentation_paths.join('|'))
# Public: Is the blob in a documentation directory?
#
# Documentation files are ignored by language statistics.
#
# See "documentation.yml" for a list of documentation conventions that match
# this pattern.
#
# Return true or false
def documentation?
path =~ DocumentationRegexp ? true : false
name =~ VendoredRegexp ? true : false
end
# Public: Get each line of data
@@ -275,8 +261,10 @@ module Linguist
# also--importantly--without having to duplicate many (potentially
# large) strings.
begin
data.split(encoded_newlines_re, -1)
encoded_newlines = ["\r\n", "\r", "\n"].
map { |nl| nl.encode(ruby_encoding, "ASCII-8BIT").force_encoding(data.encoding) }
data.split(Regexp.union(encoded_newlines), -1)
rescue Encoding::ConverterNotFoundError
# The data is not splittable in the detected encoding. Assume it's
# one big line.
@@ -287,51 +275,6 @@ module Linguist
end
end
def encoded_newlines_re
@encoded_newlines_re ||= Regexp.union(["\r\n", "\r", "\n"].
map { |nl| nl.encode(ruby_encoding, "ASCII-8BIT").force_encoding(data.encoding) })
end
def first_lines(n)
return lines[0...n] if defined? @lines
return [] unless viewable? && data
i, c = 0, 0
while c < n && j = data.index(encoded_newlines_re, i)
i = j + $&.length
c += 1
end
data[0...i].split(encoded_newlines_re, -1)
end
def last_lines(n)
if defined? @lines
if n >= @lines.length
@lines
else
lines[-n..-1]
end
end
return [] unless viewable? && data
no_eol = true
i, c = data.length, 0
k = i
while c < n && j = data.rindex(encoded_newlines_re, i - 1)
if c == 0 && j + $&.length == i
no_eol = false
n += 1
end
i = j
k = j + $&.length
c += 1
end
r = data[k..-1].split(encoded_newlines_re, -1)
r.pop if !no_eol
r
end
# Public: Get number of lines of code
#
# Requires Blob#data
@@ -359,7 +302,7 @@ module Linguist
#
# Return true or false
def generated?
@_generated ||= Generated.generated?(path, lambda { data })
@_generated ||= Generated.generated?(name, lambda { data })
end
# Public: Detects the Language of the blob.
@@ -368,25 +311,26 @@ module Linguist
#
# Returns a Language or nil if none is detected
def language
@language ||= Linguist.detect(self)
@language ||= Language.detect(self)
end
# Internal: Get the TextMate compatible scope for the blob
def tm_scope
language && language.tm_scope
# Internal: Get the lexer of the blob.
#
# Returns a Lexer.
def lexer
language ? language.lexer : Pygments::Lexer.find_by_name('Text only')
end
DETECTABLE_TYPES = [:programming, :markup].freeze
# Internal: Should this blob be included in repository language statistics?
def include_in_language_stats?
!vendored? &&
!documentation? &&
!generated? &&
language && ( defined?(detectable?) && !detectable?.nil? ?
detectable? :
DETECTABLE_TYPES.include?(language.type)
)
# Public: Highlight syntax of blob
#
# options - A Hash of options (defaults to {})
#
# Returns html String
def colorize(options = {})
return unless safe_to_colorize?
options[:options] ||= {}
options[:options][:encoding] ||= encoding
lexer.highlight(data, options)
end
end
end
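Taken together, the predicates above gate whether a file counts toward language statistics. A rough sketch on the newer (removed) side of this diff, with a hypothetical path:

blob = Linguist::FileBlob.new("docs/README.md")   # hypothetical documentation file
blob.documentation?             # => true, via the docs/ and README patterns in documentation.yml
blob.vendored?                  # => false
blob.include_in_language_stats? # => false, because documentation files are excluded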

View File

@@ -3,27 +3,6 @@ require 'linguist/tokenizer'
module Linguist
# Language bayesian classifier.
class Classifier
CLASSIFIER_CONSIDER_BYTES = 50 * 1024
# Public: Use the classifier to detect language of the blob.
#
# blob - An object that quacks like a blob.
# possible_languages - Array of Language objects
#
# Examples
#
# Classifier.call(FileBlob.new("path/to/file"), [
# Language["Ruby"], Language["Python"]
# ])
#
# Returns an Array of Language objects, most probable first.
def self.call(blob, possible_languages)
language_names = possible_languages.map(&:name)
classify(Samples.cache, blob.data[0...CLASSIFIER_CONSIDER_BYTES], language_names).map do |name, _|
Language[name] # Return the actual Language objects
end
end
# Public: Train classifier that data is a certain language.
#
# db - Hash classifier database object
@@ -97,7 +76,7 @@ module Linguist
# Returns sorted Array of result pairs. Each pair contains the
# String language name and a Float score.
def classify(tokens, languages)
return [] if tokens.nil? || languages.empty?
return [] if tokens.nil?
tokens = Tokenizer.tokenize(tokens) if tokens.is_a?(String)
scores = {}
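For reference, a minimal sketch of training and querying the classifier API shown above, using a toy in-memory database rather than the shipped samples cache:

db = {}
Linguist::Classifier.train!(db, "Ruby",   "def foo\n  puts 'hi'\nend\n")
Linguist::Classifier.train!(db, "Python", "def foo():\n    print('hi')\n")
Linguist::Classifier.classify(db, "def bar\n  42\nend\n", ["Ruby", "Python"])
# => [["Ruby", score], ["Python", score]], most probable language first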

View File

@@ -1,31 +0,0 @@
# Documentation files and directories are excluded from language
# statistics.
#
# Lines in this file are Regexps that are matched against the file
# pathname.
#
# Please add additional test coverage to
# `test/test_blob.rb#test_documentation` if you make any changes.
## Documentation directories ##
- ^[Dd]ocs?/
- (^|/)[Dd]ocumentation/
- (^|/)[Jj]avadoc/
- ^[Mm]an/
- ^[Ee]xamples/
- ^[Dd]emos?/
## Documentation files ##
- (^|/)CHANGE(S|LOG)?(\.|$)
- (^|/)CONTRIBUTING(\.|$)
- (^|/)COPYING(\.|$)
- (^|/)INSTALL(\.|$)
- (^|/)LICEN[CS]E(\.|$)
- (^|/)[Ll]icen[cs]e(\.|$)
- (^|/)README(\.|$)
- (^|/)[Rr]eadme(\.|$)
# Samples folders
- ^[Ss]amples?/
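These patterns are compiled into a single regexp on the Ruby side, mirroring the DocumentationRegexp construction shown earlier in blob_helper.rb. A sketch, assuming the list above is saved locally as documentation.yml:

require 'yaml'

documentation_paths = YAML.load_file("documentation.yml")
documentation_regexp = Regexp.new(documentation_paths.join('|'))
"docs/intro.md"   =~ documentation_regexp   # => 0   (matches ^[Dd]ocs?/)
"lib/linguist.rb" =~ documentation_regexp   # => nil (not documentation)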

View File

@@ -1,11 +1,10 @@
require 'linguist/blob_helper'
require 'linguist/blob'
module Linguist
# A FileBlob is a wrapper around a File object to make it quack
# like a Grit::Blob. It provides the basic interface: `name`,
# `data`, `path` and `size`.
class FileBlob < Blob
# `data`, and `size`.
class FileBlob
include BlobHelper
# Public: Initialize a new FileBlob from a path
@@ -15,34 +14,58 @@ module Linguist
#
# Returns a FileBlob.
def initialize(path, base_path = nil)
@fullpath = path
@path = base_path ? path.sub("#{base_path}/", '') : path
@path = path
@name = base_path ? path.sub("#{base_path}/", '') : path
end
# Public: Filename
#
# Examples
#
# FileBlob.new("/path/to/linguist/lib/linguist.rb").name
# # => "/path/to/linguist/lib/linguist.rb"
#
# FileBlob.new("/path/to/linguist/lib/linguist.rb",
# "/path/to/linguist").name
# # => "lib/linguist.rb"
#
# Returns a String
attr_reader :name
# Public: Read file permissions
#
# Returns a String like '100644'
def mode
@mode ||= File.stat(@fullpath).mode.to_s(8)
end
def symlink?
return @symlink if defined? @symlink
@symlink = (File.symlink?(@fullpath) rescue false)
File.stat(@path).mode.to_s(8)
end
# Public: Read file contents.
#
# Returns a String.
def data
@data ||= File.read(@fullpath)
File.read(@path)
end
# Public: Get byte size
#
# Returns an Integer.
def size
@size ||= File.size(@fullpath)
File.size(@path)
end
# Public: Get file extension.
#
# Returns a String.
def extension
# File.extname returns an empty string if the filename is itself an extension.
extension = File.extname(name)
basename = File.basename(name)
# Checks if the filename is an extension.
if extension.empty? && basename[0] == "."
basename
else
extension
end
end
end
end
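A sketch following the docstring above: base_path only trims the reported name, while data, size and mode still come from the full path (paths are hypothetical and assumed to exist):

blob = Linguist::FileBlob.new("/path/to/linguist/lib/linguist.rb", "/path/to/linguist")
blob.name       # => "lib/linguist.rb"
blob.extension  # => ".rb"
blob.mode       # => "100644" (or similar, from File.stat)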

View File

@@ -3,7 +3,7 @@ module Linguist
# Public: Is the blob a generated file?
#
# name - String filename
# data - String blob data. A block also may be passed in for lazy
# data - String blob data. A block also maybe passed in for lazy
# loading. This behavior is deprecated and you should always
# pass in a String.
#
@@ -51,45 +51,25 @@ module Linguist
#
# Return true or false
def generated?
xcode_file? ||
cocoapods? ||
carthage_build? ||
generated_net_designer_file? ||
generated_net_specflow_feature_file? ||
composer_lock? ||
node_modules? ||
go_vendor? ||
npm_shrinkwrap_or_package_lock? ||
godeps? ||
generated_by_zephir? ||
minified_files? ||
has_source_map? ||
source_map? ||
compiled_coffeescript? ||
generated_parser? ||
generated_net_docfile? ||
generated_postscript? ||
compiled_cython_file? ||
generated_go? ||
generated_protocol_buffer? ||
generated_javascript_protocol_buffer? ||
generated_apache_thrift? ||
generated_jni_header? ||
vcr_cassette? ||
generated_module? ||
generated_unity3d_meta? ||
generated_racc? ||
generated_jflex? ||
generated_grammarkit? ||
generated_roxygen2? ||
generated_jison? ||
generated_yarn_lock? ||
generated_grpc_cpp?
name == 'Gemfile.lock' ||
minified_files? ||
compiled_coffeescript? ||
xcode_file? ||
generated_parser? ||
generated_net_docfile? ||
generated_net_designer_file? ||
generated_postscript? ||
generated_protocol_buffer? ||
generated_jni_header? ||
composer_lock? ||
node_modules? ||
vcr_cassette? ||
generated_by_zephir?
end
# Internal: Is the blob an Xcode file?
#
# Generated if the file extension is an Xcode
# Generated if the file extension is an Xcode
# file extension.
#
# Returns true or false.
@@ -97,20 +77,6 @@ module Linguist
['.nib', '.xcworkspacedata', '.xcuserstate'].include?(extname)
end
# Internal: Is the blob part of Pods/, which contains dependencies not meant for humans in pull requests.
#
# Returns true or false.
def cocoapods?
!!name.match(/(^Pods|\/Pods)\//)
end
# Internal: Is the blob part of Carthage/Build/, which contains dependencies not meant for humans in pull requests.
#
# Returns true or false.
def carthage_build?
!!name.match(/(^|\/)Carthage\/Build\//)
end
# Internal: Is the blob minified files?
#
# Consider a file minified if the average line length is
@@ -128,35 +94,6 @@ module Linguist
end
end
# Internal: Does the blob contain a source map reference?
#
# We assume that if one of the last 2 lines starts with a source map
# reference, then the current file was generated from other files.
#
# We use the last 2 lines because the last line might be empty.
#
# We only handle JavaScript, no CSS support yet.
#
# Returns true or false.
def has_source_map?
return false unless extname.downcase == '.js'
lines.last(2).any? { |line| line.start_with?('//# sourceMappingURL') }
end
# Internal: Is the blob a generated source map?
#
# Source Maps usually have .css.map or .js.map extensions. In case they
# are not following the name convention, detect them based on the content.
#
# Returns true or false.
def source_map?
return false unless extname.downcase == '.map'
name =~ /(\.css|\.js)\.map$/i || # Name convention
lines[0] =~ /^{"version":\d+,/ || # Revision 2 and later begin with the version number
lines[0] =~ /^\/\*\* Begin line maps\. \*\*\/{/ # Revision 1 begins with a magic comment
end
# Internal: Is the blob of JS generated by CoffeeScript?
#
# CoffeeScript is meant to output JS that would be difficult to
@@ -225,17 +162,6 @@ module Linguist
name.downcase =~ /\.designer\.cs$/
end
# Internal: Is this a codegen file for Specflow feature file?
#
# Visual Studio's SpecFlow extension generates *.feature.cs files
# from *.feature files, they are not meant to be consumed by humans.
# Let's hide them.
#
# Returns true or false
def generated_net_specflow_feature_file?
name.downcase =~ /\.feature\.cs$/
end
# Internal: Is the blob of JS a parser generated by PEG.js?
#
# PEG.js-generated parsers are not meant to be consumed by humans.
@@ -260,11 +186,7 @@ module Linguist
#
# Returns true or false.
def generated_postscript?
return false unless ['.ps', '.eps', '.pfa'].include? extname
# Type 1 and Type 42 fonts converted to PostScript are stored as hex-encoded byte streams; these
# streams are always preceded by the `eexec` operator (if Type 1), or the `/sfnts` key (if Type 42).
return true if data =~ /(\n|\r\n|\r)\s*(?:currentfile eexec\s+|\/sfnts\s+\[\1<)\h{8,}\1/
return false unless ['.ps', '.eps'].include? extname
# We analyze the "%%Creator:" comment, which contains the author/generator
# of the file. If there is one, it should be in one of the first few lines.
@@ -274,55 +196,23 @@ module Linguist
# Most generators write their version number, while human authors' or companies'
# names don't contain numbers. So look if the line contains digits. Also
# look for some special cases without version numbers.
return true if creator =~ /[0-9]|draw|mpage|ImageMagick|inkscape|MATLAB/ ||
creator =~ /PCBNEW|pnmtops|\(Unknown\)|Serif Affinity|Filterimage -tops/
# EAGLE doesn't include a version number when it generates PostScript.
# However, it does prepend its name to the document's "%%Title" field.
!!creator.include?("EAGLE") and lines[0..4].find {|line| line =~ /^%%Title: EAGLE Drawing /}
return creator =~ /[0-9]/ ||
creator.include?("mpage") ||
creator.include?("draw") ||
creator.include?("ImageMagick")
end
def generated_go?
return false unless extname == '.go'
return false unless lines.count > 1
return lines[0].include?("Code generated by")
end
PROTOBUF_EXTENSIONS = ['.py', '.java', '.h', '.cc', '.cpp']
# Internal: Is the blob a C++, Java or Python source file generated by the
# Protocol Buffer compiler?
#
# Returns true or false.
def generated_protocol_buffer?
return false unless PROTOBUF_EXTENSIONS.include?(extname)
return false unless ['.py', '.java', '.h', '.cc', '.cpp'].include?(extname)
return false unless lines.count > 1
return lines[0].include?("Generated by the protocol buffer compiler. DO NOT EDIT!")
end
# Internal: Is the blob a Javascript source file generated by the
# Protocol Buffer compiler?
#
# Returns true of false.
def generated_javascript_protocol_buffer?
return false unless extname == ".js"
return false unless lines.count > 6
return lines[5].include?("GENERATED CODE -- DO NOT EDIT!")
end
APACHE_THRIFT_EXTENSIONS = ['.rb', '.py', '.go', '.js', '.m', '.java', '.h', '.cc', '.cpp', '.php']
# Internal: Is the blob generated by Apache Thrift compiler?
#
# Returns true or false
def generated_apache_thrift?
return false unless APACHE_THRIFT_EXTENSIONS.include?(extname)
return lines.first(6).any? { |l| l.include?("Autogenerated by Thrift Compiler") }
end
# Internal: Is the blob a C/C++ header generated by the Java JNI tool javah?
#
# Returns true or false.
@@ -341,29 +231,6 @@ module Linguist
!!name.match(/node_modules\//)
end
# Internal: Is the blob part of the Go vendor/ tree,
# not meant for humans in pull requests.
#
# Returns true or false.
def go_vendor?
!!name.match(/vendor\/((?!-)[-0-9A-Za-z]+(?<!-)\.)+(com|edu|gov|in|me|net|org|fm|io)/)
end
# Internal: Is the blob a generated npm shrinkwrap or package lock file?
#
# Returns true or false.
def npm_shrinkwrap_or_package_lock?
name.match(/npm-shrinkwrap\.json/) || name.match(/package-lock\.json/)
end
# Internal: Is the blob part of Godeps/,
# which are not meant for humans in pull requests.
#
# Returns true or false.
def godeps?
!!name.match(/Godeps\//)
end
# Internal: Is the blob a generated php composer lock file?
#
# Returns true or false.
@@ -371,7 +238,7 @@ module Linguist
!!name.match(/composer\.lock/)
end
# Internal: Is the blob generated by Zephir?
# Internal: Is the blob a generated by Zephir
#
# Returns true or false.
def generated_by_zephir?
@@ -387,143 +254,6 @@ module Linguist
# VCR Cassettes have "recorded_with: VCR" in the second last line.
return lines[-2].include?("recorded_with: VCR")
end
# Internal: Is this a compiled C/C++ file from Cython?
#
# Cython-compiled C/C++ files typically contain:
# /* Generated by Cython x.x.x on ... */
# on the first line.
#
# Return true or false
def compiled_cython_file?
return false unless ['.c', '.cpp'].include? extname
return false unless lines.count > 1
return lines[0].include?("Generated by Cython")
end
# Internal: Is it a KiCAD or GFortran module file?
#
# KiCAD module files contain:
# PCBNEW-LibModule-V1 yyyy-mm-dd h:mm:ss XM
# on the first line.
#
# GFortran module files contain:
# GFORTRAN module version 'x' created from
# on the first line.
#
# Return true or false
def generated_module?
return false unless extname == '.mod'
return false unless lines.count > 1
return lines[0].include?("PCBNEW-LibModule-V") ||
lines[0].include?("GFORTRAN module version '")
end
# Internal: Is this a metadata file from Unity3D?
#
# Unity3D Meta files start with:
# fileFormatVersion: X
# guid: XXXXXXXXXXXXXXX
#
# Return true or false
def generated_unity3d_meta?
return false unless extname == '.meta'
return false unless lines.count > 1
return lines[0].include?("fileFormatVersion: ")
end
# Internal: Is this a Racc-generated file?
#
# A Racc-generated file contains:
# # This file is automatically generated by Racc x.y.z
# on the third line.
#
# Return true or false
def generated_racc?
return false unless extname == '.rb'
return false unless lines.count > 2
return lines[2].start_with?("# This file is automatically generated by Racc")
end
# Internal: Is this a JFlex-generated file?
#
# A JFlex-generated file contains:
# /* The following code was generated by JFlex x.y.z on d/at/e ti:me */
# on the first line.
#
# Return true or false
def generated_jflex?
return false unless extname == '.java'
return false unless lines.count > 1
return lines[0].start_with?("/* The following code was generated by JFlex ")
end
# Internal: Is this a GrammarKit-generated file?
#
# A GrammarKit-generated file typically contains:
# // This is a generated file. Not intended for manual editing.
# on the first line. This is not always the case, as it's possible to
# customize the class header.
#
# Return true or false
def generated_grammarkit?
return false unless extname == '.java'
return false unless lines.count > 1
return lines[0].start_with?("// This is a generated file. Not intended for manual editing.")
end
# Internal: Is this a roxygen2-generated file?
#
# A roxygen2-generated file typically contains:
# % Generated by roxygen2: do not edit by hand
# on the first line.
#
# Return true or false
def generated_roxygen2?
return false unless extname == '.Rd'
return false unless lines.count > 1
return lines[0].include?("% Generated by roxygen2: do not edit by hand")
end
# Internal: Is this a Jison-generated file?
#
# Jison-generated parsers typically contain:
# /* parser generated by jison
# on the first line.
#
# Jison-generated lexers typically contain:
# /* generated by jison-lex
# on the first line.
#
# Return true or false
def generated_jison?
return false unless extname == '.js'
return false unless lines.count > 1
return lines[0].start_with?("/* parser generated by jison ") ||
lines[0].start_with?("/* generated by jison-lex ")
end
# Internal: Is the blob a generated yarn lockfile?
#
# Returns true or false.
def generated_yarn_lock?
return false unless name.match(/yarn\.lock/)
return false unless lines.count > 0
return lines[0].include?("# THIS IS AN AUTOGENERATED FILE")
end
# Internal: Is this a protobuf/grpc-generated C++ file?
#
# A generated file contains:
# // Generated by the gRPC C++ plugin.
# on the first line.
#
# Return true or false
def generated_grpc_cpp?
return false unless %w{.cpp .hpp .h .cc}.include? extname
return false unless lines.count > 1
return lines[0].start_with?("// Generated by the gRPC")
end
end
end
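A hedged sketch of invoking the predicate above; composer.lock is flagged by name alone, so the data argument barely matters in the first call:

Linguist::Generated.generated?("composer.lock", "{}")
# => true  (composer_lock? matches on the file name)
Linguist::Generated.generated?("lib/linguist.rb", "puts 'hi'")
# => false (no generator rule matches)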

View File

@@ -1,10 +0,0 @@
module Linguist
module Grammars
# Get the path to the directory containing the language grammar JSON files.
#
# Returns a String.
def self.path
File.expand_path("../../../grammars", __FILE__)
end
end
end
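For completeness, the helper above just resolves the bundled grammars directory:

puts Linguist::Grammars.path   # e.g. ".../linguist/grammars" inside the installed gem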

View File

@@ -1,525 +1,79 @@
module Linguist
# A collection of simple heuristics that can be used to better analyze languages.
class Heuristics
HEURISTICS_CONSIDER_BYTES = 50 * 1024
ACTIVE = true
# Public: Use heuristics to detect language of the blob.
# Public: Given an array of String language names,
# apply heuristics against the given data and return an array
# of matching languages, or nil.
#
# blob - An object that quacks like a blob.
# possible_languages - Array of Language objects
# data - Array of tokens or String data to analyze.
# languages - Array of language name Strings to restrict to.
#
# Examples
#
# Heuristics.call(FileBlob.new("path/to/file"), [
# Language["Ruby"], Language["Python"]
# ])
#
# Returns an Array of languages, or empty if none matched or were inconclusive.
def self.call(blob, candidates)
return [] if blob.symlink?
data = blob.data[0...HEURISTICS_CONSIDER_BYTES]
@heuristics.each do |heuristic|
if heuristic.matches?(blob.name, candidates)
return Array(heuristic.call(data))
# Returns an array of Languages or []
def self.find_by_heuristics(data, languages)
if active?
if languages.all? { |l| ["Perl", "Prolog"].include?(l) }
result = disambiguate_pl(data, languages)
end
if languages.all? { |l| ["ECL", "Prolog"].include?(l) }
result = disambiguate_ecl(data, languages)
end
return result
end
[] # No heuristics matched
end
# Internal: Define a new heuristic.
# .h extensions are ambiguous between C, C++, and Objective-C.
# We want to shortcut look for Objective-C _and_ now C++ too!
#
# exts_and_langs - String names of file extensions and languages to
# disambiguate.
# heuristic - Block which takes data as an argument and returns a Language or nil.
#
# Examples
#
# disambiguate ".pm" do |data|
# if data.include?("use strict")
# Language["Perl"]
# elsif /^[^#]+:-/.match(data)
# Language["Prolog"]
# end
# end
#
def self.disambiguate(*exts_and_langs, &heuristic)
@heuristics << new(exts_and_langs, &heuristic)
# Returns an array of Languages or []
def self.disambiguate_c(data, languages)
matches = []
matches << Language["Objective-C"] if data.include?("@interface")
matches << Language["C++"] if data.include?("#include <cstdint>")
matches
end
# Internal: Array of defined heuristics
@heuristics = []
# Internal
def initialize(exts_and_langs, &heuristic)
@exts_and_langs, @candidates = exts_and_langs.partition {|e| e =~ /\A\./}
@heuristic = heuristic
def self.disambiguate_pl(data, languages)
matches = []
matches << Language["Prolog"] if data.include?(":-")
matches << Language["Perl"] if data.include?("use strict")
matches
end
# Internal: Check if this heuristic matches the candidate filenames or
# languages.
def matches?(filename, candidates)
filename = filename.downcase
candidates = candidates.compact.map(&:name)
@exts_and_langs.any? { |ext| filename.end_with?(ext) } ||
(candidates.any? &&
(@candidates - candidates == [] &&
candidates - @candidates == []))
def self.disambiguate_ecl(data, languages)
matches = []
matches << Language["Prolog"] if data.include?(":-")
matches << Language["ECL"] if data.include?(":=")
matches
end
# Internal: Perform the heuristic
def call(data)
@heuristic.call(data)
end
# Common heuristics
CPlusPlusRegex = Regexp.union(
/^\s*#\s*include <(cstdint|string|vector|map|list|array|bitset|queue|stack|forward_list|unordered_map|unordered_set|(i|o|io)stream)>/,
/^\s*template\s*</,
/^[ \t]*try/,
/^[ \t]*catch\s*\(/,
/^[ \t]*(class|(using[ \t]+)?namespace)\s+\w+/,
/^[ \t]*(private|public|protected):$/,
/std::\w+/)
ObjectiveCRegex = /^\s*(@(interface|class|protocol|property|end|synchronised|selector|implementation)\b|#import\s+.+\.h[">])/
Perl5Regex = /\buse\s+(?:strict\b|v?5\.)/
Perl6Regex = /^\s*(?:use\s+v6\b|\bmodule\b|\b(?:my\s+)?class\b)/
disambiguate ".as" do |data|
if /^\s*(package\s+[a-z0-9_\.]+|import\s+[a-zA-Z0-9_\.]+;|class\s+[A-Za-z0-9_]+\s+extends\s+[A-Za-z0-9_]+)/.match(data)
Language["ActionScript"]
def self.disambiguate_ts(data, languages)
matches = []
if (data.include?("</translation>"))
matches << Language["XML"]
else
Language["AngelScript"]
matches << Language["TypeScript"]
end
matches
end
disambiguate ".asc" do |data|
if /^(----[- ]BEGIN|ssh-(rsa|dss)) /.match(data)
Language["Public Key"]
elsif /^[=-]+(\s|\n)|{{[A-Za-z]/.match(data)
Language["AsciiDoc"]
elsif /^(\/\/.+|((import|export)\s+)?(function|int|float|char)\s+((room|repeatedly|on|game)_)?([A-Za-z]+[A-Za-z_0-9]+)\s*[;\(])/.match(data)
Language["AGS Script"]
end
def self.disambiguate_cl(data, languages)
matches = []
matches << Language["Common Lisp"] if data.include?("(defun ")
matches << Language["OpenCL"] if /\/\* |\/\/ |^\}/.match(data)
matches
end
disambiguate ".bb" do |data|
if /^\s*; /.match(data) || data.include?("End Function")
Language["BlitzBasic"]
elsif /^\s*(# |include|require)\b/.match(data)
Language["BitBake"]
end
def self.disambiguate_r(data, languages)
matches = []
matches << Language["Rebol"] if /\bRebol\b/i.match(data)
matches << Language["R"] if data.include?("<-")
matches
end
disambiguate ".builds" do |data|
if /^(\s*)(<Project|<Import|<Property|<?xml|xmlns)/i.match(data)
Language["XML"]
else
Language["Text"]
end
def self.active?
!!ACTIVE
end
disambiguate ".ch" do |data|
if /^\s*#\s*(if|ifdef|ifndef|define|command|xcommand|translate|xtranslate|include|pragma|undef)\b/i.match(data)
Language["xBase"]
end
end
disambiguate ".cl" do |data|
if /^\s*\((defun|in-package|defpackage) /i.match(data)
Language["Common Lisp"]
elsif /^class/x.match(data)
Language["Cool"]
elsif /\/\* |\/\/ |^\}/.match(data)
Language["OpenCL"]
end
end
disambiguate ".cls" do |data|
if /\\\w+{/.match(data)
Language["TeX"]
end
end
disambiguate ".cs" do |data|
if /![\w\s]+methodsFor: /.match(data)
Language["Smalltalk"]
elsif /^\s*namespace\s*[\w\.]+\s*{/.match(data) || /^\s*\/\//.match(data)
Language["C#"]
end
end
disambiguate ".d" do |data|
# see http://dlang.org/spec/grammar
# ModuleDeclaration | ImportDeclaration | FuncDeclaration | unittest
if /^module\s+[\w.]*\s*;|import\s+[\w\s,.:]*;|\w+\s+\w+\s*\(.*\)(?:\(.*\))?\s*{[^}]*}|unittest\s*(?:\(.*\))?\s*{[^}]*}/.match(data)
Language["D"]
# see http://dtrace.org/guide/chp-prog.html, http://dtrace.org/guide/chp-profile.html, http://dtrace.org/guide/chp-opt.html
elsif /^(\w+:\w*:\w*:\w*|BEGIN|END|provider\s+|(tick|profile)-\w+\s+{[^}]*}|#pragma\s+D\s+(option|attributes|depends_on)\s|#pragma\s+ident\s)/.match(data)
Language["DTrace"]
# path/target : dependency \
# target : \
# : dependency
# path/file.ext1 : some/path/../file.ext2
elsif /([\/\\].*:\s+.*\s\\$|: \\$|^ : |^[\w\s\/\\.]+\w+\.\w+\s*:\s+[\w\s\/\\.]+\w+\.\w+)/.match(data)
Language["Makefile"]
end
end
disambiguate ".ecl" do |data|
if /^[^#]+:-/.match(data)
Language["ECLiPSe"]
elsif data.include?(":=")
Language["ECL"]
end
end
disambiguate ".es" do |data|
if /^\s*(?:%%|main\s*\(.*?\)\s*->)/.match(data)
Language["Erlang"]
elsif /(?:\/\/|("|')use strict\1|export\s+default\s|\/\*.*?\*\/)/m.match(data)
Language["JavaScript"]
end
end
fortran_rx = /^([c*][^abd-z]| (subroutine|program|end|data)\s|\s*!)/i
disambiguate ".f" do |data|
if /^: /.match(data)
Language["Forth"]
elsif data.include?("flowop")
Language["Filebench WML"]
elsif fortran_rx.match(data)
Language["Fortran"]
end
end
disambiguate ".for" do |data|
if /^: /.match(data)
Language["Forth"]
elsif fortran_rx.match(data)
Language["Fortran"]
end
end
disambiguate ".fr" do |data|
if /^(: |also |new-device|previous )/.match(data)
Language["Forth"]
elsif /^\s*(import|module|package|data|type) /.match(data)
Language["Frege"]
else
Language["Text"]
end
end
disambiguate ".fs" do |data|
if /^(: |new-device)/.match(data)
Language["Forth"]
elsif /^\s*(#light|import|let|module|namespace|open|type)/.match(data)
Language["F#"]
elsif /^\s*(#version|precision|uniform|varying|vec[234])/.match(data)
Language["GLSL"]
elsif /#include|#pragma\s+(rs|version)|__attribute__/.match(data)
Language["Filterscript"]
end
end
disambiguate ".gs" do |data|
Language["Gosu"] if /^uses java\./.match(data)
end
disambiguate ".h" do |data|
if ObjectiveCRegex.match(data)
Language["Objective-C"]
elsif CPlusPlusRegex.match(data)
Language["C++"]
end
end
disambiguate ".inc" do |data|
if /^<\?(?:php)?/.match(data)
Language["PHP"]
elsif /^\s*#(declare|local|macro|while)\s/.match(data)
Language["POV-Ray SDL"]
end
end
disambiguate ".l" do |data|
if /\(def(un|macro)\s/.match(data)
Language["Common Lisp"]
elsif /^(%[%{}]xs|<.*>)/.match(data)
Language["Lex"]
elsif /^\.[a-z][a-z](\s|$)/i.match(data)
Language["Roff"]
elsif /^\((de|class|rel|code|data|must)\s/.match(data)
Language["PicoLisp"]
end
end
disambiguate ".ls" do |data|
if /^\s*package\s*[\w\.\/\*\s]*\s*{/.match(data)
Language["LoomScript"]
else
Language["LiveScript"]
end
end
disambiguate ".lsp", ".lisp" do |data|
if /^\s*\((defun|in-package|defpackage) /i.match(data)
Language["Common Lisp"]
elsif /^\s*\(define /.match(data)
Language["NewLisp"]
end
end
disambiguate ".m" do |data|
if ObjectiveCRegex.match(data)
Language["Objective-C"]
elsif data.include?(":- module")
Language["Mercury"]
elsif /^: /.match(data)
Language["MUF"]
elsif /^\s*;/.match(data)
Language["M"]
elsif /\*\)$/.match(data)
Language["Mathematica"]
elsif /^\s*%/.match(data)
Language["Matlab"]
elsif /^\w+\s*:\s*module\s*{/.match(data)
Language["Limbo"]
end
end
disambiguate ".md" do |data|
if /(^[-a-z0-9=#!\*\[|>])|<\//i.match(data) || data.empty?
Language["Markdown"]
elsif /^(;;|\(define_)/.match(data)
Language["GCC Machine Description"]
else
Language["Markdown"]
end
end
disambiguate ".ml" do |data|
if /(^\s*module)|let rec |match\s+(\S+\s)+with/.match(data)
Language["OCaml"]
elsif /=> |case\s+(\S+\s)+of/.match(data)
Language["Standard ML"]
end
end
disambiguate ".mod" do |data|
if data.include?('<!ENTITY ')
Language["XML"]
elsif /^\s*MODULE [\w\.]+;/i.match(data) || /^\s*END [\w\.]+;/i.match(data)
Language["Modula-2"]
else
[Language["Linux Kernel Module"], Language["AMPL"]]
end
end
disambiguate ".ms" do |data|
if /^[.'][a-z][a-z](\s|$)/i.match(data)
Language["Roff"]
elsif /(?<!\S)\.(include|globa?l)\s/.match(data) || /(?<!\/\*)(\A|\n)\s*\.[A-Za-z][_A-Za-z0-9]*:/.match(data.gsub(/"([^\\"]|\\.)*"|'([^\\']|\\.)*'|\\\s*(?:--.*)?\n/, ""))
Language["Unix Assembly"]
else
Language["MAXScript"]
end
end
disambiguate ".n" do |data|
if /^[.']/.match(data)
Language["Roff"]
elsif /^(module|namespace|using)\s/.match(data)
Language["Nemerle"]
end
end
disambiguate ".ncl" do |data|
if data.include?("THE_TITLE")
Language["Text"]
end
end
disambiguate ".nl" do |data|
if /^(b|g)[0-9]+ /.match(data)
Language["NL"]
else
Language["NewLisp"]
end
end
disambiguate ".php" do |data|
if data.include?("<?hh")
Language["Hack"]
elsif /<?[^h]/.match(data)
Language["PHP"]
end
end
disambiguate ".pl" do |data|
if /^[^#]*:-/.match(data)
Language["Prolog"]
elsif Perl5Regex.match(data)
Language["Perl"]
elsif Perl6Regex.match(data)
Language["Perl 6"]
end
end
disambiguate ".pm" do |data|
if Perl5Regex.match(data)
Language["Perl"]
elsif Perl6Regex.match(data)
Language["Perl 6"]
elsif /^\s*\/\* XPM \*\//.match(data)
Language["XPM"]
end
end
disambiguate ".pro" do |data|
if /^[^\[#]+:-/.match(data)
Language["Prolog"]
elsif data.include?("last_client=")
Language["INI"]
elsif data.include?("HEADERS") && data.include?("SOURCES")
Language["QMake"]
elsif /^\s*function[ \w,]+$/.match(data)
Language["IDL"]
end
end
disambiguate ".props" do |data|
if /^(\s*)(<Project|<Import|<Property|<?xml|xmlns)/i.match(data)
Language["XML"]
elsif /\w+\s*=\s*/i.match(data)
Language["INI"]
end
end
disambiguate ".r" do |data|
if /\bRebol\b/i.match(data)
Language["Rebol"]
elsif /<-|^\s*#/.match(data)
Language["R"]
end
end
disambiguate ".rno" do |data|
if /^\.!|^\.end lit(?:eral)?\b/i.match(data)
Language["RUNOFF"]
elsif /^\.\\" /.match(data)
Language["Roff"]
end
end
disambiguate ".rpy" do |data|
if /(^(import|from|class|def)\s)/m.match(data)
Language["Python"]
else
Language["Ren'Py"]
end
end
disambiguate ".rs" do |data|
if /^(use |fn |mod |pub |macro_rules|impl|#!?\[)/.match(data)
Language["Rust"]
elsif /#include|#pragma\s+(rs|version)|__attribute__/.match(data)
Language["RenderScript"]
end
end
disambiguate ".sc" do |data|
if /\^(this|super)\./.match(data) || /^\s*(\+|\*)\s*\w+\s*{/.match(data) || /^\s*~\w+\s*=\./.match(data)
Language["SuperCollider"]
elsif /^\s*import (scala|java)\./.match(data) || /^\s*val\s+\w+\s*=/.match(data) || /^\s*class\b/.match(data)
Language["Scala"]
end
end
disambiguate ".sql" do |data|
if /^\\i\b|AS \$\$|LANGUAGE '?plpgsql'?/i.match(data) || /SECURITY (DEFINER|INVOKER)/i.match(data) || /BEGIN( WORK| TRANSACTION)?;/i.match(data)
#Postgres
Language["PLpgSQL"]
elsif /(alter module)|(language sql)|(begin( NOT)+ atomic)/i.match(data) || /signal SQLSTATE '[0-9]+'/i.match(data)
#IBM db2
Language["SQLPL"]
elsif /\$\$PLSQL_|XMLTYPE|sysdate|systimestamp|\.nextval|connect by|AUTHID (DEFINER|CURRENT_USER)/i.match(data) || /constructor\W+function/i.match(data)
#Oracle
Language["PLSQL"]
elsif ! /begin|boolean|package|exception/i.match(data)
#Generic SQL
Language["SQL"]
end
end
disambiguate ".srt" do |data|
if /^(\d{2}:\d{2}:\d{2},\d{3})\s*(-->)\s*(\d{2}:\d{2}:\d{2},\d{3})$/.match(data)
Language["SubRip Text"]
end
end
disambiguate ".t" do |data|
if Perl5Regex.match(data)
Language["Perl"]
elsif Perl6Regex.match(data)
Language["Perl 6"]
elsif /^\s*%[ \t]+|^\s*var\s+\w+\s*:=\s*\w+/.match(data)
Language["Turing"]
end
end
disambiguate ".toc" do |data|
if /^## |@no-lib-strip@/.match(data)
Language["World of Warcraft Addon Data"]
elsif /^\\(contentsline|defcounter|beamer|boolfalse)/.match(data)
Language["TeX"]
end
end
disambiguate ".ts" do |data|
if /<TS\b/.match(data)
Language["XML"]
else
Language["TypeScript"]
end
end
disambiguate ".tst" do |data|
if (data.include?("gap> "))
Language["GAP"]
# Heads up - we don't usually write heuristics like this (with no regex match)
else
Language["Scilab"]
end
end
disambiguate ".tsx" do |data|
if /^\s*(import.+(from\s+|require\()['"]react|\/\/\/\s*<reference\s)/.match(data)
Language["TypeScript"]
elsif /^\s*<\?xml\s+version/i.match(data)
Language["XML"]
end
end
disambiguate ".w" do |data|
if (data.include?("&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _DEFINITIONS"))
Language["OpenEdge ABL"]
elsif /^@(<|\w+\.)/.match(data)
Language["CWeb"]
end
end
disambiguate ".x" do |data|
if /\b(program|version)\s+\w+\s*{|\bunion\s+\w+\s+switch\s*\(/.match(data)
Language["RPC"]
elsif /^%(end|ctor|hook|group)\b/.match(data)
Language["Logos"]
end
end
end
end
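A minimal sketch of calling the heuristics above directly; the file name is hypothetical and the candidates would normally come from an earlier strategy:

blob = Linguist::FileBlob.new("queries.pl")   # hypothetical ambiguous file
Linguist::Heuristics.call(blob, [Linguist::Language["Perl"], Linguist::Language["Prolog"]])
# => [#<Language name="Prolog">] when the contents match the ".pl" rule above (/^[^#]*:-/)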

View File

@@ -1,7 +1,8 @@
require 'escape_utils'
require 'pygments'
require 'yaml'
begin
require 'yajl'
require 'json'
rescue LoadError
end
@@ -10,10 +11,6 @@ require 'linguist/heuristics'
require 'linguist/samples'
require 'linguist/file_blob'
require 'linguist/blob_helper'
require 'linguist/strategy/filename'
require 'linguist/strategy/extension'
require 'linguist/strategy/modeline'
require 'linguist/shebang'
module Linguist
# Language names that are recognizable by GitHub. Defined languages
@@ -21,11 +18,10 @@ module Linguist
#
# Languages are defined in `lib/linguist/languages.yml`.
class Language
@languages = []
@index = {}
@name_index = {}
@alias_index = {}
@language_id_index = {}
@languages = []
@index = {}
@name_index = {}
@alias_index = {}
@extension_index = Hash.new { |h,k| h[k] = [] }
@interpreter_index = Hash.new { |h,k| h[k] = [] }
@@ -34,6 +30,13 @@ module Linguist
# Valid Languages types
TYPES = [:data, :markup, :programming, :prose]
# Names of non-programming languages that we will still detect
#
# Returns an array
def self.detectable_markup
["CSS", "Less", "Sass", "SCSS", "Stylus", "TeX"]
end
# Detect languages by a specific type
#
# type - A symbol that exists within TYPES
@@ -59,7 +62,7 @@ module Linguist
end
# Language name index
@index[language.name.downcase] = @name_index[language.name.downcase] = language
@index[language.name] = @name_index[language.name] = language
language.aliases.each do |name|
# All Language aliases should be unique. Raise if there is a duplicate.
@@ -67,7 +70,7 @@ module Linguist
raise ArgumentError, "Duplicate alias: #{name}"
end
@index[name.downcase] = @alias_index[name.downcase] = language
@index[name] = @alias_index[name] = language
end
language.extensions.each do |extension|
@@ -75,7 +78,7 @@ module Linguist
raise ArgumentError, "Extension is missing a '.': #{extension.inspect}"
end
@extension_index[extension.downcase] << language
@extension_index[extension] << language
end
language.interpreters.each do |interpreter|
@@ -86,11 +89,63 @@ module Linguist
@filename_index[filename] << language
end
@language_id_index[language.language_id] = language
language
end
# Public: Detects the Language of the blob.
#
# blob - an object that includes the Linguist `BlobHelper` interface;
# see Linguist::LazyBlob and Linguist::FileBlob for examples
#
# Returns Language or nil.
def self.detect(blob)
name = blob.name.to_s
# Check if the blob is possibly binary and bail early; this is a cheap
# test that uses the extension name to guess a binary mime type.
#
# We'll perform a more comprehensive test later which actually involves
# looking for binary characters in the blob
return nil if blob.likely_binary? || blob.binary?
# A bit of an elegant hack. If the file is executable but extensionless,
# append a "magic" extension so it can be classified with other
# languages that have shebang scripts.
extension = FileBlob.new(name).extension
if extension.empty? && blob.mode && (blob.mode.to_i(8) & 05) == 05
name += ".script!"
end
# First try to find languages that match based on filename.
possible_languages = find_by_filename(name)
# If there is more than one possible language with that extension (or no
# extension at all, in the case of extensionless scripts), we need to continue
# our detection work
if possible_languages.length > 1
data = blob.data
possible_language_names = possible_languages.map(&:name)
# Don't bother with binary contents or an empty file
if data.nil? || data == ""
nil
# Check if there's a shebang line and use that as authoritative
elsif (result = find_by_shebang(data)) && !result.empty?
result.first
# No shebang. Still more work to do. Try to find it with our heuristics.
elsif (determined = Heuristics.find_by_heuristics(data, possible_language_names)) && !determined.empty?
determined.first
# Lastly, fall back to the probabilistic classifier.
elsif classified = Classifier.classify(Samples::DATA, data, possible_language_names).first
# Return the actual Language object based on the string language name (i.e., first element of `#classify`)
Language[classified[0]]
end
else
# Simplest and most common case, we can just return the one match based on extension
possible_languages.first
end
end
# Public: Get all Languages
#
# Returns an Array of Languages
@@ -109,8 +164,7 @@ module Linguist
#
# Returns the Language or nil if none was found.
def self.find_by_name(name)
return nil if !name.is_a?(String) || name.to_s.empty?
name && (@name_index[name.downcase] || @name_index[name.split(',', 2).first.downcase])
@name_index[name]
end
# Public: Look up Language by one of its aliases.
@@ -122,85 +176,44 @@ module Linguist
# Language.find_by_alias('cpp')
# # => #<Language name="C++">
#
# Returns the Language or nil if none was found.
# Returns the Lexer or nil if none was found.
def self.find_by_alias(name)
return nil if !name.is_a?(String) || name.to_s.empty?
name && (@alias_index[name.downcase] || @alias_index[name.split(',', 2).first.downcase])
@alias_index[name]
end
# Public: Look up Languages by filename.
#
# The behaviour of this method recently changed.
# See the second example below.
#
# filename - The path String.
#
# Examples
#
# Language.find_by_filename('Cakefile')
# # => [#<Language name="CoffeeScript">]
# Language.find_by_filename('foo.rb')
# # => []
# # => [#<Language name="Ruby">]
#
# Returns all matching Languages or [] if none were found.
def self.find_by_filename(filename)
basename = File.basename(filename)
@filename_index[basename]
extname = FileBlob.new(filename).extension
langs = @filename_index[basename] +
@extension_index[extname]
langs.compact.uniq
end
# Public: Look up Languages by file extension.
# Public: Look up Languages by shebang line.
#
# The behaviour of this method recently changed.
# See the second example below.
#
# filename - The path String.
# data - Array of tokens or String data to analyze.
#
# Examples
#
# Language.find_by_extension('dummy.rb')
# # => [#<Language name="Ruby">]
# Language.find_by_extension('rb')
# # => []
#
# Returns all matching Languages or [] if none were found.
def self.find_by_extension(filename)
# find the first extension with language definitions
extname = FileBlob.new(filename.downcase).extensions.detect do |e|
!@extension_index[e].empty?
end
@extension_index[extname]
end
# Public: Look up Languages by interpreter.
#
# interpreter - String of interpreter name
#
# Examples
#
# Language.find_by_interpreter("bash")
# Language.find_by_shebang("#!/bin/bash\ndate;")
# # => [#<Language name="Bash">]
#
# Returns the matching Language
def self.find_by_interpreter(interpreter)
@interpreter_index[interpreter]
def self.find_by_shebang(data)
@interpreter_index[Linguist.interpreter_from_shebang(data)]
end
# Public: Look up Languages by its language_id.
#
# language_id - Integer of language_id
#
# Examples
#
# Language.find_by_id(100)
# # => [#<Language name="Elixir">]
#
# Returns the matching Language
def self.find_by_id(language_id)
@language_id_index[language_id.to_i]
end
# Public: Look up Language by its name.
# Public: Look up Language by its name or lexer.
#
# name - The String name of the Language
#
@@ -214,12 +227,7 @@ module Linguist
#
# Returns the Language or nil if none was found.
def self.[](name)
return nil if !name.is_a?(String) || name.to_s.empty?
lang = @index[name.downcase]
return lang if lang
@index[name.split(',', 2).first.downcase]
@index[name]
end
# Public: A List of popular languages
@@ -229,7 +237,7 @@ module Linguist
#
# This list is configured in "popular.yml".
#
# Returns an Array of Languages.
# Returns an Array of Lexers.
def self.popular
@popular ||= all.select(&:popular?).sort_by { |lang| lang.name.downcase }
end
@@ -241,7 +249,7 @@ module Linguist
#
# This list is created from all the languages not listed in "popular.yml".
#
# Returns an Array of Languages.
# Returns an Array of Lexers.
def self.unpopular
@unpopular ||= all.select(&:unpopular?).sort_by { |lang| lang.name.downcase }
end
@@ -253,6 +261,13 @@ module Linguist
@colors ||= all.select(&:color).sort_by { |lang| lang.name.downcase }
end
# Public: A List of languages compatible with Ace.
#
# Returns an Array of Languages.
def self.ace_modes
@ace_modes ||= all.select(&:ace_mode).sort_by { |lang| lang.name.downcase }
end
# Internal: Initialize a new Language
#
# attributes - A hash of attributes
@@ -269,26 +284,17 @@ module Linguist
@color = attributes[:color]
# Set aliases
@aliases = [default_alias] + (attributes[:aliases] || [])
@aliases = [default_alias_name] + (attributes[:aliases] || [])
# Load the TextMate scope name or try to guess one
@tm_scope = attributes[:tm_scope] || begin
context = case @type
when :data, :markup, :prose
'text'
when :programming, nil
'source'
end
"#{context}.#{@name.downcase}"
end
# Lookup Lexer object
@lexer = Pygments::Lexer.find_by_name(attributes[:lexer] || name) ||
raise(ArgumentError, "#{@name} is missing lexer")
@ace_mode = attributes[:ace_mode]
@codemirror_mode = attributes[:codemirror_mode]
@codemirror_mime_type = attributes[:codemirror_mime_type]
@wrap = attributes[:wrap] || false
# Set the language_id
@language_id = attributes[:language_id]
# Set legacy search term
@search_term = attributes[:search_term] || default_alias_name
# Set extensions or default to [].
@extensions = attributes[:extensions] || []
@@ -341,21 +347,21 @@ module Linguist
# Returns an Array of String names
attr_reader :aliases
# Public: Get language_id (used in GitHub search)
# Deprecated: Get code search term
#
# Examples
#
# # => "1"
# # => "2"
# # => "3"
# # => "ruby"
# # => "python"
# # => "perl"
#
# Returns the integer language_id
attr_reader :language_id
# Returns the name String
attr_reader :search_term
# Public: Get the name of a TextMate-compatible scope
# Public: Get Lexer
#
# Returns the scope
attr_reader :tm_scope
# Returns the Lexer
attr_reader :lexer
# Public: Get Ace mode
#
@@ -368,31 +374,6 @@ module Linguist
# Returns a String name or nil
attr_reader :ace_mode
# Public: Get CodeMirror mode
#
# Maps to a directory in the `mode/` source code.
# https://github.com/codemirror/CodeMirror/tree/master/mode
#
# Examples
#
# # => "nil"
# # => "javascript"
# # => "clike"
#
# Returns a String name or nil
attr_reader :codemirror_mode
# Public: Get CodeMirror MIME type mode
#
# Examples
#
# # => "nil"
# # => "text/x-javascript"
# # => "text/x-csrc"
#
# Returns a String name or nil
attr_reader :codemirror_mime_type
# Public: Should language lines be wrapped
#
# Returns true or false
@@ -425,6 +406,27 @@ module Linguist
# Returns the extensions Array
attr_reader :filenames
# Public: Return all possible extensions for language
def all_extensions
(extensions + [primary_extension]).uniq
end
# Deprecated: Get primary extension
#
# Defaults to the first extension but can be overridden
# in the languages.yml.
#
# The primary extension can not be nil. Tests should verify this.
#
# This method is only used by app/helpers/gists_helper.rb for creating
# the language dropdown. It really should be using `name` instead.
# Would like to drop primary extension.
#
# Returns the extension String.
def primary_extension
extensions.first
end
# Public: Get URL escaped name.
#
# Examples
@@ -438,13 +440,12 @@ module Linguist
EscapeUtils.escape_url(name).gsub('+', '%20')
end
# Public: Get default alias name
# Internal: Get default alias name
#
# Returns the alias name String
def default_alias
def default_alias_name
name.downcase.gsub(/\s/, '-')
end
alias_method :default_alias_name, :default_alias
# Public: Get Language group
#
@@ -477,6 +478,16 @@ module Linguist
@searchable
end
# Public: Highlight syntax of text
#
# text - String of code to be highlighted
# options - A Hash of options (defaults to {})
#
# Returns html String
def colorize(text, options = {})
lexer.highlight(text, options)
end
# Public: Return name as String representation
def to_s
name
@@ -499,16 +510,16 @@ module Linguist
end
end
extensions = Samples.cache['extnames']
interpreters = Samples.cache['interpreters']
filenames = Samples.cache['filenames']
extensions = Samples::DATA['extnames']
interpreters = Samples::DATA['interpreters']
filenames = Samples::DATA['filenames']
popular = YAML.load_file(File.expand_path("../popular.yml", __FILE__))
languages_yml = File.expand_path("../languages.yml", __FILE__)
languages_json = File.expand_path("../languages.json", __FILE__)
if File.exist?(languages_json) && defined?(Yajl)
languages = Yajl.load(File.read(languages_json))
if File.exist?(languages_json) && defined?(JSON)
languages = JSON.load(File.read(languages_json))
else
languages = YAML.load_file(languages_yml)
end
@@ -520,8 +531,8 @@ module Linguist
if extnames = extensions[name]
extnames.each do |extname|
if !options['extensions'].index { |x| x.downcase.end_with? extname.downcase }
warn "#{name} has a sample with extension (#{extname.downcase}) that isn't explicitly defined in languages.yml"
if !options['extensions'].include?(extname)
warn "#{name} has a sample with extension (#{extname}) that isn't explicitly defined in languages.yml" unless extname == '.script!'
options['extensions'] << extname
end
end
@@ -552,15 +563,13 @@ module Linguist
:color => options['color'],
:type => options['type'],
:aliases => options['aliases'],
:tm_scope => options['tm_scope'],
:lexer => options['lexer'],
:ace_mode => options['ace_mode'],
:codemirror_mode => options['codemirror_mode'],
:codemirror_mime_type => options['codemirror_mime_type'],
:wrap => options['wrap'],
:group_name => options['group'],
:searchable => options.fetch('searchable', true),
:language_id => options['language_id'],
:extensions => Array(options['extensions']),
:searchable => options.key?('searchable') ? options['searchable'] : true,
:search_term => options['search_term'],
:extensions => [options['extensions'].first] + options['extensions'][1..-1].sort,
:interpreters => options['interpreters'].sort,
:filenames => options['filenames'],
:popular => popular.include?(name)

5492
lib/linguist/languages.yml Executable file → Normal file

File diff suppressed because it is too large

View File

@@ -1,82 +1,22 @@
require 'linguist/blob_helper'
require 'linguist/language'
require 'rugged'
module Linguist
class LazyBlob
GIT_ATTR = ['linguist-documentation',
'linguist-language',
'linguist-vendored',
'linguist-generated',
'linguist-detectable']
GIT_ATTR_OPTS = { :priority => [:index], :skip_system => true }
GIT_ATTR_FLAGS = Rugged::Repository::Attributes.parse_opts(GIT_ATTR_OPTS)
include BlobHelper
MAX_SIZE = 128 * 1024
attr_reader :repository
attr_reader :oid
attr_reader :path
attr_reader :name
attr_reader :mode
alias :name :path
def initialize(repo, oid, path, mode = nil)
def initialize(repo, oid, name, mode = nil)
@repository = repo
@oid = oid
@path = path
@name = name
@mode = mode
@data = nil
end
def git_attributes
@git_attributes ||= repository.fetch_attributes(
name, GIT_ATTR, GIT_ATTR_FLAGS)
end
def documentation?
if attr = git_attributes['linguist-documentation']
boolean_attribute(attr)
else
super
end
end
def generated?
if attr = git_attributes['linguist-generated']
boolean_attribute(attr)
else
super
end
end
def vendored?
if attr = git_attributes['linguist-vendored']
return boolean_attribute(attr)
else
super
end
end
def language
return @language if defined?(@language)
@language = if lang = git_attributes['linguist-language']
Language.find_by_alias(lang)
else
super
end
end
def detectable?
if attr = git_attributes['linguist-detectable']
return boolean_attribute(attr)
else
nil
end
end
def data
@@ -89,22 +29,7 @@ module Linguist
@size
end
def symlink?
# We don't create LazyBlobs for symlinks.
false
end
def cleanup!
@data.clear if @data
end
protected
# Returns true if the attribute is present and not the string "false".
def boolean_attribute(attribute)
attribute != "false"
end
def load_blob!
@data, @size = Rugged::Blob.to_buffer(repository, oid, MAX_SIZE) if @data.nil?
end


@@ -3,27 +3,27 @@
# This file should only be edited by GitHub staff
- ActionScript
- Bash
- C
- C#
- C++
- CSS
- Clojure
- CoffeeScript
- Go
- Common Lisp
- Diff
- Emacs Lisp
- Erlang
- HTML
- Haskell
- Java
- JavaScript
- Lua
- Matlab
- Objective-C
- PHP
- Perl
- Python
- R
- Ruby
- SQL
- Scala
- Shell
- Swift
- TeX
- Vim script
- Scheme


@@ -30,9 +30,6 @@ module Linguist
@repository = repo
@commit_oid = commit_oid
@old_commit_oid = nil
@old_stats = nil
raise TypeError, 'commit_oid must be a commit SHA1' unless commit_oid.is_a?(String)
end
@@ -113,38 +110,18 @@ module Linguist
if @old_commit_oid == @commit_oid
@old_stats
else
compute_stats(@old_commit_oid, @old_stats)
compute_stats(@old_commit_oid, @commit_oid, @old_stats)
end
end
end
def read_index
attr_index = Rugged::Index.new
attr_index.read_tree(current_tree)
repository.index = attr_index
end
def current_tree
@tree ||= Rugged::Commit.lookup(repository, @commit_oid).tree
end
protected
MAX_TREE_SIZE = 100_000
def compute_stats(old_commit_oid, cache = nil)
return {} if current_tree.count_recursive(MAX_TREE_SIZE) >= MAX_TREE_SIZE
def compute_stats(old_commit_oid, commit_oid, cache = nil)
file_map = cache ? cache.dup : {}
old_tree = old_commit_oid && Rugged::Commit.lookup(repository, old_commit_oid).tree
read_index
diff = Rugged::Tree.diff(repository, old_tree, current_tree)
new_tree = Rugged::Commit.lookup(repository, commit_oid).tree
# Clear file map and fetch full diff if any .gitattributes files are changed
if cache && diff.each_delta.any? { |delta| File.basename(delta.new_file[:path]) == ".gitattributes" }
diff = Rugged::Tree.diff(repository, old_tree = nil, current_tree)
file_map = {}
else
file_map = cache ? cache.dup : {}
end
diff = Rugged::Tree.diff(repository, old_tree, new_tree)
diff.each_delta do |delta|
old = delta.old_file[:path]
@@ -154,18 +131,19 @@ module Linguist
next if delta.binary
if [:added, :modified].include? delta.status
# Skip submodules and symlinks
# Skip submodules
mode = delta.new_file[:mode]
mode_format = (mode & 0170000)
next if mode_format == 0120000 || mode_format == 040000 || mode_format == 0160000
next if (mode & 040000) != 0
blob = Linguist::LazyBlob.new(repository, delta.new_file[:oid], new, mode.to_s(8))
if blob.include_in_language_stats?
# Skip vendored or generated blobs
next if blob.vendored? || blob.generated? || blob.language.nil?
# Only include programming languages and acceptable markup languages
if blob.language.type == :programming || Language.detectable_markup.include?(blob.language.name)
file_map[new] = [blob.language.group.name, blob.size]
end
blob.cleanup!
end
end
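The mode check in the newer side of this hunk relies on git's file-mode format bits. A standalone restatement, with constant names of my own choosing and the octal values taken from the hunk:

GIT_MODE_MASK    = 0170000  # format bits of a tree entry's mode
GIT_MODE_SYMLINK = 0120000  # symlink
GIT_MODE_TREE    = 0040000  # tree
GIT_MODE_GITLINK = 0160000  # submodule / gitlink

def countable_mode?(mode)
  ![GIT_MODE_SYMLINK, GIT_MODE_TREE, GIT_MODE_GITLINK].include?(mode & GIT_MODE_MASK)
end

countable_mode?(0100644)  # => true   (regular file)
countable_mode?(0120000)  # => false  (symlink, skipped)
countable_mode?(0160000)  # => false  (submodule, skipped)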

73542 changed lines: lib/linguist/samples.json (Normal file); file diff suppressed because it is too large


@@ -1,12 +1,11 @@
begin
require 'yajl'
require 'json'
rescue LoadError
require 'yaml'
end
require 'linguist/md5'
require 'linguist/classifier'
require 'linguist/shebang'
module Linguist
# Model for accessing classifier training data.
@@ -18,11 +17,9 @@ module Linguist
PATH = File.expand_path('../samples.json', __FILE__)
# Hash of serialized samples object
def self.cache
@cache ||= begin
serializer = defined?(Yajl) ? Yajl : YAML
serializer.load(File.read(PATH, encoding: 'utf-8'))
end
if File.exist?(PATH)
serializer = defined?(JSON) ? JSON : YAML
DATA = serializer.load(File.read(PATH))
end
# Public: Iterate over each sample.
@@ -34,6 +31,10 @@ module Linguist
Dir.entries(ROOT).sort!.each do |category|
next if category == '.' || category == '..'
# Skip text and binary for now
# Possibly reconsider this later
next if category == 'Text' || category == 'Binary'
dirname = File.join(ROOT, category)
Dir.entries(dirname).each do |filename|
next if filename == '.' || filename == '..'
@@ -49,14 +50,15 @@ module Linguist
})
end
else
path = File.join(dirname, filename)
extname = File.extname(filename)
if File.extname(filename) == ""
raise "#{File.join(dirname, filename)} is missing an extension, maybe it belongs in filenames/ subdir"
end
yield({
:path => path,
:path => File.join(dirname, filename),
:language => category,
:interpreter => Shebang.interpreter(File.read(path)),
:extname => extname.empty? ? nil : extname
:interpreter => File.exist?(filename) ? Linguist.interpreter_from_shebang(File.read(filename)) : nil,
:extname => File.extname(filename)
})
end
end
@@ -108,4 +110,40 @@ module Linguist
db
end
end
# Used to retrieve the interpreter from the shebang line of a file's
# data.
def self.interpreter_from_shebang(data)
lines = data.lines.to_a
if lines.any? && (match = lines[0].match(/(.+)\n?/)) && (bang = match[0]) =~ /^#!/
bang.sub!(/^#! /, '#!')
tokens = bang.split(' ')
pieces = tokens.first.split('/')
if pieces.size > 1
script = pieces.last
else
script = pieces.first.sub('#!', '')
end
script = script == 'env' ? tokens[1] : script
# "python2.6" -> "python"
if script =~ /((?:\d+\.?)+)/
script.sub! $1, ''
end
# Check for multiline shebang hacks that call `exec`
if script == 'sh' &&
lines[0...5].any? { |l| l.match(/exec (\w+).+\$0.+\$@/) }
script = $1
end
script
else
nil
end
end
end
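A few hand-traced calls to the `interpreter_from_shebang` helper above, following its own comments; the expected values are worked out from the code, not captured from a running install:

Linguist.interpreter_from_shebang("#!/usr/bin/ruby\nputs 1\n")  # => "ruby"
Linguist.interpreter_from_shebang("#!/usr/bin/env node\n")      # => "node"
Linguist.interpreter_from_shebang("#! /usr/bin/python2.6\n")    # => "python"  (version digits stripped)
Linguist.interpreter_from_shebang("plain text, no shebang\n")   # => nil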


@@ -1,61 +0,0 @@
module Linguist
class Shebang
# Public: Use shebang to detect language of the blob.
#
# blob - An object that quacks like a blob.
#
# Examples
#
# Shebang.call(FileBlob.new("path/to/file"))
#
# Returns an Array with one Language if the blob has a shebang with a valid
# interpreter, or empty if there is no shebang.
def self.call(blob, _ = nil)
return [] if blob.symlink?
Language.find_by_interpreter interpreter(blob.data)
end
# Public: Get the interpreter from the shebang
#
# Returns a String or nil
def self.interpreter(data)
shebang = data.lines.first
# First line must start with #!
return unless shebang && shebang.start_with?("#!")
s = StringScanner.new(shebang)
# There was nothing after the #!
return unless path = s.scan(/^#!\s*\S+/)
# Keep going
script = path.split('/').last
# if /usr/bin/env type shebang then walk the string
if script == 'env'
s.scan(/\s+/)
s.scan(/.*=[^\s]+\s+/) # skip over variable arguments e.g. foo=bar
script = s.scan(/\S+/)
end
# Interpreter was /usr/bin/env with no arguments
return unless script
# "python2.6" -> "python2"
script.sub!(/(\.\d+)$/, '')
# #! perl -> perl
script.sub!(/^#!\s*/, '')
# Check for multiline shebang hacks that call `exec`
if script == 'sh' &&
data.lines.first(5).any? { |l| l.match(/exec (\w+).+\$0.+\$@/) }
script = $1
end
File.basename(script)
end
end
end
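For comparison with `interpreter_from_shebang` earlier in this compare, `Shebang.interpreter` keeps the major version (its own comment shows "python2.6" -> "python2") and skips env variable assignments. Hand-traced examples, again not captured output:

Linguist::Shebang.interpreter("#!/usr/bin/python2.6\n")         # => "python2"
Linguist::Shebang.interpreter("#!/usr/bin/env FOO=bar ruby\n")  # => "ruby"  (FOO=bar is skipped)
Linguist::Shebang.interpreter("#! perl\n")                      # => "perl"
Linguist::Shebang.interpreter("plain text\n")                   # => nil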


@@ -1,10 +0,0 @@
module Linguist
module Strategy
# Detects language based on extension
class Extension
def self.call(blob, _)
Language.find_by_extension(blob.name.to_s)
end
end
end
end


@@ -1,11 +0,0 @@
module Linguist
module Strategy
# Detects language based on filename
class Filename
def self.call(blob, _)
name = blob.name.to_s
Language.find_by_filename(name)
end
end
end
end


@@ -1,128 +0,0 @@
module Linguist
module Strategy
class Modeline
EMACS_MODELINE = /
-\*-
(?:
# Short form: `-*- ruby -*-`
\s* (?= [^:;\s]+ \s* -\*-)
|
# Longer form: `-*- foo:bar; mode: ruby; -*-`
(?:
.*? # Preceding variables: `-*- foo:bar bar:baz;`
[;\s] # Which are delimited by spaces or semicolons
|
(?<=-\*-) # Not preceded by anything: `-*-mode:ruby-*-`
)
mode # Major mode indicator
\s*:\s* # Allow whitespace around colon: `mode : ruby`
)
([^:;\s]+) # Name of mode
# Ensure the mode is terminated correctly
(?=
# Followed by semicolon or whitespace
[\s;]
|
# Touching the ending sequence: `ruby-*-`
(?<![-*]) # Don't allow stuff like `ruby--*-` to match; it'll invalidate the mode
-\*- # Emacs has no problems reading `ruby --*-`, however.
)
.*? # Anything between a cleanly-terminated mode and the ending -*-
-\*-
/xi
VIM_MODELINE = /
# Start modeline. Could be `vim:`, `vi:` or `ex:`
(?:
(?:\s|^)
vi
(?:m[<=>]?\d+|m)? # Version-specific modeline
|
[\t\x20] # `ex:` requires whitespace, because "ex:" might be short for "example:"
ex
)
# If the option-list begins with `set ` or `se `, it indicates an alternative
# modeline syntax partly-compatible with older versions of Vi. Here, the colon
# serves as a terminator for an option sequence, delimited by whitespace.
(?=
# So we have to ensure the modeline ends with a colon
: (?=\s* set? \s [^\n:]+ :) |
# Otherwise, it isn't valid syntax and should be ignored
: (?!\s* set? \s)
)
# Possible (unrelated) `option=value` pairs to skip past
(?:
# Option separator. Vim uses whitespace or colons to separate options (except if
# the alternate "vim: set " form is used, where only whitespace is used)
(?:
\s
|
\s* : \s* # Note that whitespace around colons is accepted too:
) # vim: noai : ft=ruby:noexpandtab
# Option's name. All recognised Vim options have an alphanumeric form.
\w*
# Possible value. Not every option takes an argument.
(?:
# Whitespace between name and value is allowed: `vim: ft =ruby`
\s*=
# Option's value. Might be blank; `vim: ft= ` says "use no filetype".
(?:
[^\\\s] # Beware of escaped characters: titlestring=\ ft=ruby
| # will be read by Vim as { titlestring: " ft=ruby" }.
\\.
)*
)?
)*
# The actual filetype declaration
[\s:] (?:filetype|ft|syntax) \s*=
# Language's name
(\w+)
# Ensure it's followed by a legal separator
(?=\s|:|$)
/xi
MODELINES = [EMACS_MODELINE, VIM_MODELINE]
# Scope of the search for modelines
# Number of lines to check at the beginning and at the end of the file
SEARCH_SCOPE = 5
# Public: Detects language based on Vim and Emacs modelines
#
# blob - An object that quacks like a blob.
#
# Examples
#
# Modeline.call(FileBlob.new("path/to/file"))
#
# Returns an Array with one Language if the blob has a Vim or Emacs modeline
# that matches a Language name or alias. Returns an empty array if no match.
def self.call(blob, _ = nil)
return [] if blob.symlink?
header = blob.first_lines(SEARCH_SCOPE).join("\n")
footer = blob.last_lines(SEARCH_SCOPE).join("\n")
Array(Language.find_by_alias(modeline(header + footer)))
end
# Public: Get the modeline from the first n-lines of the file
#
# Returns a String or nil
def self.modeline(data)
match = MODELINES.map { |regex| data.match(regex) }.reject(&:nil?).first
match[1] if match
end
end
end
end
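Worked examples for the two modeline flavours matched above; the return values are traced by hand from the regexes, and `find_by_alias` would then map the captured name to a Language:

Linguist::Strategy::Modeline.modeline("# -*- mode: ruby -*-\n")    # => "ruby"
Linguist::Strategy::Modeline.modeline("/* vim: set ft=cpp: */\n")  # => "cpp"
Linguist::Strategy::Modeline.modeline("no modeline here\n")        # => nil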


@@ -1,5 +1,4 @@
require 'strscan'
require 'linguist/linguist'
module Linguist
# Generic programming language tokenizer.
@@ -16,5 +15,184 @@ module Linguist
def self.tokenize(data)
new.extract_tokens(data)
end
# Read up to 100KB
BYTE_LIMIT = 100_000
# Start state on token, ignore anything till the next newline
SINGLE_LINE_COMMENTS = [
'//', # C
'#', # Ruby
'%', # Tex
]
# Start state on opening token, ignore anything until the closing
# token is reached.
MULTI_LINE_COMMENTS = [
['/*', '*/'], # C
['<!--', '-->'], # XML
['{-', '-}'], # Haskell
['(*', '*)'], # Coq
['"""', '"""'] # Python
]
START_SINGLE_LINE_COMMENT = Regexp.compile(SINGLE_LINE_COMMENTS.map { |c|
"\s*#{Regexp.escape(c)} "
}.join("|"))
START_MULTI_LINE_COMMENT = Regexp.compile(MULTI_LINE_COMMENTS.map { |c|
Regexp.escape(c[0])
}.join("|"))
# Internal: Extract generic tokens from data.
#
# data - String to scan.
#
# Examples
#
# extract_tokens("printf('Hello')")
# # => ['printf', '(', ')']
#
# Returns Array of token Strings.
def extract_tokens(data)
s = StringScanner.new(data)
tokens = []
until s.eos?
break if s.pos >= BYTE_LIMIT
if token = s.scan(/^#!.+$/)
if name = extract_shebang(token)
tokens << "SHEBANG#!#{name}"
end
# Single line comment
elsif s.beginning_of_line? && token = s.scan(START_SINGLE_LINE_COMMENT)
# tokens << token.strip
s.skip_until(/\n|\Z/)
# Multiline comments
elsif token = s.scan(START_MULTI_LINE_COMMENT)
# tokens << token
close_token = MULTI_LINE_COMMENTS.assoc(token)[1]
s.skip_until(Regexp.compile(Regexp.escape(close_token)))
# tokens << close_token
# Skip single or double quoted strings
elsif s.scan(/"/)
if s.peek(1) == "\""
s.getch
else
s.skip_until(/[^\\]"/)
end
elsif s.scan(/'/)
if s.peek(1) == "'"
s.getch
else
s.skip_until(/[^\\]'/)
end
# Skip number literals
elsif s.scan(/(0x)?\d(\d|\.)*/)
# SGML style brackets
elsif token = s.scan(/<[^\s<>][^<>]*>/)
extract_sgml_tokens(token).each { |t| tokens << t }
# Common programming punctuation
elsif token = s.scan(/;|\{|\}|\(|\)|\[|\]/)
tokens << token
# Regular token
elsif token = s.scan(/[\w\.@#\/\*]+/)
tokens << token
# Common operators
elsif token = s.scan(/<<?|\+|\-|\*|\/|%|&&?|\|\|?/)
tokens << token
else
s.getch
end
end
tokens
end
# Internal: Extract normalized shebang command token.
#
# Examples
#
# extract_shebang("#!/usr/bin/ruby")
# # => "ruby"
#
# extract_shebang("#!/usr/bin/env node")
# # => "node"
#
# Returns String token or nil if it couldn't be parsed.
def extract_shebang(data)
s = StringScanner.new(data)
if path = s.scan(/^#!\s*\S+/)
script = path.split('/').last
if script == 'env'
s.scan(/\s+/)
script = s.scan(/\S+/)
end
script = script[/[^\d]+/, 0] if script
return script
end
nil
end
# Internal: Extract tokens from inside SGML tag.
#
# data - SGML tag String.
#
# Examples
#
# extract_sgml_tokens("<a href='' class=foo>")
# # => ["<a>", "href="]
#
# Returns Array of token Strings.
def extract_sgml_tokens(data)
s = StringScanner.new(data)
tokens = []
until s.eos?
# Emit start token
if token = s.scan(/<\/?[^\s>]+/)
tokens << "#{token}>"
# Emit attributes with trailing =
elsif token = s.scan(/\w+=/)
tokens << token
# Then skip over attribute value
if s.scan(/"/)
s.skip_until(/[^\\]"/)
elsif s.scan(/'/)
s.skip_until(/[^\\]'/)
else
s.skip_until(/\w+/)
end
# Emit lone attributes
elsif token = s.scan(/\w+/)
tokens << token
# Stop at the end of the tag
elsif s.scan(/>/)
s.terminate
else
s.getch
end
end
tokens
end
end
end
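Putting the tokenizer's pieces together, a few hand-traced calls; the first mirrors the example in the doc comment above, the others are worked out from the scanner branches and are not captured output:

Linguist::Tokenizer.tokenize("printf('Hello')")        # => ["printf", "(", ")"]
Linguist::Tokenizer.tokenize("#!/usr/bin/env python")  # => ["SHEBANG#!python"]
Linguist::Tokenizer.tokenize("<a href='#' class=foo>") # => ["<a>", "href=", "class="]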


@@ -15,26 +15,15 @@
# Dependencies
- ^[Dd]ependencies/
# Distributions
- (^|/)dist/
# C deps
# https://github.com/joyent/node
- ^deps/
- ^tools/
- (^|/)configure$
- (^|/)configure.ac$
- (^|/)config.guess$
- (^|/)config.sub$
# stuff autogenerated by autoconf - still C deps
- (^|/)aclocal.m4
- (^|/)libtool.m4
- (^|/)ltoptions.m4
- (^|/)ltsugar.m4
- (^|/)ltversion.m4
- (^|/)lt~obsolete.m4
# Linters
- cpplint.py
# Node dependencies
- node_modules/
@@ -43,55 +32,34 @@
# Erlang bundles
- ^rebar$
- erlang.mk
# Go dependencies
- Godeps/_workspace/
# GNU indent profiles
- .indent.pro
# Minified JavaScript and CSS
- (\.|-)min\.(js|css)$
# Stylesheets imported from packages
- ([^\s]*)import\.(css|less|scss|styl)$
# Bootstrap css and js
- (^|/)bootstrap([^.]*)\.(js|css|less|scss|styl)$
- (^|/)custom\.bootstrap([^\s]*)(js|css|less|scss|styl)$
# Bootstrap minified css and js
- (^|/)bootstrap([^.]*)(\.min)?\.(js|css)$
# Font Awesome
- (^|/)font-awesome\.(css|less|scss|styl)$
- (^|/)font-awesome/.*\.(css|less|scss|styl)$
- font-awesome.min.css
- font-awesome.css
# Foundation css
- (^|/)foundation\.(css|less|scss|styl)$
- foundation.min.css
- foundation.css
# Normalize.css
- (^|/)normalize\.(css|less|scss|styl)$
- normalize.css
# Skeleton.css
- (^|/)skeleton\.(css|less|scss|styl)$
# Bourbon css
- (^|/)[Bb]ourbon/.*\.(css|less|scss|styl)$
# Bourbon SCSS
- (^|/)[Bb]ourbon/.*\.css$
- (^|/)[Bb]ourbon/.*\.scss$
# Animate.css
- (^|/)animate\.(css|less|scss|styl)$
# Materialize.css
- (^|/)materialize\.(css|less|scss|styl|js)$
# Select2
- (^|/)select2/.*\.(css|scss|js)$
- animate.css
- animate.min.css
# Vendored dependencies
- third[-_]?party/
- 3rd[-_]?party/
- vendors?/
- extern(al)?/
- (^|/)[Vv]+endor/
# Debian packaging
- ^debian/
@@ -99,58 +67,15 @@
# Haxelib projects often contain a neko bytecode file named run.n
- run.n$
# Bootstrap Datepicker
- bootstrap-datepicker/
## Commonly Bundled JavaScript frameworks ##
# jQuery
- (^|/)jquery([^.]*)\.js$
- (^|/)jquery\-\d\.\d+(\.\d+)?\.js$
- (^|/)jquery([^.]*)(\.min)?\.js$
- (^|/)jquery\-\d\.\d+(\.\d+)?(\.min)?\.js$
# jQuery UI
- (^|/)jquery\-ui(\-\d\.\d+(\.\d+)?)?(\.\w+)?\.(js|css)$
- (^|/)jquery\.(ui|effects)\.([^.]*)\.(js|css)$
# jQuery Gantt
- jquery.fn.gantt.js
# jQuery fancyBox
- jquery.fancybox.(js|css)
# Fuel UX
- fuelux.js
# jQuery File Upload
- (^|/)jquery\.fileupload(-\w+)?\.js$
# jQuery dataTables
- jquery.dataTables.js
# bootboxjs
- bootbox.js
# pdf-worker
- pdf.worker.js
# Slick
- (^|/)slick\.\w+.js$
# Leaflet plugins
- (^|/)Leaflet\.Coordinates-\d+\.\d+\.\d+\.src\.js$
- leaflet.draw-src.js
- leaflet.draw.css
- Control.FullScreen.css
- Control.FullScreen.js
- leaflet.spin.js
- wicket-leaflet.js
# Sublime Text workspace files
- .sublime-project
- .sublime-workspace
# VS Code workspace files
- .vscode
- (^|/)jquery\-ui(\-\d\.\d+(\.\d+)?)?(\.\w+)?(\.min)?\.(js|css)$
- (^|/)jquery\.(ui|effects)\.([^.]*)(\.min)?\.(js|css)$
# Prototype
- (^|/)prototype(.*)\.js$
@@ -179,53 +104,35 @@
- (^|/)tiny_mce([^.]*)\.js$
- (^|/)tiny_mce/(langs|plugins|themes|utils)
# Ace Editor
- (^|/)ace-builds/
# Fontello CSS files
- (^|/)fontello(.*?)\.css$
# MathJax
- (^|/)MathJax/
# Chart.js
- (^|/)Chart\.js$
# CodeMirror
- (^|/)[Cc]ode[Mm]irror/(\d+\.\d+/)?(lib|mode|theme|addon|keymap|demo)
# SyntaxHighlighter - http://alexgorbatchev.com/
- (^|/)shBrush([^.]*)\.js$
- (^|/)shCore\.js$
- (^|/)shLegacy\.js$
# AngularJS
- (^|/)angular([^.]*)\.js$
- (^|/)angular([^.]*)(\.min)?\.js$
# D3.js
- (^|\/)d3(\.v\d+)?([^.]*)\.js$
- (^|\/)d3(\.v\d+)?([^.]*)(\.min)?\.js$
# React
- (^|/)react(-[^.]*)?\.js$
# flow-typed
- (^|/)flow-typed/.*\.js$
- (^|/)react(-[^.]*)?(\.min)?\.js$
# Modernizr
- (^|/)modernizr\-\d\.\d+(\.\d+)?\.js$
- (^|/)modernizr\-\d\.\d+(\.\d+)?(\.min)?\.js$
- (^|/)modernizr\.custom\.\d+\.js$
# Knockout
- (^|/)knockout-(\d+\.){3}(debug\.)?js$
- knockout-min.js
## Python ##
# Sphinx
- (^|/)docs?/_?(build|themes?|templates?|static)/
# django
- (^|/)admin_media/
- (^|/)env/
# Fabric
- ^fabfile\.py$
@@ -238,37 +145,12 @@
## Obj-C ##
# Xcode
- \.xctemplate/
- \.imageset/
# Carthage
- (^|/)Carthage/
# Cocoapods
- ^Pods/
# Sparkle
- (^|/)Sparkle/
# Crashlytics
- Crashlytics.framework/
# Fabric
- Fabric.framework/
# BuddyBuild
- BuddyBuildSDK.framework/
# Realm
- Realm.framework
# RealmSwift
- RealmSwift.framework
# git config files
- gitattributes$
- gitignore$
- gitmodules$
## Groovy ##
# Gradle
@@ -283,8 +165,8 @@
- \.intellisense\.js$
# jQuery validation plugin (MS bundles this with asp.net mvc)
- (^|/)jquery([^.]*)\.validate(\.unobtrusive)?\.js$
- (^|/)jquery([^.]*)\.unobtrusive\-ajax\.js$
- (^|/)jquery([^.]*)\.validate(\.unobtrusive)?(\.min)?\.js$
- (^|/)jquery([^.]*)\.unobtrusive\-ajax(\.min)?\.js$
# Microsoft Ajax
- (^|/)[Mm]icrosoft([Mm]vc)?([Aa]jax|[Vv]alidation)(\.debug)?\.js$
@@ -311,15 +193,27 @@
- (^|/)extjs/welcome/
# Html5shiv
- (^|/)html5shiv\.js$
- (^|/)html5shiv(\.min)?\.js$
# Samples folders
- ^[Ss]amples/
# LICENSE, README, git config files
- ^COPYING$
- LICENSE$
- License$
- gitattributes$
- gitignore$
- gitmodules$
- ^README$
- ^readme$
# Test fixtures
- ^[Tt]ests?/fixtures/
- ^[Ss]pecs?/fixtures/
- ^[Tt]est/fixtures/
# PhoneGap/Cordova
- (^|/)cordova([^.]*)\.js$
- (^|/)cordova\-\d\.\d(\.\d)?\.js$
- (^|/)cordova([^.]*)(\.min)?\.js$
- (^|/)cordova\-\d\.\d(\.\d)?(\.min)?\.js$
# Foundation js
- foundation(\..*)?\.js$
@@ -327,30 +221,17 @@
# Vagrant
- ^Vagrantfile$
# .DS_Stores
# .DS_Store's
- .[Dd][Ss]_[Ss]tore$
# Mercury --use-subdirs
- Mercury/
# R packages
- ^vignettes/
- ^inst/extdata/
# Octicons
- octicons.css
- octicons.min.css
- sprockets-octicons.scss
# Typesafe Activator
- (^|/)activator$
- (^|/)activator\.bat$
# ProGuard
- proguard.pro
- proguard-rules.pro
# PuPHPet
- ^puphpet/
# Android Google APIs
- (^|/)\.google_apis/
# Jenkins Pipeline
- ^Jenkinsfile$
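The entries above are regular expressions matched against repository-relative paths. A minimal, self-contained sketch of how such a list can be applied; this illustrates the idea only and is not the library's actual loading code:

# Illustrative subset of the patterns above.
patterns = [
  '(^|/)node_modules/',
  '(\.|-)min\.(js|css)$',
  '(^|/)[Vv]+endor/',
]
vendored = Regexp.new(patterns.join('|'))

['node_modules/lodash/index.js',
 'assets/app.min.js',
 'vendor/cache/rake.gem',
 'lib/linguist.rb'].each do |path|
  puts "#{path}: #{vendored.match?(path) ? 'vendored' : 'counted'}"
end
# node_modules/lodash/index.js: vendored
# assets/app.min.js: vendored
# vendor/cache/rake.gem: vendored
# lib/linguist.rb: counted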


@@ -1,3 +1,3 @@
module Linguist
VERSION = "6.0.1"
VERSION = "3.1.5"
end


@@ -1,265 +0,0 @@
&НаСервереБезКонтекста
Функция ПолучитьКонтактноеЛицоПоЭлектроннойПочте(ЭлектроннаяПочта)
Запрос = Новый Запрос;
Запрос.Текст = "ВЫБРАТЬ КонтактноеЛицо ИЗ Справочник.Контрагенты ГДЕ ЭлектроннаяПочта = &ЭлектроннаяПочта";
Запрос.Параметры.Вставить("ЭлектроннаяПочта", СокрЛП(ЭлектроннаяПочта));
Выборка = Запрос.Выполнить().Выбрать();
КонтактноеЛицо = "";
Если Выборка.Следующий() Тогда
КонтактноеЛицо = Выборка.КонтактноеЛицо;
КонецЕсли;
Возврат КонтактноеЛицо;
КонецФункции
&НаСервереБезКонтекста
Функция ПолучитьКонтактноеЛицоПоПолучателю(Получатель)
Запрос = Новый Запрос;
Запрос.Текст = "ВЫБРАТЬ КонтактноеЛицо ИЗ Справочник.Контрагенты ГДЕ Ссылка = &Получатель";
Запрос.Параметры.Вставить("Получатель", Получатель);
Выборка = Запрос.Выполнить().Выбрать();
КонтактноеЛицо = "";
Если Выборка.Следующий() Тогда
КонтактноеЛицо = Выборка.КонтактноеЛицо;
КонецЕсли;
Возврат КонтактноеЛицо;
КонецФункции
&НаСервереБезКонтекста
Процедура ДобавитьПолучателей(Получатель, Получатели)
Запрос = Новый Запрос;
Запрос.Текст = "ВЫБРАТЬ ЭлектроннаяПочта ИЗ Справочник.Контрагенты ГДЕ Ссылка ";
Если ТипЗнч(Получатели) = Тип("Массив") Тогда
Запрос.Текст = Запрос.Текст + "В (&Получатели)";
Иначе
Запрос.Текст = Запрос.Текст + "= &Получатели";
КонецЕсли;
Запрос.Параметры.Вставить("Получатели", Получатели);
Выборка = Запрос.Выполнить().Выбрать();
Пока Выборка.Следующий() Цикл
Если Получатель <> "" Тогда
Получатель = Получатель + "; ";
КонецЕсли;
Получатель = Получатель + Выборка.ЭлектроннаяПочта;
КонецЦикла;
КонецПроцедуры
&НаСервере
Процедура ПриСозданииНаСервере(Отказ, СтандартнаяОбработка)
Если Параметры.Ключ.Пустая() Тогда
Заголовок = "Исходящее письмо (Создание)";
Объект.Дата = ТекущаяДата();
ПоШаблону = Параметры.Свойство("ПоШаблону");
ВходящееПисьмо = Параметры.ВходящееПисьмо;
Если ПоШаблону = Истина Тогда
Элементы.ЗаполнитьПоШаблону.Видимость = Истина;
РаботаСПочтой.ЗаполнитьПисьмоПоШаблону(Объект, Содержимое);
ИначеЕсли Не ВходящееПисьмо.Пустая() Тогда
РаботаСПочтой.ЗаполнитьОтветНаПисьмо(ВходящееПисьмо, Объект, Содержимое);
КонецЕсли;
Адресаты = Параметры.Адресаты;
Если Адресаты <> Неопределено Тогда
Запрос = Новый Запрос;
Запрос.Текст = "ВЫБРАТЬ
| Контрагенты.ЭлектроннаяПочта
|ИЗ
| Справочник.Контрагенты КАК Контрагенты
|ГДЕ
| Контрагенты.Ссылка В(&Адресаты)
| И Контрагенты.ЭлектроннаяПочта <> """"";
Запрос.УстановитьПараметр("Адресаты", Адресаты);
Получатель = "";
Выборка = Запрос.Выполнить().Выбрать();
Пока Выборка.Следующий() Цикл
Если Получатель <> "" Тогда
Получатель = Получатель + "; ";
КонецЕсли;
Получатель = Получатель + Выборка.ЭлектроннаяПочта;
КонецЦикла;
Объект.Получатель = Получатель;
КонецЕсли;
КонецЕсли;
КонецПроцедуры
&НаСервере
Процедура ПриЧтенииНаСервере(ТекущийОбъект)
Содержимое = ТекущийОбъект.Содержимое.Получить();
Заголовок = ТекущийОбъект.Наименование + " (Исходящее письмо)";
Если РаботаСПочтой.ПисьмоОтправлено(ТекущийОбъект.Ссылка) Тогда
Заголовок = Заголовок + " - Отправлено";
КонецЕсли;
КонецПроцедуры
&НаСервере
Процедура ПередЗаписьюНаСервере(Отказ, ТекущийОбъект, ПараметрыЗаписи)
ТекущийОбъект.Содержимое = Новый ХранилищеЗначения(Содержимое, Новый СжатиеДанных());
ТекущийОбъект.Текст = Содержимое.ПолучитьТекст();
КонецПроцедуры
&НаСервере
Функция ОтправитьПисьмо(Ошибка)
Если Не Записать() Тогда
Ошибка = "ОшибкаЗаписи";
Возврат Ложь;
КонецЕсли;
Если Не РаботаСПочтой.ОтправитьПисьмо(Объект.Ссылка) Тогда
Ошибка = "ОшибкаОтправки";
Возврат Ложь;
КонецЕсли;
Заголовок = Заголовок + " - Отправлено";
Возврат Истина;
КонецФункции
&НаКлиенте
Функция ОтправитьПисьмоКлиент()
Ошибка = "";
Если Не ОтправитьПисьмо(Ошибка) Тогда
Если Ошибка = "ОшибкаОтправки" Тогда
Кнопки = Новый СписокЗначений;
Кнопки.Добавить(1, "Настроить почту");
Кнопки.Добавить(2, "Закрыть");
Оп = Новый ОписаниеОповещения(
"ОтправитьПисьмоКлиентВопросЗавершение",
ЭтотОбъект);
ПоказатьВопрос(Оп,
"Не указаны настройки интернет почты!",
Кнопки, , 1);
КонецЕсли;
Возврат Ложь;
КонецЕсли;
НавигационнаяСсылка = ПолучитьНавигационнуюСсылку(Объект.Ссылка);
ПоказатьОповещениеПользователя("Письмо отправлено", НавигационнаяСсылка, Объект.Наименование);
ОповеститьОбИзменении(Объект.Ссылка);
Возврат Истина;
КонецФункции
&НаКлиенте
Процедура ОтправитьПисьмоКлиентВопросЗавершение(Результат, Параметры) Экспорт
Если Результат = 1 Тогда
ОткрытьФорму("ОбщаяФорма.НастройкаПочты");
КонецЕсли;
КонецПроцедуры
&НаКлиенте
Процедура Отправить(Команда)
ОтправитьПисьмоКлиент();
КонецПроцедуры
&НаКлиенте
Процедура ОтправитьИЗакрыть(Команда)
Если Не ОтправитьПисьмоКлиент() Тогда
Возврат;
КонецЕсли;
Закрыть();
КонецПроцедуры
&НаКлиенте
Процедура ВставитьСтрокуВТекущуюПозицию(Поле, Документ, Строка)
Перем Начало, Конец;
Поле.ПолучитьГраницыВыделения(Начало, Конец);
Позиция = Документ.ПолучитьПозициюПоЗакладке(Начало);
Документ.Удалить(Начало, Конец);
Начало = Документ.ПолучитьЗакладкуПоПозиции(Позиция);
Документ.Вставить(Начало, Строка);
Позиция = Позиция + СтрДлина(Строка);
Закладка = Документ.ПолучитьЗакладкуПоПозиции(Позиция);
Поле.УстановитьГраницыВыделения(Закладка, Закладка);
КонецПроцедуры
&НаКлиенте
Процедура ВставитьКонтактноеЛицо(Команда)
Если Объект.Контрагент.Пустая() Тогда
Сообщить("Выберите контрагента");
Иначе
КонтактноеЛицо = ПолучитьКонтактноеЛицоПоПолучателю(Объект.Контрагент);
ВставитьСтрокуВТекущуюПозицию(Элементы.Содержимое, Содержимое, КонтактноеЛицо + " ");
КонецЕсли;
КонецПроцедуры
&НаСервере
Процедура ПослеЗаписиНаСервере(ТекущийОбъект, ПараметрыЗаписи)
Заголовок = ТекущийОбъект.Наименование + " (Исходящее письмо)";
КонецПроцедуры
&НаКлиенте
Процедура КонтрагентПриИзменении(Элемент)
ДобавитьПолучателей(Объект.Получатель, Объект.Контрагент);
КонецПроцедуры
&НаКлиенте
Процедура ВыделитьВажное(Команда)
Перем Начало, Конец;
ВсеВажное = Истина;
Элементы.Содержимое.ПолучитьГраницыВыделения(Начало, Конец);
Если Начало = Конец Тогда
Возврат;
КонецЕсли;
НаборТекстовыхЭлементов = Новый Массив();
Для Каждого ТекстовыйЭлемент Из Содержимое.СформироватьЭлементы(Начало, Конец) Цикл
Если Тип(ТекстовыйЭлемент) = Тип("ТекстФорматированногоДокумента") Тогда
НаборТекстовыхЭлементов.Добавить(ТекстовыйЭлемент);
КонецЕсли;
КонецЦикла;
Для Каждого ТекстовыйЭлемент Из НаборТекстовыхЭлементов Цикл
Если ТекстовыйЭлемент.Шрифт.Жирный <> Истина И
ТекстовыйЭлемент.ЦветТекста <> Новый Цвет(255, 0, 0) Тогда
ВсеВажное = Ложь;
Прервать;
КонецЕсли;
КонецЦикла;
Для Каждого ТекстовыйЭлемент Из НаборТекстовыхЭлементов Цикл
ТекстовыйЭлемент.Шрифт = Новый Шрифт(ТекстовыйЭлемент.Шрифт, , , Не ВсеВажное);
ТекстовыйЭлемент.ЦветТекста = Новый Цвет(?(ВсеВажное, 0, 255), 0, 0);
КонецЦикла;
КонецПроцедуры
&НаКлиенте
Процедура ЗаполнитьПоШаблону(Команда)
Если Объект.Контрагент.Пустая() Тогда
Сообщить("Выберите контрагента");
Иначе
НайтиИЗаменить("[Контрагент]", Объект.Контрагент);
НайтиИЗаменить("[КонтактноеЛицо]", ПолучитьКонтактноеЛицоПоПолучателю(Объект.Контрагент));
КонецЕсли;
НайтиИЗаменить("[ДатаПисьма]", Объект.Дата);
КонецПроцедуры
&НаКлиенте
Процедура НайтиИЗаменить(СтрокаДляПоиска, СтрокаДляЗамены)
Перем ВставленныйТекст, ШрифтОформления, ЦветТекстаОформления, ЦветФонаОформления, НавигационнаяСсылкаОформления;
РезультатПоиска = Содержимое.НайтиТекст(СтрокаДляПоиска);
Пока ((РезультатПоиска <> Неопределено) И (РезультатПоиска.ЗакладкаНачала <> Неопределено) И (РезультатПоиска.ЗакладкаКонца <> Неопределено)) Цикл
ПозицияНачалаСледующегоЦиклаПоиска = Содержимое.ПолучитьПозициюПоЗакладке(РезультатПоиска.ЗакладкаНачала) + СтрДлина(СтрокаДляЗамены);
МассивЭлементовДляОформления = Содержимое.ПолучитьЭлементы(РезультатПоиска.ЗакладкаНачала, РезультатПоиска.ЗакладкаКонца);
Для Каждого ЭлементДляОформления Из МассивЭлементовДляОформления Цикл
Если Тип(ЭлементДляОформления) = Тип("ТекстФорматированногоДокумента") Тогда
ШрифтОформления = ЭлементДляОформления.Шрифт;
ЦветТекстаОформления = ЭлементДляОформления.ЦветТекста;
ЦветФонаОформления = ЭлементДляОформления.ЦветФона;
НавигационнаяСсылкаОформления = ЭлементДляОформления.НавигационнаяССылка;
Прервать;
КонецЕсли;
КонецЦикла;
Содержимое.Удалить(РезультатПоиска.ЗакладкаНачала, РезультатПоиска.ЗакладкаКонца);
ВставленныйТекст = Содержимое.Вставить(РезультатПоиска.ЗакладкаНачала, СтрокаДляЗамены);
Если ВставленныйТекст <> Неопределено И ШрифтОформления <> Неопределено Тогда
ВставленныйТекст.Шрифт = ШрифтОформления;
КонецЕсли;
Если ВставленныйТекст <> Неопределено И ЦветТекстаОформления <> Неопределено Тогда
ВставленныйТекст.ЦветТекста = ЦветТекстаОформления;
КонецЕсли;
Если ВставленныйТекст <> Неопределено И ЦветФонаОформления <> Неопределено Тогда
ВставленныйТекст.ЦветФона = ЦветФонаОформления;
КонецЕсли;
Если ВставленныйТекст <> Неопределено И НавигационнаяСсылкаОформления <> Неопределено Тогда
ВставленныйТекст.НавигационнаяССылка = НавигационнаяСсылкаОформления;
КонецЕсли;
РезультатПоиска = Содержимое.НайтиТекст(СтрокаДляПоиска, Содержимое.ПолучитьЗакладкуПоПозиции(ПозицияНачалаСледующегоЦиклаПоиска));
КонецЦикла;
КонецПроцедуры


@@ -1,85 +0,0 @@
&НаСервере
Функция ПечатнаяФорма(ПараметрКоманды)
ТабличныйДокумент = Новый ТабличныйДокумент;
ТабличныйДокумент.ОтображатьСетку = Истина;
ТабличныйДокумент.ОтображатьЗаголовки = Истина;
Сформирован = Ложь;
ТабМакет = Справочники.Товары.ПолучитьМакет("МакетПрайсЛиста");
Шапка = ТабМакет.ПолучитьОбласть("Шапка");
ТабличныйДокумент.Вывести(Шапка);
ОбластьНоменклатура = ТабМакет.ПолучитьОбласть("ОбластьНоменклатура");
Запрос = Новый Запрос;
Запрос.Текст = "ВЫБРАТЬ
| Товары.Код КАК Код,
| Товары.Наименование КАК Наименование,
| Товары.Артикул КАК Артикул,
| Товары.ФайлКартинки КАК Картинка,
| Товары.Описание КАК Описание,
| Товары.Вид КАК Вид,
| ЦеныТоваров.Цена КАК Цена
|ИЗ
| РегистрСведений.ЦеныТоваров КАК ЦеныТоваров
| ЛЕВОЕ СОЕДИНЕНИЕ Справочник.Товары КАК Товары
| ПО ЦеныТоваров.Товар = Товары.Ссылка
|ГДЕ
| Товары.ЭтоГруппа = ЛОЖЬ
| И ЦеныТоваров.ВидЦен = &ВидЦен
|
|УПОРЯДОЧИТЬ ПО
| Вид,
| Товары.Родитель.Код,
| Код";
Запрос.УстановитьПараметр("ВидЦен", Справочники.ВидыЦен.НайтиПоНаименованию("Розничная"));
Выборка = Запрос.Выполнить().Выбрать();
Пока Выборка.Следующий() Цикл
ОбластьНоменклатура.Параметры.Заполнить(Выборка);
Описание = "";
Чтение = Новый ЧтениеHTML();
Чтение.УстановитьСтроку(Выборка.Описание);
ДокDOM = Новый ПостроительDOM();
HTML = ДокDOM.Прочитать(Чтение);
Если Не HTML.ЭлементДокумента = Неопределено Тогда
Для Каждого Узел из HTML.ЭлементДокумента.ДочерниеУзлы Цикл
Если Узел.ИмяУзла = "body" Тогда
Для Каждого ЭлементОписания из Узел.ДочерниеУзлы Цикл
Описание = Описание + ЭлементОписания.ТекстовоеСодержимое;
КонецЦикла;
КонецЕсли;
КонецЦикла;
КонецЕсли;
ОбластьНоменклатура.Параметры.Описание = Описание;
Если (Выборка.Картинка <> Null) Тогда
ОбластьНоменклатура.Параметры.ПараметрКартинки = Новый Картинка(Выборка.Картинка.ДанныеФайла.Получить());
КонецЕсли;
ТабличныйДокумент.Вывести(ОбластьНоменклатура, Выборка.Уровень());
Сформирован = Истина;
КонецЦикла;
Если Сформирован Тогда
Возврат ТабличныйДокумент;
Иначе
Возврат Неопределено;
КонецЕсли;
КонецФункции
&НаКлиенте
Процедура ОбработкаКоманды(ПараметрКоманды, ПараметрыВыполненияКоманды)
ТабличныйДокумент = ПечатнаяФорма(ПараметрКоманды);
Если ТабличныйДокумент <> Неопределено Тогда
ТабличныйДокумент.Показать();
КонецЕсли;
КонецПроцедуры


@@ -1,109 +0,0 @@
// Процедура на основании анализа типа данных заменяет их на данные, удаляющие
// информацию из узла в котором их не должно быть
//
// Параметры:
// Данные Объект, набор записей,... который нужно преобразовать
//
Процедура УдалениеДанных(Данные)
// Получаем объект описания метаданного, соответствующий данным
ОбъектМетаданных = ?(ТипЗнч(Данные) = Тип("УдалениеОбъекта"), Данные.Ссылка.Метаданные(), Данные.Метаданные());
// Проверяем тип, интересуют только те типы, которые реализованы на мобильной платформе
Если Метаданные.Справочники.Содержит(ОбъектМетаданных)
ИЛИ Метаданные.Документы.Содержит(ОбъектМетаданных) Тогда
// Перенос удаления объекта для объектных
Данные = Новый УдалениеОбъекта(Данные.Ссылка);
ИначеЕсли Метаданные.РегистрыСведений.Содержит(ОбъектМетаданных)
ИЛИ Метаданные.РегистрыНакопления.Содержит(ОбъектМетаданных)
ИЛИ Метаданные.Последовательности.Содержит(ОбъектМетаданных) Тогда
// Очищаем данные
Данные.Очистить();
КонецЕсли;
КонецПроцедуры
// Функция формирует пакет обмена, который будет отправлен узлу "УзелОбмена"
//
// Параметры:
// УзелОбмена узел плана обмена "мобильные", с которым осуществляется обмен
//
// Возвращаемое значение:
// сформированный пакет, помещенный в хранилище значения
Функция СформироватьПакетОбмена(УзелОбмена) Экспорт
ЗаписьXML = Новый ЗаписьXML;
ЗаписьXML.УстановитьСтроку("UTF-8");
ЗаписьXML.ЗаписатьОбъявлениеXML();
ЗаписьСообщения = ПланыОбмена.СоздатьЗаписьСообщения();
ЗаписьСообщения.НачатьЗапись(ЗаписьXML, УзелОбмена);
ЗаписьXML.ЗаписатьСоответствиеПространстваИмен("xsi", "http://www.w3.org/2001/XMLSchema-instance");
ЗаписьXML.ЗаписатьСоответствиеПространстваИмен("v8", "http://v8.1c.ru/data");
ТипДанныхУдаления = Тип("УдалениеОбъекта");
ВыборкаИзменений = ПланыОбмена.ВыбратьИзменения(УзелОбмена, ЗаписьСообщения.НомерСообщения);
Пока ВыборкаИзменений.Следующий() Цикл
Данные = ВыборкаИзменений.Получить();
// Если перенос данных не нужен, то, возможно, необходимо записать удаление данных
Если Не ОбменМобильныеПереопределяемый.НуженПереносДанных(Данные, УзелОбмена) Тогда
// Получаем значение с возможным удалением данных
УдалениеДанных(Данные);
КонецЕсли;
// Записываем данные в сообщение
ОбменМобильныеПереопределяемый.ЗаписатьДанные(ЗаписьXML, Данные);
КонецЦикла;
ЗаписьСообщения.ЗакончитьЗапись();
Возврат Новый ХранилищеЗначения(ЗаписьXML.Закрыть(), Новый СжатиеДанных(9));
КонецФункции
// Процедура вносит в информационную базу данные, которые присланы из узла "УзелОбмена"
//
// Параметры:
// УзелОбмена узел плана обмена "мобильные", с которым осуществляется обмен
// ДанныеОбмена - пакет обмена полученный из узла УзелОбмена, помещен в ХранилищеЗначения
//
Процедура ПринятьПакетОбмена(УзелОбмена, ДанныеОбмена) Экспорт
ЧтениеXML = Новый ЧтениеXML;
ЧтениеXML.УстановитьСтроку(ДанныеОбмена.Получить());
ЧтениеСообщения = ПланыОбмена.СоздатьЧтениеСообщения();
ЧтениеСообщения.НачатьЧтение(ЧтениеXML);
ПланыОбмена.УдалитьРегистрациюИзменений(ЧтениеСообщения.Отправитель,ЧтениеСообщения.НомерПринятого);
НачатьТранзакцию();
Пока ВозможностьЧтенияXML(ЧтениеXML) Цикл
Данные = ОбменМобильныеПереопределяемый.ПрочитатьДанные(ЧтениеXML);
Если Не Данные = Неопределено Тогда
Данные.ОбменДанными.Отправитель = ЧтениеСообщения.Отправитель;
Данные.ОбменДанными.Загрузка = Истина;
Данные.Записать();
КонецЕсли;
КонецЦикла;
ЗафиксироватьТранзакцию();
ЧтениеСообщения.ЗакончитьЧтение();
ЧтениеXML.Закрыть();
КонецПроцедуры


@@ -1,302 +0,0 @@
////////////////////////////////////////////////////////////////////////////////
// ПРОЦЕДУРЫ И ФУНКЦИИ
//
// Формирование печатной формы документа
//
// Параметры:
// Нет.
//
// Возвращаемое значение:
// ТабличныйДокумент - Сформированный табличный документ.
Процедура ПечатнаяФорма(ТабличныйДокумент) Экспорт
Макет = Документы.РасходТовара.ПолучитьМакет("МакетПечати");
// Заголовок
Область = Макет.ПолучитьОбласть("Заголовок");
ТабличныйДокумент.Вывести(Область);
// Шапка
Шапка = Макет.ПолучитьОбласть("Шапка");
Шапка.Параметры.Заполнить(ЭтотОбъект);
ТабличныйДокумент.Вывести(Шапка);
// Товары
Область = Макет.ПолучитьОбласть("ТоварыШапка");
ТабличныйДокумент.Вывести(Область);
ОбластьТовары = Макет.ПолучитьОбласть("Товары");
Для каждого ТекСтрокаТовары Из Товары Цикл
ОбластьТовары.Параметры.Заполнить(ТекСтрокаТовары);
ТабличныйДокумент.Вывести(ОбластьТовары);
КонецЦикла;
КонецПроцедуры
// Формирование печатной формы документа
//
// Параметры:
// Нет.
//
// Возвращаемое значение:
// ТабличныйДокумент - Сформированный табличный документ.
Процедура Пересчитать() Экспорт
Для каждого ТекСтрокаТовары Из Товары Цикл
ТекСтрокаТовары.Сумма = ТекСтрокаТовары.Количество * ТекСтрокаТовары.Цена;
КонецЦикла;
КонецПроцедуры
////////////////////////////////////////////////////////////////////////////////
// ОБРАБОТЧИКИ СОБЫТИЙ ОБЪЕКТА
Процедура ОбработкаПроведения(Отказ, Режим)
// Формирование движений регистров накопления ТоварныеЗапасы и Продажи.
Движения.ТоварныеЗапасы.Записывать = Истина;
Движения.Продажи.Записывать = Истина;
Если Режим = РежимПроведенияДокумента.Оперативный Тогда
Движения.ТоварныеЗапасы.БлокироватьДляИзменения = Истина;
КонецЕсли;
// Создадим запрос, чтобы получать информацию об услугах
Запрос = Новый Запрос("ВЫБРАТЬ
| ТоварыВДокументе.НомерСтроки КАК НомерСтроки
|ИЗ
| Документ.РасходТовара.Товары КАК ТоварыВДокументе
|ГДЕ
| ТоварыВДокументе.Ссылка = &Ссылка
| И ТоварыВДокументе.Товар.Вид = ЗНАЧЕНИЕ(Перечисление.ВидыТоваров.Услуга)");
Запрос.УстановитьПараметр("Ссылка", Ссылка);
РезультатУслуги = Запрос.Выполнить().Выгрузить();
РезультатУслуги.Индексы.Добавить("НомерСтроки");
Для каждого ТекСтрокаТовары Из Товары Цикл
Строка = РезультатУслуги.Найти(ТекСтрокаТовары.НомерСтроки, "НомерСтроки");
Если Строка = Неопределено Тогда
// Не услуга
Движение = Движения.ТоварныеЗапасы.Добавить();
Движение.ВидДвижения = ВидДвиженияНакопления.Расход;
Движение.Период = Дата;
Движение.Товар = ТекСтрокаТовары.Товар;
Движение.Склад = Склад;
Движение.Количество = ТекСтрокаТовары.Количество;
КонецЕсли;
Движение = Движения.Продажи.Добавить();
Движение.Период = Дата;
Движение.Товар = ТекСтрокаТовары.Товар;
Движение.Покупатель = Покупатель;
Движение.Количество = ТекСтрокаТовары.Количество;
Движение.Сумма = ТекСтрокаТовары.Сумма;
КонецЦикла;
// Формирование движения регистра накопления Взаиморасчеты.
Движения.Взаиморасчеты.Записывать = Истина;
Движение = Движения.Взаиморасчеты.Добавить();
Движение.ВидДвижения = ВидДвиженияНакопления.Расход;
Движение.Период = Дата;
Движение.Контрагент = Покупатель;
Движение.Валюта = Валюта;
Если Валюта.Пустая() Тогда
Движение.Сумма = Товары.Итог("Сумма");
Иначе
Курс = РегистрыСведений.КурсыВалют.ПолучитьПоследнее(Дата, Новый Структура("Валюта", Валюта)).Курс;
Если Курс = 0 Тогда
Движение.Сумма = Товары.Итог("Сумма");
Иначе
Движение.Сумма = Товары.Итог("Сумма") / Курс;
КонецЕсли;
КонецЕсли;
//Запишем движения
Движения.Записать();
//Контроль остатков при оперативном проведении
Если Режим = РежимПроведенияДокумента.Оперативный Тогда
// Создадим запрос, чтобы контролировать остатки по товарам
Запрос = Новый Запрос("ВЫБРАТЬ
| ТоварыВДокументе.Товар КАК Товар,
| СУММА(ТоварыВДокументе.Количество) КАК Количество,
| МАКСИМУМ(ТоварыВДокументе.НомерСтроки) КАК НомерСтроки
|
|ПОМЕСТИТЬ ТребуетсяТовара
|
|ИЗ
| Документ.РасходТовара.Товары КАК ТоварыВДокументе
|
|ГДЕ
| ТоварыВДокументе.Ссылка = &Ссылка
| И ТоварыВДокументе.Товар.Вид = ЗНАЧЕНИЕ(Перечисление.ВидыТоваров.Товар)
|
|СГРУППИРОВАТЬ ПО
| ТоварыВДокументе.Товар
|
|ИНДЕКСИРОВАТЬ ПО
| Товар
|;
|
|////////////////////////////////////////////////////////////////////////////////
|ВЫБРАТЬ
| ПРЕДСТАВЛЕНИЕ(ТребуетсяТовара.Товар) КАК ТоварПредставление,
| ВЫБОР
| КОГДА - ЕСТЬNULL(ТоварныеЗапасыОстатки.КоличествоОстаток, 0) > ТоварыВДокументе.Количество
| ТОГДА ТоварыВДокументе.Количество
| ИНАЧЕ - ЕСТЬNULL(ТоварныеЗапасыОстатки.КоличествоОстаток, 0)
| КОНЕЦ КАК Нехватка,
| ТоварыВДокументе.Количество - ВЫБОР
| КОГДА - ЕСТЬNULL(ТоварныеЗапасыОстатки.КоличествоОстаток, 0) > ТоварыВДокументе.Количество
| ТОГДА ТоварыВДокументе.Количество
| ИНАЧЕ - ЕСТЬNULL(ТоварныеЗапасыОстатки.КоличествоОстаток, 0)
| КОНЕЦ КАК МаксимальноеКоличество,
| ТребуетсяТовара.НомерСтроки КАК НомерСтроки
|
|ИЗ
| ТребуетсяТовара КАК ТребуетсяТовара
| ЛЕВОЕ СОЕДИНЕНИЕ РегистрНакопления.ТоварныеЗапасы.Остатки(
| ,
| Товар В
| (ВЫБРАТЬ
| ТребуетсяТовара.Товар
| ИЗ
| ТребуетсяТовара)
| И Склад = &Склад) КАК ТоварныеЗапасыОстатки
| ПО ТребуетсяТовара.Товар = ТоварныеЗапасыОстатки.Товар
| ЛЕВОЕ СОЕДИНЕНИЕ Документ.РасходТовара.Товары КАК ТоварыВДокументе
| ПО ТребуетсяТовара.Товар = ТоварыВДокументе.Товар
| И ТребуетсяТовара.НомерСтроки = ТоварыВДокументе.НомерСтроки
|
|ГДЕ
| ТоварыВДокументе.Ссылка = &Ссылка И
| 0 > ЕСТЬNULL(ТоварныеЗапасыОстатки.КоличествоОстаток, 0)
|
|УПОРЯДОЧИТЬ ПО
| НомерСтроки");
Запрос.УстановитьПараметр("Склад", Склад);
Запрос.УстановитьПараметр("Ссылка", Ссылка);
РезультатСНехваткой = Запрос.Выполнить();
ВыборкаРезультатаСНехваткой = РезультатСНехваткой.Выбрать();
// Выдадим ошибки для строк, в которых не хватает остатка
Пока ВыборкаРезультатаСНехваткой.Следующий() Цикл
Сообщение = Новый СообщениеПользователю();
Сообщение.Текст = НСтр("ru = 'Не хватает '", "ru")
+ ВыборкаРезультатаСНехваткой.Нехватка
+ НСтр("ru = ' единиц товара'", "ru") + """"
+ ВыборкаРезультатаСНехваткой.ТоварПредставление
+ """"
+ НСтр("ru = ' на складе'", "ru")
+ """"
+ Склад
+ """."
+ НСтр("ru = 'Максимальное количество: '", "ru")
+ ВыборкаРезультатаСНехваткой.МаксимальноеКоличество
+ ".";
Сообщение.Поле = НСтр("ru = 'Товары'", "ru")
+ "["
+ (ВыборкаРезультатаСНехваткой.НомерСтроки - 1)
+ "]."
+ НСтр("ru = 'Количество'", "ru");
Сообщение.УстановитьДанные(ЭтотОбъект);
Сообщение.Сообщить();
Отказ = Истина;
КонецЦикла;
КонецЕсли;
КонецПроцедуры
Процедура ОбработкаПроверкиЗаполнения(Отказ, ПроверяемыеРеквизиты)
// Проверим заполненность поля "Покупатель"
Если Покупатель.Пустая() Тогда
// Если поле Покупатель не заполнено, сообщим об этом пользователю
Сообщение = Новый СообщениеПользователю();
Сообщение.Текст = НСтр("ru = 'Не указан Покупатель, для которого выписывается накладная!'", "ru");
Сообщение.Поле = НСтр("ru = 'Покупатель'", "ru");
Сообщение.УстановитьДанные(ЭтотОбъект);
Сообщение.Сообщить();
// Сообщим платформе, что мы сами обработали проверку заполнения поля "Покупатель"
ПроверяемыеРеквизиты.Удалить(ПроверяемыеРеквизиты.Найти("Покупатель"));
// Так как информация в документе не консистентна, то продолжать работу дальше смысла нет
Отказ = Истина;
КонецЕсли;
//Если склад не заполнен, то проверим есть ли в документе что-то кроме услуг
Если Склад.Пустая() И Товары.Количество() > 0 Тогда
// Создадим запрос, чтобы получать информацию об товарах
Запрос = Новый Запрос("ВЫБРАТЬ
| Количество(*) КАК Количество
|ИЗ
| Справочник.Товары КАК Товары
|ГДЕ
| Товары.Ссылка В (&ТоварыВДокументе)
| И Товары.Вид = ЗНАЧЕНИЕ(Перечисление.ВидыТоваров.Товар)");


@@ -1,20 +0,0 @@
Каталог = ОбъединитьПути(ТекущийКаталог(), "libs\oscript-library\src");
Загрузчик_Оригинал_ИмяФайла = ОбъединитьПути(Каталог, "package-loader.os");
Файлы = НайтиФайлы(Каталог, , Ложь);
Для Каждого ВыбФайл Из Файлы Цикл
Если ВыбФайл.ЭтоФайл() Тогда
Продолжить;
КонецЕсли;
Загрузчик_ИмяФайла = ОбъединитьПути(ВыбФайл.ПолноеИмя, "package-loader.os");
Загрузчик_Файл = Новый Файл(Загрузчик_ИмяФайла);
Если Загрузчик_Файл.Существует() Тогда
Продолжить;
КонецЕсли;
КопироватьФайл(Загрузчик_Оригинал_ИмяФайла, Загрузчик_ИмяФайла);
КонецЦикла;


@@ -1,42 +0,0 @@
#Использовать "../libs/oscript-library/src/v8runner"
#Использовать "../libs/oscript-library/src/tempfiles"
Перем Лог;
Перем КодВозврата;
Процедура Инициализация()
Лог = Логирование.ПолучитьЛог("oscript.app.gitlab-test_CanCompile");
КодВозврата = 0;
КонецПроцедуры
Процедура ВыполнитьТест()
Конфигуратор = Новый УправлениеКонфигуратором();
ПараметрыЗапуска = Конфигуратор.ПолучитьПараметрыЗапуска();
КомандаЗапуска = "/LoadConfigFromFiles ""%1""";
КомандаЗапуска = СтрШаблон(КомандаЗапуска, ТекущийКаталог() + "\source\cf");
Лог.Информация("Команда обновления конфигурации: " + КомандаЗапуска);
ПараметрыЗапуска.Добавить(КомандаЗапуска);
Попытка
Конфигуратор.ВыполнитьКоманду(ПараметрыЗапуска);
Исключение
Лог.Ошибка(Конфигуратор.ВыводКоманды());
КодВозврата = 1;
КонецПопытки;
УдалитьФайлы(Конфигуратор.ПутьКВременнойБазе());
КонецПроцедуры
Инициализация();
ВыполнитьТест();
ЗавершитьРаботу(КодВозврата);


@@ -1,190 +0,0 @@
; Source: https://github.com/toml-lang/toml
; License: MIT
;; This is an attempt to define TOML in ABNF according to the grammar defined
;; in RFC 4234 (http://www.ietf.org/rfc/rfc4234.txt).
;; TOML
toml = expression *( newline expression )
expression = (
ws /
ws comment /
ws keyval ws [ comment ] /
ws table ws [ comment ]
)
;; Newline
newline = (
%x0A / ; LF
%x0D.0A ; CRLF
)
newlines = 1*newline
;; Whitespace
ws = *(
%x20 / ; Space
%x09 ; Horizontal tab
)
;; Comment
comment-start-symbol = %x23 ; #
non-eol = %x09 / %x20-10FFFF
comment = comment-start-symbol *non-eol
;; Key-Value pairs
keyval-sep = ws %x3D ws ; =
keyval = key keyval-sep val
key = unquoted-key / quoted-key
unquoted-key = 1*( ALPHA / DIGIT / %x2D / %x5F ) ; A-Z / a-z / 0-9 / - / _
quoted-key = quotation-mark 1*basic-char quotation-mark ; See Basic Strings
val = integer / float / string / boolean / date-time / array / inline-table
;; Table
table = std-table / array-table
;; Standard Table
std-table-open = %x5B ws ; [ Left square bracket
std-table-close = ws %x5D ; ] Right square bracket
table-key-sep = ws %x2E ws ; . Period
std-table = std-table-open key *( table-key-sep key) std-table-close
;; Array Table
array-table-open = %x5B.5B ws ; [[ Double left square bracket
array-table-close = ws %x5D.5D ; ]] Double right square bracket
array-table = array-table-open key *( table-key-sep key) array-table-close
;; Integer
integer = [ minus / plus ] int
minus = %x2D ; -
plus = %x2B ; +
digit1-9 = %x31-39 ; 1-9
underscore = %x5F ; _
int = DIGIT / digit1-9 1*( DIGIT / underscore DIGIT )
;; Float
float = integer ( frac / frac exp / exp )
zero-prefixable-int = DIGIT *( DIGIT / underscore DIGIT )
frac = decimal-point zero-prefixable-int
decimal-point = %x2E ; .
exp = e integer
e = %x65 / %x45 ; e E
;; String
string = basic-string / ml-basic-string / literal-string / ml-literal-string
;; Basic String
basic-string = quotation-mark *basic-char quotation-mark
quotation-mark = %x22 ; "
basic-char = basic-unescaped / escaped
escaped = escape ( %x22 / ; " quotation mark U+0022
%x5C / ; \ reverse solidus U+005C
%x2F / ; / solidus U+002F
%x62 / ; b backspace U+0008
%x66 / ; f form feed U+000C
%x6E / ; n line feed U+000A
%x72 / ; r carriage return U+000D
%x74 / ; t tab U+0009
%x75 4HEXDIG / ; uXXXX U+XXXX
%x55 8HEXDIG ) ; UXXXXXXXX U+XXXXXXXX
basic-unescaped = %x20-21 / %x23-5B / %x5D-10FFFF
escape = %x5C ; \
;; Multiline Basic String
ml-basic-string-delim = quotation-mark quotation-mark quotation-mark
ml-basic-string = ml-basic-string-delim ml-basic-body ml-basic-string-delim
ml-basic-body = *( ml-basic-char / newline / ( escape newline ))
ml-basic-char = ml-basic-unescaped / escaped
ml-basic-unescaped = %x20-5B / %x5D-10FFFF
;; Literal String
literal-string = apostraphe *literal-char apostraphe
apostraphe = %x27 ; ' Apostrophe
literal-char = %x09 / %x20-26 / %x28-10FFFF
;; Multiline Literal String
ml-literal-string-delim = apostraphe apostraphe apostraphe
ml-literal-string = ml-literal-string-delim ml-literal-body ml-literal-string-delim
ml-literal-body = *( ml-literal-char / newline )
ml-literal-char = %x09 / %x20-10FFFF
;; Boolean
boolean = true / false
true = %x74.72.75.65 ; true
false = %x66.61.6C.73.65 ; false
;; Datetime (as defined in RFC 3339)
date-fullyear = 4DIGIT
date-month = 2DIGIT ; 01-12
date-mday = 2DIGIT ; 01-28, 01-29, 01-30, 01-31 based on month/year
time-hour = 2DIGIT ; 00-23
time-minute = 2DIGIT ; 00-59
time-second = 2DIGIT ; 00-58, 00-59, 00-60 based on leap second rules
time-secfrac = "." 1*DIGIT
time-numoffset = ( "+" / "-" ) time-hour ":" time-minute
time-offset = "Z" / time-numoffset
partial-time = time-hour ":" time-minute ":" time-second [time-secfrac]
full-date = date-fullyear "-" date-month "-" date-mday
full-time = partial-time time-offset
date-time = full-date "T" full-time
;; Array
array-open = %x5B ws ; [
array-close = ws %x5D ; ]
array = array-open array-values array-close
array-values = [ val [ array-sep ] [ ( comment newlines) / newlines ] /
val array-sep [ ( comment newlines) / newlines ] array-values ]
array-sep = ws %x2C ws ; , Comma
;; Inline Table
inline-table-open = %x7B ws ; {
inline-table-close = ws %x7D ; }
inline-table-sep = ws %x2C ws ; , Comma
inline-table = inline-table-open inline-table-keyvals inline-table-close
inline-table-keyvals = [ inline-table-keyvals-non-empty ]
inline-table-keyvals-non-empty = key keyval-sep val /
key keyval-sep val inline-table-sep inline-table-keyvals-non-empty
;; Built-in ABNF terms, reproduced here for clarity
; ALPHA = %x41-5A / %x61-7A ; A-Z / a-z
; DIGIT = %x30-39 ; 0-9
; HEXDIG = DIGIT / "A" / "B" / "C" / "D" / "E" / "F"


@@ -1,58 +0,0 @@
param num_beams; # number of beams
param num_rows >= 1, integer; # number of rows
param num_cols >= 1, integer; # number of columns
set BEAMS := 1 .. num_beams; # set of beams
set ROWS := 1 .. num_rows; # set of rows
set COLUMNS := 1 .. num_cols; # set of columns
# values for entries of each beam
param beam_values {BEAMS, ROWS, COLUMNS} >= 0;
# values of tumor
param tumor_values {ROWS, COLUMNS} >= 0;
# values of critical area
param critical_values {ROWS, COLUMNS} >= 0;
# critical maximum dosage requirement
param critical_max;
# tumor minimum dosage requirement
param tumor_min;
# dosage scalar of each beam
var X {i in BEAMS} >= 0;
# define the tumor area which includes the locations where tumor exists
set tumor_area := {k in ROWS, h in COLUMNS: tumor_values[k,h] > 0};
# define critical area
set critical_area := {k in ROWS, h in COLUMNS: critical_values[k,h] > 0};
var S {(k,h) in tumor_area} >= 0;
var T {(k,h) in critical_area} >= 0;
# maximize total dosage in tumor area
maximize total_tumor_dosage: sum {i in BEAMS} sum {(k,h) in tumor_area} X[i] * beam_values[i,k,h];
# minimize total dosage in critical area
minimize total_critical_dosage: sum {i in BEAMS} sum {(k,h) in critical_area} X[i] * beam_values[i,k,h];
# minimize total tumor slack
minimize total_tumor_slack: sum {(k,h) in tumor_area} S[k,h];
# minimize total critical area slack
minimize total_critical_slack: sum {(k,h) in critical_area} T[k,h];
# total dosage at each tumor location [k,h] should be >= min tumor dosage with slack variable
subject to tumor_limit {(k,h) in tumor_area} : sum {i in BEAMS} X[i] * beam_values[i,k,h] == tumor_min - S[k,h];
# total dosage at each critical location [k,h] should be = max critical dosage with slack variable
subject to critical_limit {(k,h) in critical_area} : sum {i in BEAMS} X[i] * beam_values[i,k,h] == critical_max + T[k,h];


@@ -1,25 +0,0 @@
# A toy knapsack problem from the LocalSolver docs written in AMPL.
set I;
param Value{I};
param Weight{I};
param KnapsackBound;
var Take{I} binary;
maximize TotalValue: sum{i in I} Take[i] * Value[i];
s.t. WeightLimit: sum{i in I} Take[i] * Weight[i] <= KnapsackBound;
data;
param:
I: Weight Value :=
0 10 1
1 60 10
2 30 15
3 40 40
4 30 60
5 20 90
6 20 100
7 2 15;
param KnapsackBound := 102;


@@ -1,55 +0,0 @@
FORMAT: 1A
# Advanced Action API
A resource action is in fact a state transition. This API example demonstrates an action - state transition - to another resource.
## API Blueprint
+ [Previous: Resource Model](11.%20Resource%20Model.md)
+ [This: Raw API Blueprint](https://raw.github.com/apiaryio/api-blueprint/master/examples/11.%20Advanced%20Action.md)
# Tasks [/tasks/tasks{?status,priority}]
+ Parameters
+ status (string)
+ priority (number)
## List All Tasks [GET]
+ Response 200 (application/json)
[
{
"id": 123,
"name": "Exercise in gym",
"done": false,
"type": "task"
},
{
"id": 124,
"name": "Shop for groceries",
"done": true,
"type": "task"
}
]
## Retrieve Task [GET /task/{id}]
This is a state transition to another resource
+ Parameters
+ id (string)
+ Response 200 (application/json)
{
"id": 123,
"name": "Go to gym",
"done": false,
"type": "task"
}
## Delete Task [DELETE /task/{id}]
+ Parameters
+ id (string)
+ Response 204


@@ -1,39 +0,0 @@
FORMAT: 1A
# Attributes API
This API example demonstrates how to describe body attributes of a request or response message.
In this case, the description is complementary (and duplicate!) to the provided JSON example in the body section. The [Advanced Attributes](09.%20Advanced%20Attributes.md) API example will demonstrate how to avoid duplicates and how to reuse attributes descriptions.
## API Blueprint
+ [Previous: Parameters](07.%20Parameters.md)
+ [This: Raw API Blueprint](https://raw.github.com/apiaryio/api-blueprint/master/examples/08.%20Attributes.md)
+ [Next: Advanced Attributes](09.%20Advanced%20Attributes.md)
# Group Coupons
## Coupon [/coupons/{id}]
A coupon contains information about a percent-off or amount-off discount you might want to apply to a customer.
### Retrieve a Coupon [GET]
Retrieves the coupon with the given ID.
+ Response 200 (application/json)
+ Attributes (object)
+ id: 250FF (string)
+ created: 1415203908 (number) - Time stamp
+ percent_off: 25 (number)
A positive integer between 1 and 100 that represents the discount the coupon will apply.
+ redeem_by (number) - Date after which the coupon can no longer be redeemed
+ Body
{
"id": "250FF",
"created": 1415203908,
"percent_off": 25,
"redeem_by:" null
}


@@ -1,18 +0,0 @@
FORMAT: 1A
# The Simplest API
This is one of the simplest APIs written in the **API Blueprint**.
One plain resource combined with a method and that's it! We will explain what is going on in the next installment - [Resource and Actions](02.%20Resource%20and%20Actions.md).
**Note:** As we progress through the examples, do not also forget to view the [Raw](https://raw.github.com/apiaryio/api-blueprint/master/examples/01.%20Simplest%20API.md) code to see what is really going on in the API Blueprint, as opposed to just seeing the output of the Github Markdown parser.
Also please keep in mind that every single example in this course is a **real API Blueprint** and as such you can **parse** it with the [API Blueprint parser](https://github.com/apiaryio/drafter) or one of its [bindings](https://github.com/apiaryio/drafter#bindings).
## API Blueprint
+ [This: Raw API Blueprint](https://raw.github.com/apiaryio/api-blueprint/master/examples/01.%20Simplest%20API.md)
+ [Next: Resource and Actions](02.%20Resource%20and%20Actions.md)
# GET /message
+ Response 200 (text/plain)
Hello World!


@@ -1,367 +0,0 @@
:NameSpace UT
sac ← 0
expect_orig ← expect ← ⎕NS⍬
exception ← ⍬
nexpect_orig ← nexpect ← ⎕NS⍬
∇ {Z}←{Conf}run Argument;PRE_test;POST_test;TEST_step;COVER_step;FromSpace
load_display_if_not_already_loaded
load_salt_scripts_into_current_namespace_if_configured
FromSpace←1⊃⎕RSI
PRE_test←{}
POST_test←{}
COVER_step←{}
:If 0≠⎕NC'Conf'
:If Conf has'cover_target'
PRE_test←{{}⎕PROFILE'start'}
POST_test←{{}⎕PROFILE'stop'}
:EndIf
:EndIf
:If is_function Argument
TEST_step←single_function_test_function
COVER_file←Argument,'_coverage.html'
:ElseIf is_list_of_functions Argument
TEST_step←list_of_functions_test_function
COVER_file←'list_coverage.html'
:ElseIf is_file Argument
TEST_step←file_test_function
COVER_file←(get_file_name Argument),'_coverage.html'
:ElseIf is_dir Argument
test_files←test_files_in_dir Argument
TEST_step←test_dir_function
Argument←test_files
:EndIf
:If 0≠⎕NC'Conf'
:If Conf has'cover_target'
COVER_step←{Conf,←⊂('cover_file'COVER_file)
generate_coverage_page Conf}
:EndIf
:EndIf
PRE_test ⍬
Z←FromSpace TEST_step Argument
POST_test ⍬
COVER_step ⍬
∇ load_display_if_not_already_loaded
:If 0=⎕NC'#.DISPLAY'
'DISPLAY'#.⎕CY'display'
:EndIf
∇ load_salt_scripts_into_current_namespace_if_configured
:If 0≠⎕NC'#.UT.appdir'
:If ⍬≢#.UT.appdir
⎕SE.SALT.Load #.UT.appdir,'src/*.dyalog -target=#'
⎕SE.SALT.Load #.UT.appdir,'test/*.dyalog -target=#'
:EndIf
:EndIf
∇ Z←FromSpace single_function_test_function TestName
Z←run_ut FromSpace TestName
∇ Z←FromSpace list_of_functions_test_function ListOfNames;t
t←⎕TS
Z←run_ut¨{FromSpace ⍵}¨ListOfNames
t←⎕TS-t
('Test execution report')print_passed_crashed_failed Z t
∇ Z←FromSpace file_test_function FilePath;FileNS;Functions;TestFunctions;t
FileNS←⎕SE.SALT.Load FilePath,' -target=#'
Functions←↓FileNS.⎕NL 3
TestFunctions←(is_test¨Functions)/Functions
:If (0/⍬,⊂0/'')≡TestFunctions
⎕←'No test functions found'
Z←⍬
:Else
t←⎕TS
Z←run_ut¨{FileNS ⍵}¨TestFunctions
t←⎕TS-t
(FilePath,' tests')print_passed_crashed_failed Z t
:EndIf
∇ Z←FromSpace test_dir_function Test_files
:If Test_files≡⍬/⍬,⊂''
⎕←'No test files found'
Z←⍬
:Else
Z←#.UT.run¨Test_files
:EndIf
∇ Z←get_file_name Argument;separator
separator←⊃⌽(Argument∊'/\')/Argument
Z←¯7↓separator↓Argument
∇ generate_coverage_page Conf;ProfileData;CoverResults;HTML
ProfileData←⎕PROFILE'data'
ToCover←retrieve_coverables¨(⊃'cover_target'in Conf)
:If (ToCover)≡(⊂1)
ToCover←⊃ToCover
:EndIf
Representations←get_representation¨ToCover
CoverResults←ProfileData∘generate_cover_result¨↓ToCover,[1.5]Representations
HTML←generate_html CoverResults
Conf write_html_to_page HTML
⎕PROFILE'clear'
∇ Z←retrieve_coverables Something;nc;functions
nc←⎕NC Something
:If nc=3
Z←Something
:ElseIf nc=9
functions←strip¨↓⍎Something,'.⎕NL 3'
Z←{(Something,'.',⍵)}¨functions
:EndIf
∇ Z←strip input
Z←(input≠' ')/input
∇ Z←get_representation Function;nc;rep
nc←⎕NC⊂Function
:If nc=3.1
rep←↓⎕CR Function
rep[1]←⊂'∇',⊃rep[1]
rep,←⊂'∇'
rep←↑rep
:Else
rep←⎕CR Function
:EndIf
Z←rep
∇ Z←ProfileData generate_cover_result(name representation);Indices;lines;functionlines;covered_lines
Indices←({name≡⍵}¨ProfileData[;1])/ProfileData[;1]
lines←ProfileData[Indices;2]
nc←⎕NC⊂name
:If 3.1=nc
functionlines←¯2+↓representation
:Else
functionlines←⊃↓representation
:EndIf
covered_lines←(⍬∘≢¨lines)/lines
Z←(nc lines functionlines covered_lines representation)
∇ Z←generate_html CoverResults;Covered;Total;Percentage;CoverageText;ColorizedCode;Timestamp;Page
Covered←⊃⊃+/{4⊃⍵}¨CoverResults
Total←⊃⊃+/{3⊃⍵}¨CoverResults
Percentage←100×Covered÷Total
CoverageText←'Coverage: ',Percentage,'% (',Covered,'/',Total,')'
ColorizedCode←⊃,/{colorize_code_by_coverage ⍵}¨CoverResults
Timestamp←generate_timestamp_text
Page←⍬
Page,←⊂⍬,'<html>'
Page,←⊂⍬,'<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>'
Page,←⊂⍬,'<style>pre cov {line-height:80%;}'
Page,←⊂⍬,'pre cov {color: green;}'
Page,←⊂⍬,'pre uncov {line-height:80%;}'
Page,←⊂⍬,'pre uncov {color:red;}</style>'
Page,←⊂⍬,CoverageText
Page,←⊂⍬,'<pre>'
Page,←ColorizedCode
Page,←⊂⍬,'</pre>'
Page,←Timestamp
Page,←⊂⍬,'</html>'
Z←Page
∇ Z←colorize_code_by_coverage CoverResult;Colors;Ends;Code
:If 3.1=⊃CoverResult
Colors←(2+3⊃CoverResult)⍴⊂'<uncov>'
Colors[1]←⊂''
Colors[⍴Colors]←⊂''
Ends←(2+3⊃CoverResult)⍴⊂'</uncov>'
Ends[1]←⊂''
Ends[⍴Ends]←⊂''
:Else
Colors←(3⊃CoverResult)⍴⊂'<uncov>'
Ends←(3⊃CoverResult)⍴⊂'</uncov>'
:EndIf
Colors[1+4⊃CoverResult]←⊂'<cov>'
Ends[1+4⊃CoverResult]←⊂'</cov>'
Code←↓5⊃CoverResult
Z←Colors,[1.5]Code
Z←{⍺,(⎕UCS 13),⍵}/Z,Ends
∇ Z←generate_timestamp_text;TS;YYMMDD;HHMMSS
TS←⎕TS
YYMMDD←⊃{⍺,'-',⍵}/3↑TS
HHMMSS←⊃{⍺,':',⍵}/3↑3↓TS
Z←'Page generated: ',YYMMDD,'|',HHMMSS
∇ Conf write_html_to_page Page;tie;filename
filename←(⊃'cover_out'in Conf),(⊃'cover_file'in Conf)
:Trap 22
tie←filename ⎕NTIE 0
filename ⎕NERASE tie
filename ⎕NCREATE tie
:Else
tie←filename ⎕NCREATE 0
:EndTrap
Simple_array←⍕⊃,/Page
(⎕UCS'UTF-8'⎕UCS Simple_array)⎕NAPPEND tie
∇ Z←is_function Argument
Z←'_TEST'≡¯5↑Argument
∇ Z←is_list_of_functions Argument
Z←2=≡Argument
∇ Z←is_file Argument
Z←'.dyalog'≡¯7↑Argument
∇ Z←is_dir Argument;attr
:If 'Linux'≡5↑⊃'.'⎕WG'APLVersion'
Z←'yes'≡⊃⎕CMD'test -d ',Argument,' && echo yes || echo no'
:Else
'gfa'⎕NA'I kernel32|GetFileAttributes* <0t'
:If Z←¯1≠attr←gfa⊂Argument ⍝ If file exists
Z←⊃2 16⊤attr ⍝ Return bit 4
:EndIf
:EndIf
∇ Z←test_files_in_dir Argument
:If 'Linux'≡5↑⊃'.'⎕WG'APLVersion'
Z←⎕SH'find ',Argument,' -name \*_tests.dyalog'
:Else
#.⎕CY'files'
Z←#.Files.Dir Argument,'\*_tests.dyalog'
Z←(Argument,'\')∘,¨Z
:EndIf
∇ Z←run_ut ut_data;returned;crashed;pass;crash;fail;message
(returned crashed time)←execute_function ut_data
(pass crash fail)←determine_pass_crash_or_fail returned crashed
message←determine_message pass fail crashed(2⊃ut_data)returned time
print_message_to_screen message
Z←(pass crash fail)
∇ Z←execute_function ut_data;function;t
reset_UT_globals
function←(⍕(⊃ut_data[1])),'.',⊃ut_data[2]
:Trap sac
:If 3.2≡⎕NC⊂function
t←⎕TS
Z←(⍎function,' ⍬')0
t←⎕TS-t
:Else
t←⎕TS
Z←(⍎function)0
t←⎕TS-t
:EndIf
:Else
Z←(↑⎕DM)1
:If exception≢⍬
expect←exception
Z[2]←0
t←⎕TS-t
:EndIf
:EndTrap
Z,←⊂t
∇ reset_UT_globals
expect_orig ← expect← ⎕NS⍬
exception←⍬
nexpect_orig ← nexpect← ⎕NS⍬
∇ Z←is_test FunctionName;wsIndex
wsIndex←FunctionName⍳' '
FunctionName←(wsIndex-1)↑FunctionName
Z←'_TEST'≡¯5↑FunctionName
∇ Heading print_passed_crashed_failed(ArrayRes time)
⎕←'-----------------------------------------'
⎕←Heading
⎕←' ⍋ Passed: ',+/{1⊃⍵}¨ArrayRes
⎕←' ⍟ Crashed: ',+/{2⊃⍵}¨ArrayRes
⎕←' ⍒ Failed: ',+/{3⊃⍵}¨ArrayRes
⎕←' ○ Runtime: ',time[5],'m',time[6],'s',time[7],'ms'
determine_pass_crash_or_fail←{
r c←⍵ ⋄ 0≠c:0 1 0 ⋄ z←(0 0 1)(1 0 0)
expect_orig≢expect:(⎕IO+expect≡r)⊃z ⋄ (⎕IO+nexpect≢r)⊃z
}
∇ Z←determine_message(pass fail crashed name returned time)
:If crashed
Z←'CRASHED: 'failure_message name returned
:ElseIf pass
Z←'Passed ',time[5],'m',time[6],'s',time[7],'ms'
:Else
Z←'FAILED: 'failure_message name returned
:EndIf
∇ print_message_to_screen message
⎕←message
∇ Z←term_to_text Term;Text;Rows
Text←#.DISPLAY Term
Rows←1⊃⍴Text
Z←(Rows 4⍴''),Text
∇ Z←Cause failure_message(name returned);hdr;exp;expterm;got;gotterm
hdr←Cause,name
exp←'Expected'
expterm←term_to_text #.UT.expect
got←'Got'
gotterm←term_to_text returned
Z←align_and_join_message_parts hdr exp expterm got gotterm
∇ Z←align_and_join_message_parts Parts;hdr;exp;expterm;got;gotterm;R1;C1;R2;C2;W
(hdr exp expterm got gotterm)←Parts
(R1 C1)←expterm
(R2 C2)←gotterm
W←⊃⊃⌈/C1 C2(⍴hdr)(⍴exp)(⍴got)
Z←(W↑hdr),[0.5](W↑exp)
Z←Z⍪(R1 W↑expterm)
Z←Z⍪(W↑got)
Z←Z⍪(R2 W↑gotterm)
∇ Z←confparam in config
Z←1↓⊃({confparam≡⊃⍵}¨config)/config
∇ Z←config has confparam
Z←∨/{confparam≡⊃⍵}¨config
:EndNameSpace

View File

@@ -1,7 +0,0 @@
#!/usr/local/bin/apl --script
NEWLINE ← ⎕UCS 10
HEADERS ← 'Content-Type: text/plain', NEWLINE
HEADERS
⍝ ⎕←HEADERS
⍝ ⍕⎕TS
)OFF

View File

@@ -1,33 +0,0 @@
MyShopPurchaseOrders DEFINITIONS AUTOMATIC TAGS ::= BEGIN
PurchaseOrder ::= SEQUENCE {
dateOfOrder DATE,
customer CustomerInfo,
items ListOfItems
}
CustomerInfo ::= SEQUENCE {
companyName VisibleString (SIZE (3..50)),
billingAddress Address,
contactPhone NumericString (SIZE (7..12))
}
Address::= SEQUENCE {
street VisibleString (SIZE (5 .. 50)) OPTIONAL,
city VisibleString (SIZE (2..30)),
state VisibleString (SIZE(2) ^ FROM ("A".."Z")),
zipCode NumericString (SIZE(5 | 9))
}
ListOfItems ::= SEQUENCE (SIZE (1..100)) OF Item
Item ::= SEQUENCE {
itemCode INTEGER (1..99999),
color VisibleString ("Black" | "Blue" | "Brown"),
power INTEGER (110 | 220),
deliveryTime INTEGER (8..12 | 14..19),
quantity INTEGER (1..1000),
unitPrice REAL (1.00 .. 9999.00),
isTaxable BOOLEAN
}
END

View File

@@ -1,318 +0,0 @@
(*
* The MIT License (MIT)
*
* Copyright (c) 2014 Hongwei Xi
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.)
*)
// Source: https://github.com/githwxi/ATS-Postiats-contrib/blob/201d635062d0ea64ff5ba5457a4ea0bb4d5ae202/contrib/libats-/hwxi/teaching/mysession-g/SATS/basis_ssntype.sats
(*
** Basis for g-session types
*)
(* ****** ****** *)
//
staload
"./basis_intset.sats"
//
(* ****** ****** *)
//
fun{}
channel_cap(): intGte(1)
//
(* ****** ****** *)
//
abstype
session_msg
(i:int, j:int, a:vt@ype)
//
(* ****** ****** *)
abstype ssession_nil
abstype ssession_cons(a:type, ssn:type)
(* ****** ****** *)
//
stadef msg = session_msg
//
stadef nil = ssession_nil
//
stadef :: = ssession_cons
stadef cons = ssession_cons
//
(* ****** ****** *)
//
abstype
session_append
(ssn1: type, ssn2: type)
//
stadef append = session_append
//
(* ****** ****** *)
//
abstype
session_choose
(
i:int, ssn1:type, ssn2:type
) (* session_choose *)
//
stadef choose = session_choose
//
(* ****** ****** *)
//
abstype
session_repeat
(
i:int, ssn:type(*body*)
) (* session_repeat *)
//
stadef repeat = session_repeat
//
(* ****** ****** *)
//
typedef
session_sing
(
i: int
, j: int
, a:vt@ype
) = cons(msg(i, j, a), nil)
//
(* ****** ****** *)
//
absvtype
channel1_vtype
(G:iset, n:int, ssn:type) = ptr
//
vtypedef
channel1
(G:iset, n:int, ssn:type) = channel1_vtype(G, n, ssn)
//
vtypedef
cchannel1
(G:iset, n:int, ssn:type) = channel1_vtype(ncomp(n, G), n, ssn)
//
(* ****** ****** *)
//
fun{}
channel1_get_nrole
{n:int}{ssn:type}{G:iset}
(chan: !channel1(G, n, ssn)): int(n)
//
fun{}
channel1_get_group
{n:int}{ssn:type}{G:iset}
(chan: !channel1(G, n, ssn)): intset(n,G)
//
(* ****** ****** *)
//
fun
{a:vt0p}
channel1_close
{n:int}{ssn:type}{G:iset}(chan: channel1(G, n, nil)): void
//
(* ****** ****** *)
//
fun{}
channel1_skipin
{a:vt0p}
{n:int}{ssn:type}{G:iset}
{i,j:nat | ismbr(G, i); ismbr(G, j)}
(
!channel1(G, n, msg(i, j, a)::ssn) >> channel1(G, n, ssn)
) : void // end-of-function
praxi
lemma_channel1_skipin
{a:vt0p}
{n:int}{ssn:type}{G:iset}
{i,j:nat | ismbr(G, i); ismbr(G, j)}
(
!channel1(G, n, msg(i, j, a)::ssn) >> channel1(G, n, ssn)
) : void // lemma_channel1_skipin
//
fun{}
channel1_skipex
{a:vt0p}
{n:int}{ssn:type}{G:iset}
{i,j:nat | ~ismbr(G, i); ~ismbr(G, j)}
(
!channel1(G, n, msg(i, j, a)::ssn) >> channel1(G, n, ssn)
) : void // end-of-function
praxi
lemma_channel1_skipex
{a:vt0p}
{n:int}{ssn:type}{G:iset}
{i,j:nat | ~ismbr(G, i); ~ismbr(G, j)}
(
!channel1(G, n, msg(i, j, a)::ssn) >> channel1(G, n, ssn)
) : void // lemma_channel1_skipex
//
(* ****** ****** *)
//
fun
{a:vt0p}
channel1_send
{n:int}{ssn:type}{G:iset}
{i,j:nat | i < n; j < n; ismbr(G, i); ~ismbr(G, j)}
(
!channel1(G, n, msg(i, j, a)::ssn) >> channel1(G, n, ssn), int(i), int(j), a
) : void // end of [channel1_send]
//
fun
{a:vt0p}
channel1_recv
{n:int}{ssn:type}{G:iset}
{i,j:nat | i < n; j < n; ~ismbr(G, i); ismbr(G, j)}
(
!channel1(G, n, msg(i, j, a)::ssn) >> channel1(G, n, ssn), int(i), int(j), &a? >> a
) : void // end of [channel1_recv]
//
fun
{a:vt0p}
channel1_recv_val
{n:int}{ssn:type}{G:iset}
{i,j:nat | i < n; j < n; ~ismbr(G, i); ismbr(G, j)}
(!channel1(G, n, msg(i, j, a)::ssn) >> channel1(G, n, ssn), int(i), int(j)): (a)
//
(* ****** ****** *)
fun{}
channel1_append
{n:int}
{ssn1,ssn2:type}
{G:iset}
(
chan: !channel1(G, n, append(ssn1, ssn2)) >> channel1(G, n, ssn2)
, fserv: (!channel1(G, n, ssn1) >> channel1(G, n, nil)) -<lincloptr1> void
) : void // end of [channel1_append]
(* ****** ****** *)
//
datatype
choosetag
(
a:type, b:type, c:type
) =
| choosetag_l(a, b, a) of ()
| choosetag_r(a, b, b) of ()
//
(* ****** ****** *)
//
fun{}
channel1_choose_l
{n:int}
{ssn1,ssn2:type}
{G:iset}
{i:nat | i < n; ismbr(G, i)}
(
!channel1(G, n, choose(i,ssn1,ssn2)) >> channel1(G, n, ssn1), i: int(i)
) : void // end of [channel1_choose_l]
//
fun{}
channel1_choose_r
{n:int}
{ssn1,ssn2:type}
{G:iset}
{i:nat | i < n; ismbr(G, i)}
(
!channel1(G, n, choose(i,ssn1,ssn2)) >> channel1(G, n, ssn2), i: int(i)
) : void // end of [channel1_choose_r]
//
fun{}
channel1_choose_tag
{n:int}
{ssn1,ssn2:type}
{G:iset}
{i:nat | i < n; ~isnil(G); ~ismbr(G, i)}
(
!channel1(G, n, choose(i,ssn1,ssn2)) >> channel1(G, n, ssn_chosen), i: int(i)
) : #[ssn_chosen:type] choosetag(ssn1, ssn2, ssn_chosen)
//
(* ****** ****** *)
//
fun{}
channel1_repeat_0
{n:int}
{ssn:type}
{G:iset}
{i:nat | i < n; ismbr(G, i)}
(
!channel1(G, n, repeat(i,ssn)) >> channel1(G, n, nil), i: int(i)
) : void // end of [channel1_repeat_nil]
//
fun{}
channel1_repeat_1
{n:int}
{ssn:type}
{G:iset}
{i:nat | i < n; ismbr(G, i)}
(
!channel1(G, n, repeat(i,ssn)) >> channel1(G, n, append(ssn,repeat(i,ssn))), i: int(i)
) : void // end of [channel1_repeat_more]
//
fun{}
channel1_repeat_tag
{n:int}
{ssn:type}
{G:iset}
{i:nat | i < n; ~isnil(G); ~ismbr(G, i)}
(
!channel1(G, n, repeat(i,ssn)) >> channel1(G, n, ssn_chosen), i: int(i)
) : #[ssn_chosen:type] choosetag(nil, append(ssn,repeat(i,ssn)), ssn_chosen)
//
(* ****** ****** *)
//
(*
//
// HX-2015-03-06:
// This one does not work with sschoose!!!
//
fun{}
channel1_link
{n:int}{ssn:type}
{G1,G2:iset | isnil(G1*G2)}
(channel1(G1, n, ssn), channel1(G2, n, ssn)): channel1(G1+G2, n, ssn)
*)
//
fun{}
channel1_link
{n:int}{ssn:type}
{G1,G2:iset | isful(G1+G2,n)}
(channel1(G1, n, ssn), channel1(G2, n, ssn)): channel1(G1*G2, n, ssn)
//
(* ****** ****** *)
//
fun{}
channel1_link_elim
{n:int}{ssn:type}{G:iset}(channel1(G, n, ssn), cchannel1(G, n, ssn)): void
//
(* ****** ****** *)
//
fun{}
cchannel1_create_exn
{n:nat}{ssn:type}{G:iset}
(
nrole: int(n), G: intset(n), fserv: channel1(G, n, ssn) -<lincloptr1> void
) : cchannel1(G, n, ssn) // end of [cchannel1_create_exn]
//
(* ****** ****** *)
(* end of [basis_ssntype.sats] *)

View File

@@ -1,179 +0,0 @@
(*
* The MIT License (MIT)
*
* Copyright (c) 2014 Hongwei Xi
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.)
*)
// Source: https://github.com/githwxi/ATS-Postiats-contrib/blob/0f26aa0df8542d2ae21df9be1e13208f66f571d6/contrib/libats-/hwxi/teaching/mygrading/HATS/csv_parse.hats
(* ****** ****** *)
//
// Author: Hongwei Xi
// Authoremail: gmhwxiATgmailDOTcom
// Start time: the first of July, 2016
//
(* ****** ****** *)
//
#ifdef
MYGRADING_HATS
#then
#else
//
extern
fun
csv_parse_line
(
line: string
) : List0_vt(Strptr1)
//
#endif // #ifdef
//
(* ****** ****** *)
local
//
staload
UN = "prelude/SATS/unsafe.sats"
//
extern
fun{}
getpos(): int
//
extern
fun{}
is_end(): bool
//
extern
fun{}
char_at(): int
//
extern
fun{}
Strptr1_at(i0: int): Strptr1
//
extern
fun{}
rmove(): void
extern
fun{}
rmove_while(test: char -<cloref1> bool): void
//
in (* in-of-local *)
//
implement
{}(*tmp*)
rmove_while
(test) = let
//
val c0 = char_at()
//
in
//
if c0 >= 0 then
if test(int2char0(c0)) then (rmove(); rmove_while(test)) else ()
// end of [if]
//
end // end of [rmove_while]
(* ****** ****** *)
implement
csv_parse_line
(line) = let
//
val line = g1ofg0(line)
//
var i: int = 0
val p_i = addr@i
//
val n0 = sz2i(length(line))
//
macdef get_i() = $UN.ptr0_get<int>(p_i)
macdef inc_i() = $UN.ptr0_addby<int>(p_i, 1)
macdef set_i(i0) = $UN.ptr0_set<int>(p_i, ,(i0))
//
implement
getpos<>() = get_i()
//
implement
is_end<>() = get_i() >= n0
//
implement
char_at<>() = let
val i = get_i()
val i = ckastloc_gintGte(i, 0)
//
in
if i < n0 then char2u2int0(line[i]) else ~1
end // end of [char_at]
//
implement
Strptr1_at<>(i0) = let
//
val i1 = get_i()
val i0 = ckastloc_gintGte(i0, 0)
val i1 = ckastloc_gintBtwe(i1, i0, n0)
//
in
$UN.castvwtp0(
string_make_substring(line, i2sz(i0), i2sz(i1-i0))
) (* $UN.castvwtp0 *)
end // end of [Strptr1_at]
//
implement
rmove<>() =
if get_i() < n0 then inc_i()
//
vtypedef res_vt = List0_vt(Strptr1)
//
fun
loop
(
i: int, res: res_vt
) : res_vt =
if
is_end()
then res
else let
val () =
(
if i > 0 then rmove()
)
val i0 = getpos()
var f0 =
(
lam@(c: char) =<clo> c != ','
)
val () = rmove_while($UN.cast(addr@f0))
val s0 = Strptr1_at(i0)
in
loop(i+1, list_vt_cons(s0, res))
end // end of [else]
//
in
list_vt_reverse(loop(0(*i*), list_vt_nil((*void*))))
end // end of [csv_parse_line]
end // end of [local]
(* ****** ****** *)
(* end of [csv_parse.hats] *)

View File

@@ -1,694 +0,0 @@
(***********************************************************************)
(* *)
(* ATS/contrib/atshwxi *)
(* *)
(***********************************************************************)
(*
** Copyright (C) 2013 Hongwei Xi, ATS Trustful Software, Inc.
**
** Permission is hereby granted, free of charge, to any person obtaining a
** copy of this software and associated documentation files (the "Software"),
** to deal in the Software without restriction, including without limitation
** the rights to use, copy, modify, merge, publish, distribute, sublicense,
** and/or sell copies of the Software, and to permit persons to whom the
** Software is furnished to do so, subject to the following stated conditions:
**
** The above copyright notice and this permission notice shall be included in
** all copies or substantial portions of the Software.
**
** THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
** OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
** FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
** THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
** LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
** FROM OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
** IN THE SOFTWARE.
*)
// Source: https://github.com/githwxi/ATS-Postiats-contrib/blob/04a984d9c08c1831f7dda8a05ce356db01f81850/contrib/libats-/hwxi/intinf/DATS/intinf_vt.dats
(* ****** ****** *)
//
// Author: Hongwei Xi
// Authoremail: hwxi AT gmail DOT com
// Start Time: April, 2013
//
(* ****** ****** *)
#include
"share/atspre_define.hats"
(* ****** ****** *)
staload
UN = "prelude/SATS/unsafe.sats"
(* ****** ****** *)
staload
GMP = "{$LIBGMP}/SATS/gmp.sats"
(* ****** ****** *)
vtypedef mpz = $GMP.mpz_vt0ype
(* ****** ****** *)
//
staload "./../SATS/intinf.sats"
staload "./../SATS/intinf_vt.sats"
//
(* ****** ****** *)
macdef i2u (x) = g1int2uint_int_uint (,(x))
(* ****** ****** *)
local
assume
intinf_vtype
(i: int) = // HX: [i] is a fake
[l:addr] (mpz @ l, mfree_gc_v (l) | ptr l)
// end of [intinf_vtype]
in (* in of [local] *)
implement{}
intinf_make_int
(i) = (x) where
{
//
val x = ptr_alloc<mpz> ()
val () = $GMP.mpz_init_set_int (!(x.2), i)
//
} (* end of [intinf_make_int] *)
implement{}
intinf_make_uint
(i) = (x) where
{
//
val x = ptr_alloc<mpz> ()
val () = $GMP.mpz_init_set_uint (!(x.2), i)
//
} (* end of [intinf_make_uint] *)
implement{}
intinf_make_lint
(i) = (x) where
{
//
val x = ptr_alloc<mpz> ()
val () = $GMP.mpz_init_set_lint (!(x.2), i)
//
} (* end of [intinf_make_lint] *)
implement{}
intinf_make_ulint
(i) = (x) where
{
//
val x = ptr_alloc<mpz> ()
val () = $GMP.mpz_init_set_ulint (!(x.2), i)
//
} (* end of [intinf_make_ulint] *)
(* ****** ****** *)
implement{}
intinf_free (x) = let
val (pfat, pfgc | p) = x
val () = $GMP.mpz_clear (!p) in ptr_free (pfgc, pfat | p)
end (* end of [intinf_free] *)
(* ****** ****** *)
implement{}
intinf_get_int (x) = $GMP.mpz_get_int (!(x.2))
implement{}
intinf_get_lint (x) = $GMP.mpz_get_lint (!(x.2))
(* ****** ****** *)
implement{}
intinf_get_strptr
(x, base) = $GMP.mpz_get_str_null (base, !(x.2))
// end of [intinf_get_strptr]
(* ****** ****** *)
implement{}
fprint_intinf_base
(out, x, base) = let
val nsz = $GMP.mpz_out_str (out, base, !(x.2))
in
//
if (nsz = 0) then
exit_errmsg (1, "libgmp/gmp: fprint_intinf_base")
// end of [if]
//
end (* fprint_intinf_base *)
(* ****** ****** *)
implement{
} neg_intinf0
(x) = (x) where
{
//
val () = $GMP.mpz_neg (!(x.2))
//
} (* end of [neg_intinf0] *)
implement{
} neg_intinf1
(x) = (y) where
{
//
val y = ptr_alloc<mpz> ()
val () = $GMP.mpz_init (!(y.2))
val () = $GMP.mpz_neg (!(y.2), !(x.2))
//
} (* end of [neg_intinf1] *)
(* ****** ****** *)
implement{
} abs_intinf0
(x) = (x) where
{
//
val () = $GMP.mpz_abs (!(x.2))
//
} (* end of [abs_intinf0] *)
implement{
} abs_intinf1
(x) = (y) where
{
//
val y = ptr_alloc<mpz> ()
val () = $GMP.mpz_init (!(y.2))
val () = $GMP.mpz_abs (!(y.2), !(x.2))
//
} (* end of [abs_intinf1] *)
(* ****** ****** *)
implement{}
succ_intinf0 (x) = add_intinf0_int (x, 1)
implement{}
succ_intinf1 (x) = add_intinf1_int (x, 1)
(* ****** ****** *)
implement{}
pred_intinf0 (x) = sub_intinf0_int (x, 1)
implement{}
pred_intinf1 (x) = sub_intinf1_int (x, 1)
(* ****** ****** *)
implement{}
add_intinf0_int
(x, y) = (x) where
{
//
val () = $GMP.mpz_add2_int (!(x.2), y)
//
} (* end of [add_intinf0_int] *)
implement{}
add_intinf1_int
(x, y) = (z) where
{
//
val z = ptr_alloc<mpz> ()
val () = $GMP.mpz_init (!(z.2))
val () = $GMP.mpz_add3_int (!(z.2), !(x.2), y)
//
} (* end of [add_intinf1_int] *)
(* ****** ****** *)
implement{}
add_int_intinf0 (x, y) = add_intinf0_int (y, x)
implement{}
add_int_intinf1 (x, y) = add_intinf1_int (y, x)
(* ****** ****** *)
implement{}
add_intinf0_intinf1
(x, y) = (x) where
{
//
val () = $GMP.mpz_add2_mpz (!(x.2), !(y.2))
//
} (* end of [add_intinf0_intinf1] *)
implement{}
add_intinf1_intinf0
(x, y) = (y) where
{
//
val () = $GMP.mpz_add2_mpz (!(y.2), !(x.2))
//
} (* end of [add_intinf1_intinf0] *)
(* ****** ****** *)
implement{}
add_intinf1_intinf1
(x, y) = (z) where
{
//
val z = ptr_alloc<mpz> ()
val () = $GMP.mpz_init (!(z.2))
val () = $GMP.mpz_add3_mpz (!(z.2), !(x.2), !(y.2))
//
} (* end of [add_intinf1_intinf1] *)
(* ****** ****** *)
implement{}
sub_intinf0_int
(x, y) = (x) where
{
//
val () = $GMP.mpz_sub2_int (!(x.2), y)
//
} (* end of [sub_intinf0_int] *)
implement{}
sub_intinf1_int
(x, y) = (z) where
{
//
val z = ptr_alloc<mpz> ()
val () = $GMP.mpz_init (!(z.2))
val () = $GMP.mpz_sub3_int (!(z.2), !(x.2), y)
//
} (* end of [sub_intinf1_int] *)
(* ****** ****** *)
implement{}
sub_int_intinf0 (x, y) = let
val z = sub_intinf0_int (y, x) in neg_intinf0 (z)
end (* end of [sub_int_intinf0] *)
implement{}
sub_int_intinf1 (x, y) = let
val z = sub_intinf1_int (y, x) in neg_intinf0 (z)
end (* end of [sub_int_intinf1] *)
(* ****** ****** *)
implement{}
sub_intinf0_intinf1
(x, y) = (x) where
{
//
val () = $GMP.mpz_sub2_mpz (!(x.2), !(y.2))
//
} (* end of [sub_intinf0_intinf1] *)
implement{}
sub_intinf1_intinf0
(x, y) = neg_intinf0 (sub_intinf0_intinf1 (y, x))
// end of [sub_intinf1_intinf0]
implement{}
sub_intinf1_intinf1
(x, y) = (z) where
{
//
val z = ptr_alloc<mpz> ()
val () = $GMP.mpz_init (!(z.2))
val () = $GMP.mpz_sub3_mpz (!(z.2), !(x.2), !(y.2))
//
} (* end of [sub_intinf1_intinf1] *)
(* ****** ****** *)
implement{}
mul_intinf0_int
(x, y) = (x) where
{
//
val () = $GMP.mpz_mul2_int (!(x.2), y)
//
} (* end of [mul_intinf0_int] *)
implement{}
mul_intinf1_int
(x, y) = (z) where
{
//
val z = ptr_alloc<mpz> ()
val () = $GMP.mpz_init (!(z.2))
val () = $GMP.mpz_mul3_int (!(z.2), !(x.2), y)
//
} (* end of [mul_intinf1_int] *)
(* ****** ****** *)
implement{}
mul_int_intinf0 (x, y) = mul_intinf0_int (y, x)
implement{}
mul_int_intinf1 (x, y) = mul_intinf1_int (y, x)
(* ****** ****** *)
implement{}
mul_intinf0_intinf1
(x, y) = (x) where
{
//
val () = $GMP.mpz_mul2_mpz (!(x.2), !(y.2))
//
} (* end of [mul_intinf0_intinf1] *)
implement{}
mul_intinf1_intinf0
(x, y) = (y) where
{
//
val () = $GMP.mpz_mul2_mpz (!(y.2), !(x.2))
//
} (* end of [mul_intinf1_intinf0] *)
(* ****** ****** *)
implement{}
mul_intinf1_intinf1
(x, y) = (z) where
{
//
val z = ptr_alloc<mpz> ()
val () = $GMP.mpz_init (!(z.2))
val () = $GMP.mpz_mul3_mpz (!(z.2), !(x.2), !(y.2))
//
} (* end of [mul_intinf1_intinf1] *)
(* ****** ****** *)
implement{}
div_intinf0_int
{i,j} (x, y) = let
in
//
if y >= 0 then let
val () = $GMP.mpz_tdiv2_q_uint (!(x.2), i2u(y)) in x
end else let
val () = $GMP.mpz_tdiv2_q_uint (!(x.2), i2u(~y)) in neg_intinf0 (x)
end // end of [if]
//
end (* end of [div_intinf0_int] *)
implement{}
div_intinf1_int
{i,j} (x, y) = let
//
val z = ptr_alloc<mpz> ()
val () = $GMP.mpz_init (!(z.2))
//
in
//
if y >= 0 then let
val () = $GMP.mpz_tdiv3_q_uint (!(z.2), !(x.2), i2u(y)) in z
end else let
val () = $GMP.mpz_tdiv3_q_uint (!(z.2), !(x.2), i2u(~y)) in neg_intinf0 (z)
end // end of [if]
//
end (* end of [div_intinf1_int] *)
(* ****** ****** *)
implement{}
div_intinf0_intinf1
(x, y) = (x) where
{
//
val () = $GMP.mpz_tdiv2_q_mpz (!(x.2), !(y.2))
//
} (* end of [div_intinf0_intinf1] *)
(* ****** ****** *)
implement{}
div_intinf1_intinf1
(x, y) = (z) where
{
//
val z = ptr_alloc<mpz> ()
val () = $GMP.mpz_init (!(z.2))
val () = $GMP.mpz_tdiv3_q_mpz (!(z.2), !(x.2), !(y.2))
//
} (* end of [div_intinf1_intinf1] *)
(* ****** ****** *)
implement{}
ndiv_intinf0_int (x, y) = div_intinf0_int (x, y)
implement{}
ndiv_intinf1_int (x, y) = div_intinf1_int (x, y)
(* ****** ****** *)
implement{}
nmod_intinf0_int
{i,j} (x, y) = let
//
val r =
$GMP.mpz_fdiv_uint (!(x.2), i2u(y))
val () = intinf_free (x)
//
in
$UN.cast{intBtw(0,j)}(r)
end (* end of [nmod_intinf0_int] *)
implement{}
nmod_intinf1_int
{i,j} (x, y) = let
//
val r = $GMP.mpz_fdiv_uint (!(x.2), i2u(y))
//
in
$UN.cast{intBtw(0,j)}(r)
end (* end of [nmod_intinf1_int] *)
(* ****** ****** *)
//
// comparison-functions
//
(* ****** ****** *)
implement{}
lt_intinf_int
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_int (!(x.2), y)
val ans = (if sgn < 0 then true else false): bool
//
in
$UN.cast{bool(i < j)}(ans)
end // end of [lt_intinf_int]
implement{}
lt_intinf_intinf
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_mpz (!(x.2), !(y.2))
val ans = (if sgn < 0 then true else false): bool
//
in
$UN.cast{bool(i < j)}(ans)
end // end of [lt_intinf_intinf]
(* ****** ****** *)
implement{}
lte_intinf_int
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_int (!(x.2), y)
val ans = (if sgn <= 0 then true else false): bool
//
in
$UN.cast{bool(i <= j)}(ans)
end // end of [lte_intinf_int]
implement{}
lte_intinf_intinf
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_mpz (!(x.2), !(y.2))
val ans = (if sgn <= 0 then true else false): bool
//
in
$UN.cast{bool(i <= j)}(ans)
end // end of [lte_intinf_intinf]
(* ****** ****** *)
implement{}
gt_intinf_int
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_int (!(x.2), y)
val ans = (if sgn > 0 then true else false): bool
//
in
$UN.cast{bool(i > j)}(ans)
end // end of [gt_intinf_int]
implement{}
gt_intinf_intinf
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_mpz (!(x.2), !(y.2))
val ans = (if sgn > 0 then true else false): bool
//
in
$UN.cast{bool(i > j)}(ans)
end // end of [gt_intinf_intinf]
(* ****** ****** *)
implement{}
gte_intinf_int
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_int (!(x.2), y)
val ans = (if sgn >= 0 then true else false): bool
//
in
$UN.cast{bool(i >= j)}(ans)
end // end of [gte_intinf_int]
implement{}
gte_intinf_intinf
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_mpz (!(x.2), !(y.2))
val ans = (if sgn >= 0 then true else false): bool
//
in
$UN.cast{bool(i >= j)}(ans)
end // end of [gte_intinf_intinf]
(* ****** ****** *)
implement{}
eq_intinf_int
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_int (!(x.2), y)
val ans = (if sgn = 0 then true else false): bool
//
in
$UN.cast{bool(i == j)}(ans)
end // end of [eq_intinf_int]
implement{}
eq_intinf_intinf
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_mpz (!(x.2), !(y.2))
val ans = (if sgn = 0 then true else false): bool
//
in
$UN.cast{bool(i == j)}(ans)
end // end of [eq_intinf_intinf]
(* ****** ****** *)
implement{}
neq_intinf_int
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_int (!(x.2), y)
val ans = (if sgn != 0 then true else false): bool
//
in
$UN.cast{bool(i != j)}(ans)
end // end of [neq_intinf_int]
implement{}
neq_intinf_intinf
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_mpz (!(x.2), !(y.2))
val ans = (if sgn != 0 then true else false): bool
//
in
$UN.cast{bool(i != j)}(ans)
end // end of [neq_intinf_intinf]
(* ****** ****** *)
implement{}
compare_intinf_int
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_int (!(x.2), y)
val sgn = (if sgn < 0 then ~1 else (if sgn > 0 then 1 else 0)): int
//
in
$UN.cast{int(sgn(i-j))}(sgn)
end // end of [compare_intinf_int]
implement{}
compare_int_intinf
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_int (!(y.2), x)
val sgn = (if sgn > 0 then ~1 else (if sgn < 0 then 1 else 0)): int
//
in
$UN.cast{int(sgn(i-j))}(sgn)
end // end of [compare_int_intinf]
implement{}
compare_intinf_intinf
{i,j} (x, y) = let
//
val sgn = $GMP.mpz_cmp_mpz (!(x.2), !(y.2))
val sgn = (if sgn < 0 then ~1 else (if sgn > 0 then 1 else 0)): int
//
in
$UN.cast{int(sgn(i-j))}(sgn)
end // end of [compare_intinf_intinf]
(* ****** ****** *)
implement{}
pow_intinf_int
(base, exp) = r where
{
//
val r = ptr_alloc<mpz> ()
val () = $GMP.mpz_init (!(r.2))
val () = $GMP.mpz_pow_uint (!(r.2), !(base.2), i2u(exp))
//
} (* end of [pow_intinf_int] *)
(* ****** ****** *)
end // end of [local]
(* ****** ****** *)
implement{}
print_intinf (x) = fprint_intinf (stdout_ref, x)
implement{}
prerr_intinf (x) = fprint_intinf (stderr_ref, x)
implement{}
fprint_intinf (out, x) = fprint_intinf_base (out, x, 10(*base*))
(* ****** ****** *)
(* end of [intinf_vt.dats] *)

187
samples/ATS/linset.hats Normal file
View File

@@ -0,0 +1,187 @@
(***********************************************************************)
(* *)
(* Applied Type System *)
(* *)
(***********************************************************************)
(*
** ATS/Postiats - Unleashing the Potential of Types!
** Copyright (C) 2011-2013 Hongwei Xi, ATS Trustful Software, Inc.
** All rights reserved
**
** ATS is free software; you can redistribute it and/or modify it under
** the terms of the GNU GENERAL PUBLIC LICENSE (GPL) as published by the
** Free Software Foundation; either version 3, or (at your option) any
** later version.
**
** ATS is distributed in the hope that it will be useful, but WITHOUT ANY
** WARRANTY; without even the implied warranty of MERCHANTABILITY or
** FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
** for more details.
**
** You should have received a copy of the GNU General Public License
** along with ATS; see the file COPYING. If not, please write to the
** Free Software Foundation, 51 Franklin Street, Fifth Floor, Boston, MA
** 02110-1301, USA.
*)
(* ****** ****** *)
(* Author: Hongwei Xi *)
(* Authoremail: hwxi AT cs DOT bu DOT edu *)
(* Start time: December, 2012 *)
(* ****** ****** *)
//
// HX: shared by linset_listord (* ordered list *)
// HX: shared by linset_avltree (* AVL-tree-based *)
//
(* ****** ****** *)
//
// HX-2013-02:
// for sets of nonlinear elements
//
absvtype set_vtype (a:t@ype+) = ptr
//
(* ****** ****** *)
vtypedef set (a:t0p) = set_vtype (a)
(* ****** ****** *)
fun{a:t0p}
compare_elt_elt (x1: a, x2: a):<> int
(* ****** ****** *)
fun{} linset_nil{a:t0p} ():<> set(a)
fun{} linset_make_nil{a:t0p} ():<> set(a)
(* ****** ****** *)
fun{a:t0p} linset_sing (x: a):<!wrt> set(a)
fun{a:t0p} linset_make_sing (x: a):<!wrt> set(a)
(* ****** ****** *)
fun{a:t0p}
linset_make_list (xs: List(INV(a))):<!wrt> set(a)
(* ****** ****** *)
fun{}
linset_is_nil {a:t0p} (xs: !set(INV(a))):<> bool
fun{}
linset_isnot_nil {a:t0p} (xs: !set(INV(a))):<> bool
(* ****** ****** *)
fun{a:t0p} linset_size (!set(INV(a))): size_t
(* ****** ****** *)
fun{a:t0p}
linset_is_member (xs: !set(INV(a)), x0: a):<> bool
fun{a:t0p}
linset_isnot_member (xs: !set(INV(a)), x0: a):<> bool
(* ****** ****** *)
fun{a:t0p}
linset_copy (!set(INV(a))):<!wrt> set(a)
fun{a:t0p}
linset_free (xs: set(INV(a))):<!wrt> void
(* ****** ****** *)
//
fun{a:t0p}
linset_insert
(xs: &set(INV(a)) >> _, x0: a):<!wrt> bool
//
(* ****** ****** *)
//
fun{a:t0p}
linset_takeout
(
&set(INV(a)) >> _, a, res: &(a?) >> opt(a, b)
) :<!wrt> #[b:bool] bool(b) // endfun
fun{a:t0p}
linset_takeout_opt (&set(INV(a)) >> _, a):<!wrt> Option_vt(a)
//
(* ****** ****** *)
//
fun{a:t0p}
linset_remove
(xs: &set(INV(a)) >> _, x0: a):<!wrt> bool
//
(* ****** ****** *)
//
// HX: choosing an element in an unspecified manner
//
fun{a:t0p}
linset_choose
(
xs: !set(INV(a)), x: &a? >> opt (a, b)
) :<!wrt> #[b:bool] bool(b)
//
fun{a:t0p}
linset_choose_opt (xs: !set(INV(a))):<!wrt> Option_vt(a)
//
(* ****** ****** *)
fun{a:t0p}
linset_takeoutmax
(
xs: &set(INV(a)) >> _, res: &a? >> opt(a, b)
) :<!wrt> #[b:bool] bool (b)
fun{a:t0p}
linset_takeoutmax_opt (xs: &set(INV(a)) >> _):<!wrt> Option_vt(a)
(* ****** ****** *)
fun{a:t0p}
linset_takeoutmin
(
xs: &set(INV(a)) >> _, res: &a? >> opt(a, b)
) :<!wrt> #[b:bool] bool (b)
fun{a:t0p}
linset_takeoutmin_opt (xs: &set(INV(a)) >> _):<!wrt> Option_vt(a)
(* ****** ****** *)
//
fun{}
fprint_linset$sep (FILEref): void // ", "
//
fun{a:t0p}
fprint_linset (out: FILEref, xs: !set(INV(a))): void
//
overload fprint with fprint_linset
//
(* ****** ****** *)
//
fun{
a:t0p}{env:vt0p
} linset_foreach$fwork
(x: a, env: &(env) >> _): void
//
fun{a:t0p}
linset_foreach (set: !set(INV(a))): void
fun{
a:t0p}{env:vt0p
} linset_foreach_env
(set: !set(INV(a)), env: &(env) >> _): void
// end of [linset_foreach_env]
//
(* ****** ****** *)
fun{a:t0p}
linset_listize (xs: set(INV(a))): List0_vt (a)
(* ****** ****** *)
fun{a:t0p}
linset_listize1 (xs: !set(INV(a))): List0_vt (a)
(* ****** ****** *)
(* end of [linset.hats] *)

View File

@@ -0,0 +1,504 @@
(***********************************************************************)
(* *)
(* Applied Type System *)
(* *)
(***********************************************************************)
(*
** ATS/Postiats - Unleashing the Potential of Types!
** Copyright (C) 2011-2013 Hongwei Xi, ATS Trustful Software, Inc.
** All rights reserved
**
** ATS is free software; you can redistribute it and/or modify it under
** the terms of the GNU GENERAL PUBLIC LICENSE (GPL) as published by the
** Free Software Foundation; either version 3, or (at your option) any
** later version.
**
** ATS is distributed in the hope that it will be useful, but WITHOUT ANY
** WARRANTY; without even the implied warranty of MERCHANTABILITY or
** FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
** for more details.
**
** You should have received a copy of the GNU General Public License
** along with ATS; see the file COPYING. If not, please write to the
** Free Software Foundation, 51 Franklin Street, Fifth Floor, Boston, MA
** 02110-1301, USA.
*)
(* ****** ****** *)
(* Author: Hongwei Xi *)
(* Authoremail: hwxi AT cs DOT bu DOT edu *)
(* Start time: February, 2013 *)
(* ****** ****** *)
//
// HX-2013-08:
// a set is represented as a sorted list in descending order;
// note that descending order is chosen to facilitate set comparison
//
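(*
** Added illustration (an assumption for the reader, not part of the
** original header comment): with descending order, the set {1,3,5} is
** stored as the list 5 :: 3 :: 1 :: nil. Because both operands are kept
** in the same descending order, comparing two sets reduces to a single
** head-to-head walk over the two lists, with the outcome decided at the
** first position where the heads differ.
*)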
(* ****** ****** *)
staload
UN = "prelude/SATS/unsafe.sats"
(* ****** ****** *)
staload "libats/SATS/linset_listord.sats"
(* ****** ****** *)
#include "./SHARE/linset.hats" // code reuse
#include "./SHARE/linset_node.hats" // code reuse
(* ****** ****** *)
assume
set_vtype (elt:t@ype) = List0_vt (elt)
(* ****** ****** *)
implement{}
linset_nil () = list_vt_nil ()
implement{}
linset_make_nil () = list_vt_nil ()
(* ****** ****** *)
implement
{a}(*tmp*)
linset_sing
(x) = list_vt_cons{a}(x, list_vt_nil)
// end of [linset_sing]
implement{a}
linset_make_sing
(x) = list_vt_cons{a}(x, list_vt_nil)
// end of [linset_make_sing]
(* ****** ****** *)
implement{}
linset_is_nil (xs) = list_vt_is_nil (xs)
implement{}
linset_isnot_nil (xs) = list_vt_is_cons (xs)
(* ****** ****** *)
implement{a}
linset_size (xs) =
let val n = list_vt_length(xs) in i2sz(n) end
// end of [linset_size]
(* ****** ****** *)
implement{a}
linset_is_member
(xs, x0) = let
//
fun aux
{n:nat} .<n>.
(
xs: !list_vt (a, n)
) :<> bool = let
in
//
case+ xs of
| list_vt_cons (x, xs) => let
val sgn = compare_elt_elt<a> (x0, x) in
if sgn > 0 then false else (if sgn < 0 then aux (xs) else true)
end // end of [list_vt_cons]
| list_vt_nil ((*void*)) => false
//
end // end of [aux]
//
in
aux (xs)
end // end of [linset_is_member]
(* ****** ****** *)
implement{a}
linset_copy (xs) = list_vt_copy<a> (xs)
implement{a}
linset_free (xs) = list_vt_free<a> (xs)
(* ****** ****** *)
implement{a}
linset_insert
(xs, x0) = let
//
fun
mynode_cons
{n:nat} .<>.
(
nx: mynode1 (a), xs: list_vt (a, n)
) : list_vt (a, n+1) = let
//
val xs1 =
$UN.castvwtp0{List1_vt(a)}(nx)
val+@list_vt_cons (_, xs2) = xs1
prval () = $UN.cast2void (xs2); val () = (xs2 := xs)
//
in
fold@ (xs1); xs1
end // end of [mynode_cons]
//
fun ins
{n:nat} .<n>. // tail-recursive
(
xs: &list_vt (a, n) >> list_vt (a, n1)
) : #[n1:nat | n <= n1; n1 <= n+1] bool =
(
case+ xs of
| @list_vt_cons
(x, xs1) => let
val sgn =
compare_elt_elt<a> (x0, x)
// end of [val]
in
if sgn > 0 then let
prval () = fold@ (xs)
val nx = mynode_make_elt<a> (x0)
val ((*void*)) = xs := mynode_cons (nx, xs)
in
false
end else if sgn < 0 then let
val ans = ins (xs1)
prval () = fold@ (xs)
in
ans
end else let // [x0] is found
prval () = fold@ (xs)
in
true (* [x0] in [xs] *)
end (* end of [if] *)
end // end of [list_vt_cons]
| list_vt_nil () => let
val nx = mynode_make_elt<a> (x0)
val ((*void*)) = xs := mynode_cons (nx, xs)
in
false
end // end of [list_vt_nil]
) (* end of [ins] *)
//
in
$effmask_all (ins (xs))
end // end of [linset_insert]
(* ****** ****** *)
(*
//
HX-2013-08:
[linset_remove] moved up
//
implement{a}
linset_remove
(xs, x0) = let
//
fun rem
{n:nat} .<n>. // tail-recursive
(
xs: &list_vt (a, n) >> list_vt (a, n1)
) : #[n1:nat | n1 <= n; n <= n1+1] bool =
(
case+ xs of
| @list_vt_cons
(x, xs1) => let
val sgn =
compare_elt_elt<a> (x0, x)
// end of [val]
in
if sgn > 0 then let
prval () = fold@ (xs)
in
false
end else if sgn < 0 then let
val ans = rem (xs1)
prval () = fold@ (xs)
in
ans
end else let // x0 = x
val xs1_ = xs1
val ((*void*)) = free@{a}{0}(xs)
val () = xs := xs1_
in
true // [x0] in [xs]
end (* end of [if] *)
end // end of [list_vt_cons]
| list_vt_nil () => false
) (* end of [rem] *)
//
in
$effmask_all (rem (xs))
end // end of [linset_remove]
*)
(* ****** ****** *)
(*
** By Brandon Barker
*)
implement
{a}(*tmp*)
linset_choose
(xs, x0) = let
in
//
case+ xs of
| list_vt_cons
(x, xs1) => let
val () = x0 := x
prval () = opt_some{a}(x0)
in
true
end // end of [list_vt_cons]
| list_vt_nil () => let
prval () = opt_none{a}(x0)
in
false
end // end of [list_vt_nil]
//
end // end of [linset_choose]
(* ****** ****** *)
implement
{a}{env}
linset_foreach_env (xs, env) = let
//
implement
list_vt_foreach$fwork<a><env>
(x, env) = linset_foreach$fwork<a><env> (x, env)
//
in
list_vt_foreach_env<a><env> (xs, env)
end // end of [linset_foreach_env]
(* ****** ****** *)
implement{a}
linset_listize (xs) = xs
(* ****** ****** *)
implement{a}
linset_listize1 (xs) = list_vt_copy (xs)
(* ****** ****** *)
//
// HX: functions for processing mynodes
//
(* ****** ****** *)
implement{
} mynode_null{a} () =
$UN.castvwtp0{mynode(a,null)}(the_null_ptr)
// end of [mynode_null]
(* ****** ****** *)
implement
{a}(*tmp*)
mynode_make_elt
(x) = let
//
val nx = list_vt_cons{a}{0}(x, _ )
//
in
$UN.castvwtp0{mynode1(a)}(nx)
end // end of [mynode_make_elt]
(* ****** ****** *)
implement{
} mynode_free
{a}(nx) = () where {
val nx =
$UN.castvwtp0{List1_vt(a)}(nx)
//
val+~list_vt_cons (_, nx2) = nx
//
prval ((*void*)) = $UN.cast2void (nx2)
//
} (* end of [mynode_free] *)
(* ****** ****** *)
implement
{a}(*tmp*)
mynode_get_elt
(nx) = (x) where {
//
val nx1 =
$UN.castvwtp1{List1_vt(a)}(nx)
//
val+list_vt_cons (x, _) = nx1
//
prval ((*void*)) = $UN.cast2void (nx1)
//
} (* end of [mynode_get_elt] *)
(* ****** ****** *)
implement
{a}(*tmp*)
mynode_set_elt
{l} (nx, x0) =
{
//
val nx1 =
$UN.castvwtp1{List1_vt(a)}(nx)
//
val+@list_vt_cons (x, _) = nx1
//
val () = x := x0
//
prval () = fold@ (nx1)
prval () = $UN.cast2void (nx1)
//
prval () = __assert (nx) where
{
extern praxi __assert (nx: !mynode(a?, l) >> mynode (a, l)): void
} (* end of [prval] *)
//
} (* end of [mynode_set_elt] *)
(* ****** ****** *)
implement
{a}(*tmp*)
mynode_getfree_elt
(nx) = (x) where {
//
val nx =
$UN.castvwtp0{List1_vt(a)}(nx)
//
val+~list_vt_cons (x, nx2) = nx
//
prval ((*void*)) = $UN.cast2void (nx2)
//
} (* end of [mynode_getfree_elt] *)
(* ****** ****** *)
(*
fun{a:t0p}
linset_takeout_ngc
(set: &set(INV(a)) >> _, x0: a):<!wrt> mynode0 (a)
// end of [linset_takeout_ngc]
*)
implement
{a}(*tmp*)
linset_takeout_ngc
(set, x0) = let
//
fun takeout
(
xs: &List0_vt (a) >> _
) : mynode0(a) = let
in
//
case+ xs of
| @list_vt_cons
(x, xs1) => let
prval pf_x = view@x
prval pf_xs1 = view@xs1
val sgn =
compare_elt_elt<a> (x0, x)
// end of [val]
in
if sgn > 0 then let
prval () = fold@ (xs)
in
mynode_null{a}((*void*))
end else if sgn < 0 then let
val res = takeout (xs1)
prval ((*void*)) = fold@ (xs)
in
res
end else let // x0 = x
val xs1_ = xs1
val res = $UN.castvwtp0{mynode1(a)}((pf_x, pf_xs1 | xs))
val () = xs := xs1_
in
res // [x0] in [xs]
end (* end of [if] *)
end // end of [list_vt_cons]
| list_vt_nil () => mynode_null{a}((*void*))
//
end (* end of [takeout] *)
//
in
$effmask_all (takeout (set))
end // end of [linset_takeout_ngc]
(* ****** ****** *)
implement
{a}(*tmp*)
linset_takeoutmax_ngc
(xs) = let
in
//
case+ xs of
| @list_vt_cons
(x, xs1) => let
prval pf_x = view@x
prval pf_xs1 = view@xs1
val xs_ = xs
val () = xs := xs1
in
$UN.castvwtp0{mynode1(a)}((pf_x, pf_xs1 | xs_))
end // end of [list_vt_cons]
| @list_vt_nil () => let
prval () = fold@ (xs)
in
mynode_null{a}((*void*))
end // end of [list_vt_nil]
//
end // end of [linset_takeoutmax_ngc]
(* ****** ****** *)
implement
{a}(*tmp*)
linset_takeoutmin_ngc
(xs) = let
//
fun unsnoc
{n:pos} .<n>.
(
xs: &list_vt (a, n) >> list_vt (a, n-1)
) :<!wrt> mynode1 (a) = let
//
val+@list_vt_cons (x, xs1) = xs
//
prval pf_x = view@x and pf_xs1 = view@xs1
//
in
//
case+ xs1 of
| list_vt_cons _ =>
let val res = unsnoc(xs1) in fold@xs; res end
// end of [list_vt_cons]
| list_vt_nil () => let
val xs_ = xs
val () = xs := list_vt_nil{a}()
in
$UN.castvwtp0{mynode1(a)}((pf_x, pf_xs1 | xs_))
end // end of [list_vt_nil]
//
end // end of [unsnoc]
//
in
//
case+ xs of
| list_vt_cons _ => unsnoc (xs)
| list_vt_nil () => mynode_null{a}((*void*))
//
end // end of [linset_takeoutmin_ngc]
(* ****** ****** *)
(* end of [linset_listord.dats] *)

View File

@@ -0,0 +1,51 @@
(***********************************************************************)
(* *)
(* Applied Type System *)
(* *)
(***********************************************************************)
(*
** ATS/Postiats - Unleashing the Potential of Types!
** Copyright (C) 2011-2013 Hongwei Xi, ATS Trustful Software, Inc.
** All rights reserved
**
** ATS is free software; you can redistribute it and/or modify it under
** the terms of the GNU GENERAL PUBLIC LICENSE (GPL) as published by the
** Free Software Foundation; either version 3, or (at your option) any
** later version.
**
** ATS is distributed in the hope that it will be useful, but WITHOUT ANY
** WARRANTY; without even the implied warranty of MERCHANTABILITY or
** FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
** for more details.
**
** You should have received a copy of the GNU General Public License
** along with ATS; see the file COPYING. If not, please write to the
** Free Software Foundation, 51 Franklin Street, Fifth Floor, Boston, MA
** 02110-1301, USA.
*)
(* ****** ****** *)
//
// Author: Hongwei Xi
// Authoremail: hwxiATcsDOTbuDOTedu
// Time: October, 2010
//
(* ****** ****** *)
#define ATS_PACKNAME "ATSLIB.libats.linset_listord"
#define ATS_STALOADFLAG 0 // no static loading at run-time
(* ****** ****** *)
#include "./SHARE/linset.hats"
#include "./SHARE/linset_node.hats"
(* ****** ****** *)
castfn
linset2list {a:t0p} (xs: set (INV(a))):<> List0_vt (a)
(* ****** ****** *)
(* end of [linset_listord.sats] *)

215
samples/ATS/main.atxt Normal file
View File

@@ -0,0 +1,215 @@
%{
#include "./../ATEXT/atextfun.hats"
%}
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8" />
<title>EFFECTIVATS-DiningPhil2</title>
#patscode_style()
</head>
<body>
<h1>
Effective ATS: Dining Philosophers
</h1>
In this article, I present an implementation of a slight variant of the
famous problem of 5-Dining-Philosophers by Dijkstra that makes simple but
convincing use of linear types.
<h2>
The Original Problem
</h2>
There are five philosophers sitting around a table and there are also 5
forks placed on the table such that each fork is located between the left
hand of a philosopher and the right hand of another philosopher. Each
philosopher does the following routine repeatedly: thinking and dining. In
order to dine, a philosopher needs to first acquire two forks: one located
on his left-hand side and the other on his right-hand side. After
finishing dining, a philosopher puts the two acquired forks onto the table:
one on his left-hand side and the other on his right-hand side.
<h2>
A Variant of the Original Problem
</h2>
The following twist is added to the original version:
<p>
After a fork is used, it becomes a "dirty" fork and needs to be put in a
tray for dirty forks. There is a cleaner who cleans dirty forks and then
puts them back on the table.
<h2>
Channels for Communication
</h2>
A channel is just a shared queue of fixed capacity. The following two
functions are for inserting an element into and taking an element out of a
given channel:
<pre
class="patsyntax">
#pats2xhtml_sats("\
fun{a:vt0p} channel_insert (channel (a), a): void
fun{a:vt0p} channel_takeout (chan: channel (a)): (a)
")</pre>
If [channel_insert] is called on a channel that is full, then the caller is
blocked until an element is taken out of the channel. If [channel_takeout]
is called on a channel that is empty, then the caller is blocked until an
element is inserted into the channel.
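As a point of reference only, this blocking behaviour is the same as that of a bounded queue in most languages. The following minimal sketch in Python (an illustration assumed here, not part of the ATS library) shows a consumer blocking on an empty channel of capacity 2 until an element is inserted:
<pre>
# channel_sketch.py -- illustrative only
import queue, threading, time

ch = queue.Queue(maxsize=2)          # a channel of capacity 2, like a fork channel

def consumer():
    item = ch.get()                  # takeout: blocks because the channel starts empty
    print("took out:", item)

threading.Thread(target=consumer).start()
time.sleep(1)
ch.put("fork")                       # insert: wakes the blocked consumer; would itself
                                     # block only if two elements were already inside
</pre>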
<h2>
A Channel for Each Fork
</h2>
Forks are resources given a linear type. Each fork is initially stored in a
channel, which can be obtained by calling the following function:
<pre
class="patsyntax">
#pats2xhtml_sats("\
fun fork_changet (n: nphil): channel(fork)
")</pre>
where the type [nphil] is defined to be [natLt(5)] (for natural numbers
less than 5). The channels for storing forks are chosen to be of capacity
2. The reason that channels of capacity 2 are used to store at most one
element each is to guarantee that these channels can never become full
(so no attempt is ever made to send signals to wake up callers supposedly
blocked on a full channel).
<h2>
A Channel for the Fork Tray
</h2>
A tray for storing "dirty" forks is also a channel, which can be obtained
by calling the following function:
<pre
class="patsyntax">
#pats2xhtml_sats("\
fun forktray_changet ((*void*)): channel(fork)
")</pre>
The capacity chosen for the channel is 6 (instead of 5) so that it can
never become full (as there are only 5 forks in total).
<h2>
Philosopher Loop
</h2>
Each philosopher is implemented as a loop:
<pre
class="patsyntax">
#pats2xhtml_dats('\
implement
phil_loop (n) = let
//
val () = phil_think (n)
//
val nl = phil_left (n) // = n
val nr = phil_right (n) // = (n+1) % 5
//
val ch_lfork = fork_changet (nl)
val ch_rfork = fork_changet (nr)
//
val lf = channel_takeout (ch_lfork)
val () = println! ("phil_loop(", n, ") picks left fork")
//
val () = randsleep (2) // sleep up to 2 seconds
//
val rf = channel_takeout (ch_rfork)
val () = println! ("phil_loop(", n, ") picks right fork")
//
val () = phil_dine (n, lf, rf)
//
val ch_forktray = forktray_changet ()
val () = channel_insert (ch_forktray, lf) // left fork to dirty tray
val () = channel_insert (ch_forktray, rf) // right fork to dirty tray
//
in
phil_loop (n)
end // end of [phil_loop]
')</pre>
It should be straightforward to follow the code for [phil_loop].
<h2>
Fork Cleaner Loop
</h2>
A cleaner is implemented as a loop:
<pre
class="patsyntax">
#pats2xhtml_dats('\
implement
cleaner_loop () = let
//
val ch = forktray_changet ()
val f0 = channel_takeout (ch) // [f0] is dirty
//
val () = cleaner_wash (f0) // washes dirty [f0]
val () = cleaner_return (f0) // puts back cleaned [f0]
//
in
cleaner_loop ()
end // end of [cleaner_loop]
')</pre>
The function [cleaner_return] first finds out the number of a given fork
and then uses the number to locate the channel for storing the fork. Its
actual implementation is given as follows:
<pre
class="patsyntax">
#pats2xhtml_dats('\
implement
cleaner_return (f) =
{
val n = fork_get_num (f)
val ch = fork_changet (n)
val () = channel_insert (ch, f)
}
')</pre>
It should now be straightforward to follow the code for [cleaner_loop].
<h2>
Testing
</h2>
The entire code of this implementation is stored in the following files:
<pre>
DiningPhil2.sats
DiningPhil2.dats
DiningPhil2_fork.dats
DiningPhil2_thread.dats
</pre>
There is also a Makefile available for compiling the ATS source code into
an executable for testing. One should be able to encounter a deadlock after
running the simulation for a while: every philosopher picks up the left fork
first and then sleeps before reaching for the right one, so all five can end
up each holding exactly one fork, at which point no right fork ever becomes
available.
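For readers who want to see this failure mode in isolation, here is a minimal sketch (not part of the accompanying ATS sources) that reproduces the same circular wait with plain OS threads and locks; the file name, the 2-second sleep, and the 30-second run time are assumptions chosen to mirror the loop above.
<pre>
# deadlock_sketch.py -- illustrative only, not part of the ATS sources
import threading, time, random

NPHIL = 5
forks = [threading.Lock() for _ in range(NPHIL)]   # one lock per fork

def phil_loop(n):
    left, right = n, (n + 1) % NPHIL
    while True:
        forks[left].acquire()             # pick up left fork
        time.sleep(random.uniform(0, 2))  # sleep while holding it
        forks[right].acquire()            # pick up right fork (may block forever)
        print("phil", n, "dines")
        forks[right].release()
        forks[left].release()

for n in range(NPHIL):
    threading.Thread(target=phil_loop, args=(n,), daemon=True).start()
time.sleep(30)  # once every philosopher holds a left fork, no further output appears
</pre>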
<hr size="2">
This article is written by <a href="http://www.cs.bu.edu/~hwxi/">Hongwei Xi</a>.
</body>
</html>
%{
implement main () = fprint_filsub (stdout_ref, "main_atxt.txt")
%}

View File

@@ -1,35 +0,0 @@
// A sample for Actionscript.
package foobar
{
import flash.display.MovieClip;
class Bar
{
public function getNumber():Number
{
return 10;
}
}
class Foo extends Bar
{
private var ourNumber:Number = 25;
override public function getNumber():Number
{
return ourNumber;
}
}
class Main extends MovieClip
{
public function Main()
{
var x:Bar = new Bar();
var y:Foo = new Foo();
trace(x.getNumber());
trace(y.getNumber());
}
}
}

View File

@@ -1,13 +0,0 @@
package mypackage
{
public class Hello
{
/* Let's say hello!
* This is just a test script for Linguist's Actionscript detection.
*/
public function sayHello():void
{
trace("Hello, world");
}
}
}

View File

@@ -1,69 +0,0 @@
StartFontMetrics 2.0
Comment Generated by FontForge 20170719
Comment Creation Date: Sun Jul 23 19:47:25 2017
FontName OpenSansCondensed-Bold
FullName Open Sans Condensed Bold
FamilyName Open Sans Condensed
Weight Bold
Notice (Digitized data copyright (c) 2010-2011, Google Corporation.)
ItalicAngle 0
IsFixedPitch false
UnderlinePosition -205
UnderlineThickness 102
Version 1.11
EncodingScheme ISO10646-1
FontBBox -667 -290 1046 1062
CapHeight 714
XHeight 544
Ascender 760
Descender -240
StartCharMetrics 939
C 32 ; WX 247 ; N space ; B 0 0 0 0 ;
C 33 ; WX 270 ; N exclam ; B 54 -14 216 714 ;
C 34 ; WX 445 ; N quotedbl ; B 59 456 388 714 ;
C 35 ; WX 543 ; N numbersign ; B 20 0 525 714 ;
C 36 ; WX 462 ; N dollar ; B 36 -59 427 760 ;
C 37 ; WX 758 ; N percent ; B 30 -9 729 725 ;
C 38 ; WX 581 ; N ampersand ; B 28 -10 572 725 ;
C 39 ; WX 246 ; N quotesingle ; B 59 456 188 714 ;
C -1 ; WX 462 ; N six.os ; B 36 -10 427 724 ;
C -1 ; WX 420 ; N seven.os ; B 19 -170 402 544 ;
C -1 ; WX 462 ; N eight.os ; B 35 -10 429 724 ;
C -1 ; WX 461 ; N nine.os ; B 33 -182 424 564 ;
C -1 ; WX 496 ; N g.alt ; B 36 -241 442 555 ;
C -1 ; WX 496 ; N gcircumflex.alt ; B 36 -241 442 767 ;
C -1 ; WX 496 ; N gbreve.alt ; B 36 -241 442 766 ;
C -1 ; WX 496 ; N gdot.alt ; B 36 -241 442 756 ;
C -1 ; WX 496 ; N gcommaaccent.alt ; B 36 -241 442 767 ;
C -1 ; WX 0 ; N cyrotmarkcomb ; B -203 591 203 714 ;
EndCharMetrics
StartKernData
StartKernPairs 15878
KPX quotedbl uni1ECA 20
KPX quotedbl uni1EC8 20
KPX quotedbl Idotaccent 20
KPX quotedbl Iogonek 20
KPX quotedbl Imacron 20
KPX quotedbl Idieresis 20
KPX quotedbl Icircumflex 20
KPX quotedbl Iacute 20
KPX quotedbl Igrave 20
KPX quotedbl I 20
KPX quotedbl uni1EF9 20
KPX quoteleft q -20
KPX quoteleft o -20
KPX quoteleft g -9
KPX quoteleft e -20
KPX quoteleft d -20
KPX quoteleft c -20
KPX quoteleft Z 20
KPX Delta C -9
KPX Delta A -20
KPX Delta question 20
KPX Delta period -41
KPX Delta comma -41
KPX Delta quotesingle 41
KPX Delta quotedbl 41
EndKernPairs
EndKernData
EndFontMetrics

View File

@@ -1,464 +0,0 @@
StartFontMetrics 2.0
Comment Generated by FontForge 20170719
Comment Creation Date: Sun Jul 23 19:52:19 2017
FontName SpecialElite-Regular
FullName Special Elite
FamilyName Special Elite
Weight Book
Notice (Copyright (c) 2010 by Brian J. Bonislawsky DBA Astigmatic (AOETI). All rights reserved. Available under the Apache 2.0 licence.http://www.apache.org/licenses/LICENSE-2.0.html)
ItalicAngle 0
IsFixedPitch false
UnderlinePosition -133
UnderlineThickness 20
Version 1.000
EncodingScheme ISO10646-1
FontBBox -33 -322 1052 959
CapHeight 714
XHeight 487
Ascender 688
Descender -225
StartCharMetrics 371
C 32 ; WX 292 ; N space ; B 0 0 0 0 ;
C 33 ; WX 276 ; N exclam ; B 73 0 207 702 ;
C 34 ; WX 352 ; N quotedbl ; B 48 449 295 704 ;
C 35 ; WX 554 ; N numbersign ; B 31 -2 524 713 ;
C 36 ; WX 526 ; N dollar ; B 31 -201 498 919 ;
C 37 ; WX 666 ; N percent ; B 32 -186 642 872 ;
C 38 ; WX 676 ; N ampersand ; B 31 -5 645 705 ;
C 39 ; WX 196 ; N quotesingle ; B 48 449 143 703 ;
C 40 ; WX 279 ; N parenleft ; B 55 -71 243 757 ;
C 41 ; WX 281 ; N parenright ; B 37 -59 229 770 ;
C 42 ; WX 522 ; N asterisk ; B 32 276 493 707 ;
C 43 ; WX 496 ; N plus ; B 29 131 465 560 ;
C 44 ; WX 336 ; N comma ; B 39 -197 290 251 ;
C 45 ; WX 636 ; N hyphen ; B 63 273 573 397 ;
C 46 ; WX 349 ; N period ; B 52 -3 298 245 ;
C 47 ; WX 557 ; N slash ; B 23 -41 536 760 ;
C 48 ; WX 610 ; N zero ; B 55 0 560 720 ;
C 49 ; WX 569 ; N one ; B 27 -12 572 712 ;
C 50 ; WX 573 ; N two ; B 50 -25 541 680 ;
C 51 ; WX 557 ; N three ; B 44 -25 514 694 ;
C 52 ; WX 612 ; N four ; B 15 4 584 708 ;
C 53 ; WX 537 ; N five ; B 47 0 505 690 ;
C 54 ; WX 588 ; N six ; B 48 -10 548 707 ;
C 55 ; WX 555 ; N seven ; B 15 -34 549 734 ;
C 56 ; WX 598 ; N eight ; B 51 1 551 720 ;
C 57 ; WX 584 ; N nine ; B 48 -2 539 715 ;
C 58 ; WX 343 ; N colon ; B 51 -3 297 518 ;
C 59 ; WX 328 ; N semicolon ; B 45 -197 297 518 ;
C 60 ; WX 463 ; N less ; B 31 120 401 565 ;
C 61 ; WX 636 ; N equal ; B 63 186 573 513 ;
C 62 ; WX 463 ; N greater ; B 62 120 433 565 ;
C 63 ; WX 470 ; N question ; B 34 2 442 729 ;
C 64 ; WX 665 ; N at ; B 46 -4 618 680 ;
C 65 ; WX 549 ; N A ; B -1 -16 550 703 ;
C 66 ; WX 604 ; N B ; B 29 -6 557 704 ;
C 67 ; WX 579 ; N C ; B 46 -13 531 700 ;
C 68 ; WX 622 ; N D ; B 36 -17 579 713 ;
C 69 ; WX 638 ; N E ; B 38 -16 587 691 ;
C 70 ; WX 605 ; N F ; B 29 -9 595 709 ;
C 71 ; WX 615 ; N G ; B 45 -3 586 710 ;
C 72 ; WX 652 ; N H ; B 40 -20 622 690 ;
C 73 ; WX 495 ; N I ; B 26 -24 469 710 ;
C 74 ; WX 541 ; N J ; B 16 -3 539 703 ;
C 75 ; WX 582 ; N K ; B 28 -5 584 711 ;
C 76 ; WX 602 ; N L ; B 23 -14 583 718 ;
C 77 ; WX 697 ; N M ; B 46 -10 655 704 ;
C 78 ; WX 627 ; N N ; B 41 -15 595 700 ;
C 79 ; WX 616 ; N O ; B 42 -30 574 702 ;
C 80 ; WX 553 ; N P ; B 30 -12 527 689 ;
C 81 ; WX 602 ; N Q ; B 42 -98 571 711 ;
C 82 ; WX 636 ; N R ; B 14 -9 624 706 ;
C 83 ; WX 588 ; N S ; B 51 -13 547 690 ;
C 84 ; WX 594 ; N T ; B 25 1 564 707 ;
C 85 ; WX 621 ; N U ; B 24 -6 611 710 ;
C 86 ; WX 611 ; N V ; B -1 -15 614 726 ;
C 87 ; WX 643 ; N W ; B 8 0 614 689 ;
C 88 ; WX 582 ; N X ; B 3 -1 580 697 ;
C 89 ; WX 561 ; N Y ; B -21 -2 562 719 ;
C 90 ; WX 592 ; N Z ; B 49 -1 551 709 ;
C 91 ; WX 312 ; N bracketleft ; B 85 -72 297 754 ;
C 92 ; WX 557 ; N backslash ; B 21 -41 534 760 ;
C 249 ; WX 639 ; N ugrave ; B 5 -28 624 679 ;
C 250 ; WX 639 ; N uacute ; B 5 -28 624 682 ;
C 251 ; WX 639 ; N ucircumflex ; B 5 -28 624 691 ;
C 252 ; WX 639 ; N udieresis ; B 5 -28 624 649 ;
C 253 ; WX 592 ; N yacute ; B 0 -232 596 666 ;
C 254 ; WX 552 ; N thorn ; B -33 -221 512 699 ;
C 255 ; WX 592 ; N ydieresis ; B 0 -232 596 643 ;
C -1 ; WX 549 ; N Amacron ; B -1 -16 550 809 ;
C -1 ; WX 565 ; N amacron ; B 38 -6 561 619 ;
C -1 ; WX 549 ; N Abreve ; B -1 -16 550 890 ;
C -1 ; WX 565 ; N abreve ; B 38 -6 561 686 ;
C -1 ; WX 549 ; N Aogonek ; B -1 -138 589 703 ;
C -1 ; WX 565 ; N aogonek ; B 38 -118 624 502 ;
C -1 ; WX 579 ; N Cacute ; B 46 -13 531 900 ;
C -1 ; WX 547 ; N cacute ; B 39 -22 506 693 ;
C -1 ; WX 579 ; N Ccircumflex ; B 46 -13 531 890 ;
C -1 ; WX 547 ; N ccircumflex ; B 39 -22 506 689 ;
C -1 ; WX 579 ; N Cdotaccent ; B 46 -13 531 859 ;
C -1 ; WX 547 ; N cdotaccent ; B 39 -22 506 657 ;
C -1 ; WX 579 ; N Ccaron ; B 46 -13 531 918 ;
C -1 ; WX 547 ; N ccaron ; B 39 -22 506 710 ;
C -1 ; WX 622 ; N Dcaron ; B 36 -17 579 924 ;
C -1 ; WX 750 ; N dcaron ; B 40 -26 716 704 ;
C -1 ; WX 623 ; N Dcroat ; B 36 -17 580 713 ;
C -1 ; WX 603 ; N dcroat ; B 40 -26 597 714 ;
C -1 ; WX 638 ; N Emacron ; B 38 -16 587 798 ;
C -1 ; WX 543 ; N emacron ; B 40 -23 501 616 ;
C -1 ; WX 638 ; N Ebreve ; B 38 -16 587 876 ;
C -1 ; WX 543 ; N ebreve ; B 40 -23 501 683 ;
C -1 ; WX 638 ; N Edotaccent ; B 38 -16 587 848 ;
C -1 ; WX 543 ; N edotaccent ; B 40 -23 501 659 ;
C -1 ; WX 638 ; N Eogonek ; B 38 -113 610 691 ;
C -1 ; WX 543 ; N eogonek ; B 40 -145 501 499 ;
C -1 ; WX 638 ; N Ecaron ; B 38 -16 587 913 ;
C -1 ; WX 543 ; N ecaron ; B 40 -23 501 714 ;
C -1 ; WX 615 ; N Gcircumflex ; B 45 -3 586 906 ;
C -1 ; WX 583 ; N gcircumflex ; B 42 -224 562 676 ;
C -1 ; WX 615 ; N Gbreve ; B 45 -3 586 899 ;
C -1 ; WX 583 ; N gbreve ; B 42 -224 562 667 ;
C -1 ; WX 615 ; N Gdotaccent ; B 45 -3 586 871 ;
C -1 ; WX 583 ; N gdotaccent ; B 42 -224 562 637 ;
C -1 ; WX 615 ; N Gcommaaccent ; B 45 -253 586 710 ;
C -1 ; WX 583 ; N gcommaaccent ; B 42 -224 562 734 ;
C -1 ; WX 652 ; N Hcircumflex ; B 40 -20 622 897 ;
C -1 ; WX 616 ; N hcircumflex ; B 5 -29 601 688 ;
C -1 ; WX 652 ; N Hbar ; B 40 -20 622 690 ;
C -1 ; WX 616 ; N hbar ; B 5 -29 601 683 ;
C -1 ; WX 495 ; N Itilde ; B 26 -24 469 859 ;
C -1 ; WX 568 ; N itilde ; B 36 -42 568 615 ;
C -1 ; WX 495 ; N Imacron ; B 26 -24 469 819 ;
C -1 ; WX 568 ; N imacron ; B 36 -42 568 585 ;
C -1 ; WX 495 ; N Ibreve ; B 26 -24 469 901 ;
C -1 ; WX 568 ; N ibreve ; B 36 -42 568 661 ;
C -1 ; WX 495 ; N Iogonek ; B 26 -154 469 710 ;
C -1 ; WX 568 ; N iogonek ; B 36 -149 568 674 ;
C -1 ; WX 495 ; N Idotaccent ; B 26 -24 469 873 ;
C -1 ; WX 568 ; N dotlessi ; B 36 -42 568 468 ;
C -1 ; WX 1036 ; N IJ ; B 26 -24 1034 710 ;
C -1 ; WX 983 ; N ij ; B 36 -236 913 683 ;
C -1 ; WX 541 ; N Jcircumflex ; B 16 -3 539 913 ;
C -1 ; WX 415 ; N jcircumflex ; B -12 -236 405 699 ;
C -1 ; WX 582 ; N Kcommaaccent ; B 28 -253 584 711 ;
C -1 ; WX 620 ; N kcommaaccent ; B 11 -253 600 683 ;
C -1 ; WX 620 ; N kgreenlandic ; B 11 -28 600 482 ;
C -1 ; WX 602 ; N Lacute ; B 23 -14 583 923 ;
C -1 ; WX 540 ; N lacute ; B 4 -28 538 902 ;
C -1 ; WX 602 ; N Lcommaaccent ; B 23 -267 583 718 ;
C -1 ; WX 540 ; N lcommaaccent ; B 4 -267 538 682 ;
C -1 ; WX 602 ; N Lcaron ; B 23 -14 583 794 ;
C -1 ; WX 582 ; N lcaron ; B 4 -28 549 704 ;
C -1 ; WX 781 ; N Ldot ; B 23 -14 748 718 ;
C -1 ; WX 571 ; N ldotaccent ; B 4 -28 538 682 ;
C -1 ; WX 603 ; N Lslash ; B 24 -14 584 718 ;
C -1 ; WX 541 ; N lslash ; B 4 -28 538 682 ;
C -1 ; WX 627 ; N Nacute ; B 41 -15 595 894 ;
C -1 ; WX 632 ; N nacute ; B 32 -23 612 696 ;
C -1 ; WX 627 ; N Ncommaaccent ; B 41 -268 595 700 ;
C -1 ; WX 632 ; N ncommaaccent ; B 32 -268 612 491 ;
C -1 ; WX 627 ; N Ncaron ; B 41 -15 595 900 ;
C -1 ; WX 632 ; N ncaron ; B 32 -23 612 712 ;
C -1 ; WX 815 ; N napostrophe ; B 34 -23 795 704 ;
C -1 ; WX 627 ; N Eng ; B 41 -320 595 700 ;
C -1 ; WX 605 ; N eng ; B 32 -322 534 491 ;
C -1 ; WX 616 ; N Omacron ; B 42 -30 574 815 ;
C -1 ; WX 583 ; N omacron ; B 40 -34 543 598 ;
C -1 ; WX 616 ; N Obreve ; B 42 -30 574 891 ;
C -1 ; WX 583 ; N obreve ; B 40 -34 543 675 ;
C -1 ; WX 616 ; N Ohungarumlaut ; B 42 -30 574 907 ;
C -1 ; WX 583 ; N ohungarumlaut ; B 40 -34 545 693 ;
C -1 ; WX 1018 ; N OE ; B 42 -30 967 702 ;
C -1 ; WX 958 ; N oe ; B 40 -34 916 499 ;
C -1 ; WX 636 ; N Racute ; B 14 -9 624 910 ;
C -1 ; WX 579 ; N racute ; B 28 -16 566 693 ;
C -1 ; WX 636 ; N Rcommaaccent ; B 14 -268 624 706 ;
C -1 ; WX 579 ; N rcommaaccent ; B 28 -272 566 495 ;
C -1 ; WX 636 ; N Rcaron ; B 14 -9 624 927 ;
C -1 ; WX 579 ; N rcaron ; B 28 -16 566 698 ;
C -1 ; WX 588 ; N Sacute ; B 51 -13 547 900 ;
C -1 ; WX 519 ; N sacute ; B 48 -31 481 713 ;
C -1 ; WX 588 ; N Scircumflex ; B 51 -13 547 904 ;
C -1 ; WX 519 ; N scircumflex ; B 48 -31 481 710 ;
C -1 ; WX 588 ; N Scedilla ; B 51 -145 547 690 ;
C -1 ; WX 519 ; N scedilla ; B 48 -145 481 496 ;
C -1 ; WX 588 ; N Scaron ; B 51 -13 547 904 ;
C -1 ; WX 519 ; N scaron ; B 48 -31 481 710 ;
C -1 ; WX 594 ; N Tcommaaccent ; B 25 -263 564 707 ;
C -1 ; WX 510 ; N tcommaaccent ; B 0 -282 488 694 ;
C -1 ; WX 594 ; N Tcaron ; B 25 1 564 920 ;
C -1 ; WX 713 ; N tcaron ; B 0 -34 680 704 ;
C -1 ; WX 594 ; N Tbar ; B 25 1 564 707 ;
C -1 ; WX 510 ; N tbar ; B 0 -34 488 694 ;
C -1 ; WX 621 ; N Utilde ; B 24 -6 611 850 ;
C -1 ; WX 638 ; N utilde ; B 5 -28 624 636 ;
C -1 ; WX 621 ; N Umacron ; B 24 -6 611 811 ;
C -1 ; WX 638 ; N umacron ; B 5 -28 624 587 ;
C -1 ; WX 621 ; N Ubreve ; B 24 -6 611 888 ;
C -1 ; WX 638 ; N ubreve ; B 5 -28 624 665 ;
C -1 ; WX 621 ; N Uring ; B 24 -6 611 959 ;
C -1 ; WX 638 ; N uring ; B 5 -28 624 738 ;
C -1 ; WX 621 ; N Uhungarumlaut ; B 24 -6 611 918 ;
C -1 ; WX 638 ; N uhungarumlaut ; B 5 -28 624 691 ;
C -1 ; WX 621 ; N Uogonek ; B 24 -136 611 710 ;
C -1 ; WX 638 ; N uogonek ; B 5 -147 671 487 ;
C -1 ; WX 643 ; N Wcircumflex ; B 8 0 614 901 ;
C -1 ; WX 678 ; N wcircumflex ; B 5 -10 674 685 ;
C -1 ; WX 561 ; N Ycircumflex ; B -21 -2 562 934 ;
C -1 ; WX 592 ; N ycircumflex ; B 0 -232 596 691 ;
C -1 ; WX 561 ; N Ydieresis ; B -21 -2 562 885 ;
C -1 ; WX 592 ; N Zacute ; B 49 -1 551 905 ;
C -1 ; WX 528 ; N zacute ; B 45 -22 487 684 ;
C -1 ; WX 592 ; N Zdotaccent ; B 49 -1 551 866 ;
C -1 ; WX 528 ; N zdotaccent ; B 45 -22 487 632 ;
C -1 ; WX 592 ; N Zcaron ; B 49 -1 551 917 ;
C -1 ; WX 528 ; N zcaron ; B 45 -22 487 688 ;
C -1 ; WX 915 ; N AEacute ; B -11 -16 864 904 ;
C -1 ; WX 888 ; N aeacute ; B 38 -23 846 670 ;
C -1 ; WX 617 ; N Oslashacute ; B 43 -41 574 912 ;
C -1 ; WX 583 ; N oslashacute ; B 40 -73 543 697 ;
C -1 ; WX 415 ; N dotlessj ; B -12 -236 344 478 ;
C -1 ; WX 281 ; N circumflex ; B 0 558 282 746 ;
C -1 ; WX 281 ; N caron ; B 0 558 282 746 ;
C -1 ; WX 281 ; N breve ; B 0 585 282 746 ;
C -1 ; WX 132 ; N dotaccent ; B 0 600 133 729 ;
C -1 ; WX 214 ; N ring ; B 0 547 215 780 ;
C -1 ; WX 211 ; N ogonek ; B 0 -145 212 13 ;
C -1 ; WX 283 ; N tilde ; B 0 583 284 701 ;
C -1 ; WX 352 ; N hungarumlaut ; B 0 591 353 763 ;
C -1 ; WX 185 ; N uni0312 ; B 28 474 152 694 ;
C -1 ; WX 185 ; N uni0315 ; B 38 470 162 690 ;
C -1 ; WX 192 ; N uni0326 ; B 32 -253 156 -33 ;
C -1 ; WX 666 ; N mu ; B 24 -219 643 487 ;
C -1 ; WX 643 ; N Wgrave ; B 8 0 614 895 ;
C -1 ; WX 678 ; N wgrave ; B 5 -10 674 688 ;
C -1 ; WX 643 ; N Wacute ; B 8 0 614 898 ;
C -1 ; WX 678 ; N wacute ; B 5 -10 674 682 ;
C -1 ; WX 643 ; N Wdieresis ; B 8 0 614 868 ;
C -1 ; WX 678 ; N wdieresis ; B 5 -10 674 649 ;
C -1 ; WX 561 ; N Ygrave ; B -21 -2 562 900 ;
C -1 ; WX 592 ; N ygrave ; B 0 -232 596 666 ;
C -1 ; WX 611 ; N endash ; B 50 270 551 391 ;
C -1 ; WX 1113 ; N emdash ; B 51 270 1052 391 ;
C -1 ; WX 265 ; N quoteleft ; B 41 390 217 704 ;
C -1 ; WX 264 ; N quoteright ; B 54 390 230 704 ;
C -1 ; WX 274 ; N quotesinglbase ; B 46 -138 223 176 ;
C -1 ; WX 470 ; N quotedblleft ; B 41 390 422 704 ;
C -1 ; WX 469 ; N quotedblright ; B 54 390 436 704 ;
C -1 ; WX 479 ; N quotedblbase ; B 46 -138 428 176 ;
C -1 ; WX 389 ; N dagger ; B 30 -16 359 724 ;
C -1 ; WX 396 ; N daggerdbl ; B 35 -16 364 728 ;
C -1 ; WX 316 ; N bullet ; B 50 246 266 479 ;
C -1 ; WX 1063 ; N ellipsis ; B 52 -3 1016 245 ;
C -1 ; WX 897 ; N perthousand ; B 33 -230 873 828 ;
C -1 ; WX 296 ; N guilsinglleft ; B 44 149 232 434 ;
C -1 ; WX 295 ; N guilsinglright ; B 63 149 251 434 ;
C -1 ; WX 486 ; N fraction ; B -11 -53 501 748 ;
C -1 ; WX 732 ; N Euro ; B 31 71 683 590 ;
C -1 ; WX 757 ; N trademark ; B 60 303 703 693 ;
C -1 ; WX 585 ; N partialdiff ; B 36 -47 553 772 ;
C -1 ; WX 564 ; N product ; B 26 -17 534 707 ;
C -1 ; WX 577 ; N minus ; B 63 282 514 395 ;
C -1 ; WX 565 ; N approxequal ; B 59 137 513 522 ;
C -1 ; WX 593 ; N notequal ; B 44 71 554 644 ;
C -1 ; WX 1041 ; N fi ; B 20 -42 1041 702 ;
C -1 ; WX 1013 ; N fl ; B 20 -29 1011 702 ;
C -1 ; WX 292 ; N .notdef ; B 0 0 0 0 ;
C -1 ; WX 0 ; N .null ; B 0 0 0 0 ;
C -1 ; WX 292 ; N nonmarkingreturn ; B 0 0 0 0 ;
EndCharMetrics
StartKernData
StartKernPairs 6408
KPX quotedbl period -104
KPX quotedbl comma -103
KPX quotedbl Jcircumflex -34
KPX quotedbl Aogonek -31
KPX quotedbl Abreve -31
KPX quotedbl Amacron -31
KPX quotedbl AEacute -31
KPX quotedbl Aacute -31
KPX quotedbl Acircumflex -31
KPX quotedbl Atilde -31
KPX quotedbl Agrave -31
KPX quotedbl Aring -31
KPX quotedbl Adieresis -31
KPX quotedbl AE -31
KPX quotedbl J -34
KPX quotedbl A -31
KPX quotedbl quotedblbase -117
KPX quotedbl quotesinglbase -117
KPX quotedbl ellipsis -104
KPX quotedbl slash -73
KPX quotedbl ampersand -22
KPX quotedbl four -27
KPX ampersand Ycircumflex -40
KPX ampersand Ygrave -40
KPX ampersand Ydieresis -40
KPX ampersand Yacute -40
KPX ampersand Y -40
KPX ampersand V -36
KPX quotesingle period -97
KPX quotesingle comma -97
KPX quotesingle Jcircumflex -34
KPX quotesingle Aogonek -31
KPX quotesingle Abreve -31
KPX quotesingle Amacron -31
KPX hyphen T -28
KPX hyphen one -68
KPX hyphen B -25
KPX hyphen seven -56
KPX slash rcommaaccent -27
KPX slash ncommaaccent -29
KPX slash gcommaaccent -61
KPX slash Jcircumflex -29
KPX slash iogonek -26
KPX slash ibreve -26
KPX slash imacron -26
KPX slash itilde -26
KPX slash oslashacute -54
KPX slash nacute -29
KPX slash eng -29
KPX slash ncaron -29
KPX slash racute -27
KPX slash scedilla -43
KPX slash scircumflex -43
KPX slash sacute -43
KPX slash rcaron -27
KPX slash ohungarumlaut -54
KPX slash obreve -54
KPX slash omacron -54
KPX slash wgrave -23
KPX slash wcircumflex -23
KPX slash wdieresis -23
KPX slash wacute -23
KPX slash zdotaccent -41
KPX J ebreve -32
KPX J emacron -32
KPX J edieresis -32
KPX J ecircumflex -32
KPX J egrave -32
KPX J eacute -32
KPX J e -32
KPX J Aogonek -34
KPX J Abreve -34
KPX J Amacron -34
KPX J AEacute -34
KPX J Aacute -34
KPX J Acircumflex -34
KPX J Atilde -34
KPX J Agrave -34
KPX J Aring -34
KPX J Adieresis -34
KPX J AE -34
KPX J A -34
KPX J comma -29
KPX J period -30
KPX J v -29
KPX J hyphen -30
KPX J quotedblbase -34
KPX J quotesinglbase -34
KPX J guilsinglright -25
KPX J guilsinglleft -25
KPX J emdash -30
KPX J endash -30
KPX J guillemotright -25
KPX J guillemotleft -25
KPX J germandbls -36
KPX J ellipsis -30
KPX J slash -34
KPX J p -28
KPX J m -35
KPX J b 54
KPX K ycircumflex -60
KPX K ygrave -60
KPX K ydieresis -60
KPX K yacute -60
KPX K y -60
KPX K wgrave -36
KPX K wcircumflex -36
KPX K wdieresis -36
KPX K wacute -36
KPX K w -36
KPX K uogonek -25
KPX K uhungarumlaut -25
KPX K uring -25
KPX K ubreve -25
KPX K umacron -25
KPX K utilde -25
KPX K udieresis -25
KPX K ucircumflex -25
KPX K ugrave -25
KPX K uacute -25
KPX K u -25
KPX K q -23
KPX K oslashacute -28
KPX K ohungarumlaut -28
KPX K obreve -28
KPX K omacron -28
KPX K otilde -28
KPX K odieresis -28
KPX K ocircumflex -28
KPX K ograve -28
KPX K oacute -28
KPX K eth -28
KPX K oe -28
KPX K oslash -28
KPX K o -28
KPX K dcaron -24
KPX K d -24
KPX K ccaron -27
KPX K cdotaccent -27
KPX K ccircumflex -27
KPX K cacute -27
KPX K ccedilla -27
KPX K c -27
KPX K ecaron -27
KPX K eogonek -27
KPX K edotaccent -27
KPX K ebreve -27
KPX K emacron -27
KPX K edieresis -27
KPX K ecircumflex -27
KPX K egrave -27
KPX K eacute -27
KPX K e -27
KPX K v -49
KPX K hyphen -38
KPX K guilsinglleft -24
KPX K emdash -38
KPX K endash -38
KPX K guillemotleft -24
KPX K b 49
KPX L ycircumflex -36
KPX L ygrave -36
KPX L ydieresis -36
KPX L yacute -36
KPX L y -36
KPX L wgrave -23
KPX L wcircumflex -23
KPX L wdieresis -23
KPX L wacute -23
KPX L w -23
KPX L V -43
KPX L Tcommaaccent -36
KPX L Tbar -36
KPX L Tcaron -36
KPX L T -36
KPX L quoteright -49
KPX L v -32
KPX L quoteleft -54
KPX L quotedblright -49
KPX L quotedblleft -54
KPX L trademark -29
KPX L backslash -50
KPX L asterisk -30
KPX trademark Aring -24
KPX trademark Adieresis -24
KPX trademark Yacute 29
KPX trademark AE -24
KPX trademark Y 29
KPX trademark A -24
KPX trademark b 31
EndKernPairs
EndKernData
EndFontMetrics
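
The character-metric lines above follow the AFM pattern C <code> ; WX <width> ; N <name> ; B <llx> <lly> <urx> <ury> ;, and the kern section lists KPX <left> <right> <adjust> pairs. A minimal Python sketch of reading just those two record types (assuming only the semicolon-delimited layout shown here, not the full AFM spec) could look like:

import re

def parse_afm(path):
    """Collect character metrics (C/WX/N/B lines) and KPX kern pairs."""
    glyphs, kerns = {}, {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("C "):
                # e.g. "C 65 ; WX 549 ; N A ; B -1 -16 550 703 ;"
                fields = {}
                for part in line.split(";"):
                    part = part.strip()
                    if not part:
                        continue
                    key, _, value = part.partition(" ")
                    fields[key] = value.strip()
                name = fields.get("N")
                if name:
                    glyphs[name] = {
                        "code": int(fields["C"]),
                        "width": int(fields["WX"]),
                        "bbox": [int(v) for v in fields["B"].split()],
                    }
            elif line.startswith("KPX "):
                # e.g. "KPX quotedbl period -104"
                _, left, right, adjust = line.split()
                kerns[(left, right)] = int(adjust)
    return glyphs, kerns

With the metrics loaded, the advance of quotesingle followed by period would be glyphs["quotesingle"]["width"] + kerns[("quotesingle", "period")], i.e. 196 - 97 = 99 units in the data above.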

@@ -1,23 +0,0 @@
StartFontMetrics 2.0
Comment Generated by FontForge 20170719
Comment Creation Date: Sun Jul 23 23:14:02 2017
FontName Greek_Lambda_Character-Regular
FullName Greek_Lambda_Character Regular
FamilyName Greek_Lambda_Character
Weight Regular
Notice (NONE. NADA. PUBLIC DOMAIN, BOI)
ItalicAngle 0
IsFixedPitch false
UnderlinePosition -175
UnderlineThickness 90
Version 020.017
EncodingScheme ISO10646-1
FontBBox 33 -177 566 760
StartCharMetrics 5
C 13 ; WX 602 ; N uni000D ; B 0 0 0 0 ;
C 32 ; WX 602 ; N space ; B 0 0 0 0 ;
C -1 ; WX 602 ; N lambda ; B 33 0 566 760 ;
C -1 ; WX 602 ; N .notdef ; B 50 -177 551 706 ;
C -1 ; WX 0 ; N NULL ; B 0 0 0 0 ;
EndCharMetrics
EndFontMetrics

@@ -1,70 +0,0 @@
# Contributor: Natanael Copa <ncopa@alpinelinux.org>
# Maintainer: Natanael Copa <ncopa@alpinelinux.org>
pkgname=abuild
pkgver=2.27.0
_ver=${pkgver%_git*}
pkgrel=0
pkgdesc="Script to build Alpine Packages"
url="http://git.alpinelinux.org/cgit/abuild/"
arch="all"
license="GPL2"
depends="fakeroot sudo pax-utils openssl apk-tools>=2.0.7-r1 libc-utils
attr tar pkgconf patch"
if [ "$CBUILD" = "$CHOST" ]; then
depends="$depends curl"
fi
makedepends_build="pkgconfig"
makedepends_host="openssl-dev"
makedepends="$makedepends_host $makedepends_build"
install="$pkgname.pre-install $pkgname.pre-upgrade"
subpackages="apkbuild-cpan:cpan apkbuild-gem-resolver:gems"
options="suid"
pkggroups="abuild"
source="http://dev.alpinelinux.org/archive/abuild/abuild-$_ver.tar.xz
"
_builddir="$srcdir/$pkgname-$_ver"
prepare() {
cd "$_builddir"
for i in $source; do
case $i in
*.patch)
msg "Applying $i"
patch -p1 -i "$srcdir"/$i || return 1
;;
esac
done
sed -i -e "/^CHOST=/s/=.*/=$CHOST/" abuild.conf
}
build() {
cd "$_builddir"
make || return 1
}
package() {
cd "$_builddir"
make install DESTDIR="$pkgdir" || return 1
install -m 644 abuild.conf "$pkgdir"/etc/abuild.conf || return 1
install -d -m 775 -g abuild "$pkgdir"/var/cache/distfiles || return 1
}
cpan() {
pkgdesc="Script to generate perl APKBUILD from CPAN"
depends="perl perl-libwww perl-json"
arch="noarch"
mkdir -p "$subpkgdir"/usr/bin
mv "$pkgdir"/usr/bin/apkbuild-cpan "$subpkgdir"/usr/bin/
}
gems() {
pkgdesc="APKBUILD dependency resolver for RubyGems"
depends="ruby ruby-augeas"
arch="noarch"
mkdir -p "$subpkgdir"/usr/bin
mv "$pkgdir"/usr/bin/apkbuild-gem-resolver "$subpkgdir"/usr/bin/
}
md5sums="c67e4c971c54b4d550e16db3ba331f96 abuild-2.27.0.tar.xz"
sha256sums="c8db017e3dd168edb20ceeb91971535cf66b8c95f29d3288f88ac755bffc60e5 abuild-2.27.0.tar.xz"
sha512sums="98e1da4e47f3ab68700b3bc992c83e103f770f3196e433788ee74145f57cd33e5239c87f0a7a15f7266840d5bad893fc8c0d4c826d663df53deaee2678c56984 abuild-2.27.0.tar.xz"

@@ -1,77 +0,0 @@
/*
* This is a sample script.
*/
#include "BotManagerInterface.acs"
BotManager::BotManager g_BotManager( @CreateDumbBot );
CConCommand@ m_pAddBot;
void PluginInit()
{
g_BotManager.PluginInit();
@m_pAddBot = @CConCommand( "addbot", "Adds a new bot with the given name", @AddBotCallback );
}
void AddBotCallback( const CCommand@ args )
{
if( args.ArgC() < 2 )
{
g_Game.AlertMessage( at_console, "Usage: addbot <name>" );
return;
}
BotManager::BaseBot@ pBot = g_BotManager.CreateBot( args[ 1 ] );
if( pBot !is null )
{
g_Game.AlertMessage( at_console, "Created bot " + args[ 1 ] + "\n" );
}
else
{
g_Game.AlertMessage( at_console, "Could not create bot\n" );
}
}
final class DumbBot : BotManager::BaseBot
{
DumbBot( CBasePlayer@ pPlayer )
{
super( pPlayer );
}
void Think()
{
BotManager::BaseBot::Think();
// If the bot is dead and can be respawned, send a button press
if( Player.pev.deadflag >= DEAD_RESPAWNABLE )
{
Player.pev.button |= IN_ATTACK;
}
else
Player.pev.button &= ~IN_ATTACK;
KeyValueBuffer@ pInfoBuffer = g_EngineFuncs.GetInfoKeyBuffer( Player.edict() );
pInfoBuffer.SetValue( "topcolor", Math.RandomLong( 0, 255 ) );
pInfoBuffer.SetValue( "bottomcolor", Math.RandomLong( 0, 255 ) );
if( Math.RandomLong( 0, 100 ) > 10 )
Player.pev.button |= IN_ATTACK;
else
Player.pev.button &= ~IN_ATTACK;
for( uint uiIndex = 0; uiIndex < 3; ++uiIndex )
{
m_vecVelocity[ uiIndex ] = Math.RandomLong( -50, 50 );
}
}
}
BotManager::BaseBot@ CreateDumbBot( CBasePlayer@ pPlayer )
{
return @DumbBot( pPlayer );
}

@@ -1,396 +0,0 @@
// Sample script.
// Source: https://github.com/codecat/ssbd-payload
array<WorldScript::PayloadBeginTrigger@> g_payloadBeginTriggers;
array<WorldScript::PayloadTeamForcefield@> g_teamForceFields;
[GameMode]
class Payload : TeamVersusGameMode
{
[Editable]
UnitFeed PayloadUnit;
[Editable]
UnitFeed FirstNode;
[Editable default=10]
int PrepareTime;
[Editable default=300]
int TimeLimit;
[Editable default=90]
int TimeAddCheckpoint;
[Editable default=2]
float TimeOvertime;
[Editable default=1000]
int TimePayloadHeal;
[Editable default=1]
int PayloadHeal;
PayloadBehavior@ m_payload;
int m_tmStarting;
int m_tmStarted;
int m_tmLimitCustom;
int m_tmOvertime;
int m_tmInOvertime;
PayloadHUD@ m_payloadHUD;
PayloadClassSwitchWindow@ m_switchClass;
array<SValue@>@ m_switchedSidesData;
Payload(Scene@ scene)
{
super(scene);
m_tmRespawnCountdown = 5000;
@m_payloadHUD = PayloadHUD(m_guiBuilder);
@m_switchTeam = PayloadTeamSwitchWindow(m_guiBuilder);
@m_switchClass = PayloadClassSwitchWindow(m_guiBuilder);
}
void UpdateFrame(int ms, GameInput& gameInput, MenuInput& menuInput) override
{
TeamVersusGameMode::UpdateFrame(ms, gameInput, menuInput);
m_payloadHUD.Update(ms);
if (Network::IsServer())
{
uint64 tmNow = CurrPlaytimeLevel();
if (m_tmStarting == 0)
{
if (GetPlayersInTeam(0) > 0 && GetPlayersInTeam(1) > 0)
{
m_tmStarting = tmNow;
(Network::Message("GameStarting") << m_tmStarting).SendToAll();
}
}
if (m_tmStarting > 0 && m_tmStarted == 0 && tmNow - m_tmStarting > PrepareTime * 1000)
{
m_tmStarted = tmNow;
(Network::Message("GameStarted") << m_tmStarted).SendToAll();
for (uint i = 0; i < g_payloadBeginTriggers.length(); i++)
{
WorldScript@ ws = WorldScript::GetWorldScript(g_scene, g_payloadBeginTriggers[i]);
ws.Execute();
}
}
}
if (!m_ended && m_tmStarted > 0)
CheckTimeReached(ms);
}
string NameForTeam(int index) override
{
if (index == 0)
return "Defenders";
else if (index == 1)
return "Attackers";
return "Unknown";
}
void CheckTimeReached(int dt)
{
// Check if time limit is not reached yet
if (m_tmLimitCustom - (CurrPlaytimeLevel() - m_tmStarted) > 0)
{
// Don't need to continue checking
m_tmOvertime = 0;
m_tmInOvertime = 0;
return;
}
// Track how long we have been in overtime, so the time limit can be corrected when a checkpoint is reached
if (m_tmOvertime > 0)
m_tmInOvertime += dt;
// Check if there are any attackers still inside
if (m_payload.AttackersInside() > 0)
{
// We have overtime
m_tmOvertime = int(TimeOvertime * 1000);
return;
}
// If we have overtime
if (m_tmOvertime > 0)
{
// Decrease timer
m_tmOvertime -= dt;
if (m_tmOvertime <= 0)
{
// Overtime countdown reached, time limit reached
TimeReached();
}
}
else
{
// No overtime, so time limit is reached
TimeReached();
}
}
void TimeReached()
{
if (!Network::IsServer())
return;
(Network::Message("TimeReached")).SendToAll();
SetWinner(false);
}
bool ShouldFreezeControls() override
{
return m_switchClass.m_visible
|| TeamVersusGameMode::ShouldFreezeControls();
}
bool ShouldDisplayCursor() override
{
return m_switchClass.m_visible
|| TeamVersusGameMode::ShouldDisplayCursor();
}
bool CanSwitchTeams() override
{
return m_tmStarted == 0;
}
PlayerRecord@ CreatePlayerRecord() override
{
return PayloadPlayerRecord();
}
int GetPlayerClassCount(PlayerClass playerClass, TeamVersusScore@ team)
{
if (team is null)
return 0;
int ret = 0;
for (uint i = 0; i < team.m_players.length(); i++)
{
if (team.m_players[i].peer == 255)
continue;
auto record = cast<PayloadPlayerRecord>(team.m_players[i]);
if (record.playerClass == playerClass)
ret++;
}
return ret;
}
void PlayerClassesUpdated()
{
m_switchClass.PlayerClassesUpdated();
}
void SetWinner(bool attackers)
{
if (attackers)
print("Attackers win!");
else
print("Defenders win!");
m_payloadHUD.Winner(attackers);
EndMatch();
}
void DisplayPlayerName(int idt, SpriteBatch& sb, PlayerRecord@ record, PlayerHusk@ plr, vec2 pos) override
{
TeamVersusGameMode::DisplayPlayerName(idt, sb, record, plr, pos);
m_payloadHUD.DisplayPlayerName(idt, sb, cast<PayloadPlayerRecord>(record), plr, pos);
}
void RenderFrame(int idt, SpriteBatch& sb) override
{
Player@ player = GetLocalPlayer();
if (player !is null)
{
PlayerHealgun@ healgun = cast<PlayerHealgun>(player.m_currWeapon);
if (healgun !is null)
healgun.RenderMarkers(idt, sb);
}
TeamVersusGameMode::RenderFrame(idt, sb);
}
void RenderWidgets(PlayerRecord@ player, int idt, SpriteBatch& sb) override
{
m_payloadHUD.Draw(sb, idt);
TeamVersusGameMode::RenderWidgets(player, idt, sb);
m_switchClass.Draw(sb, idt);
}
void GoNextMap() override
{
if (m_switchedSidesData !is null)
{
TeamVersusGameMode::GoNextMap();
return;
}
ChangeLevel(GetCurrentLevelFilename());
}
void SpawnPlayers() override
{
if (m_switchedSidesData is null)
{
TeamVersusGameMode::SpawnPlayers();
return;
}
if (Network::IsServer())
{
for (uint i = 0; i < m_switchedSidesData.length(); i += 2)
{
uint peer = uint(m_switchedSidesData[i].GetInteger());
uint team = uint(m_switchedSidesData[i + 1].GetInteger());
TeamVersusScore@ joinScore = FindTeamScore(team);
if (joinScore is m_teamScores[0])
@joinScore = m_teamScores[1];
else
@joinScore = m_teamScores[0];
for (uint j = 0; j < g_players.length(); j++)
{
if (g_players[j].peer != peer)
continue;
SpawnPlayer(j, vec2(), 0, joinScore.m_team);
break;
}
}
}
}
void Save(SValueBuilder& builder) override
{
if (m_switchedSidesData is null)
{
builder.PushArray("teams");
for (uint i = 0; i < g_players.length(); i++)
{
if (g_players[i].peer == 255)
continue;
builder.PushInteger(g_players[i].peer);
builder.PushInteger(g_players[i].team);
}
builder.PopArray();
}
TeamVersusGameMode::Save(builder);
}
void Start(uint8 peer, SValue@ save, StartMode sMode) override
{
if (save !is null)
@m_switchedSidesData = GetParamArray(UnitPtr(), save, "teams", false);
TeamVersusGameMode::Start(peer, save, sMode);
m_tmLimit = 0; // infinite time limit as far as VersusGameMode is concerned
m_tmLimitCustom = TimeLimit * 1000; // 5 minutes by default
@m_payload = cast<PayloadBehavior>(PayloadUnit.FetchFirst().GetScriptBehavior());
if (m_payload is null)
PrintError("PayloadUnit is not a PayloadBehavior!");
UnitPtr unitFirstNode = FirstNode.FetchFirst();
if (unitFirstNode.IsValid())
{
auto node = cast<WorldScript::PayloadNode>(unitFirstNode.GetScriptBehavior());
if (node !is null)
@m_payload.m_targetNode = node;
else
PrintError("First target node is not a PayloadNode script!");
}
else
PrintError("First target node was not set!");
WorldScript::PayloadNode@ prevNode;
float totalDistance = 0.0f;
UnitPtr unitNode = unitFirstNode;
while (unitNode.IsValid())
{
auto node = cast<WorldScript::PayloadNode>(unitNode.GetScriptBehavior());
if (node is null)
break;
unitNode = node.NextNode.FetchFirst();
@node.m_prevNode = prevNode;
@node.m_nextNode = cast<WorldScript::PayloadNode>(unitNode.GetScriptBehavior());
if (prevNode !is null)
totalDistance += dist(prevNode.Position, node.Position);
@prevNode = node;
}
float currDistance = 0.0f;
auto distNode = cast<WorldScript::PayloadNode>(unitFirstNode.GetScriptBehavior());
while (distNode !is null)
{
if (distNode.m_prevNode is null)
distNode.m_locationFactor = 0.0f;
else
{
currDistance += dist(distNode.m_prevNode.Position, distNode.Position);
distNode.m_locationFactor = currDistance / totalDistance;
}
@distNode = distNode.m_nextNode;
}
m_payloadHUD.AddCheckpoints();
}
void SpawnPlayer(int i, vec2 pos = vec2(), int unitId = 0, uint team = 0) override
{
TeamVersusGameMode::SpawnPlayer(i, pos, unitId, team);
PayloadPlayerRecord@ record = cast<PayloadPlayerRecord>(g_players[i]);
record.HandlePlayerClass();
if (g_players[i].local)
{
//TODO: This doesn't work well
bool localAttackers = (team == HashString("player_1"));
for (uint j = 0; j < g_teamForceFields.length(); j++)
{
bool hasCollision = (localAttackers != g_teamForceFields[j].Attackers);
auto units = g_teamForceFields[j].Units.FetchAll();
for (uint k = 0; k < units.length(); k++)
{
PhysicsBody@ body = units[k].GetPhysicsBody();
if (body is null)
{
PrintError("PhysicsBody for unit " + units[k].GetDebugName() + " is null");
continue;
}
body.SetActive(hasCollision);
}
}
}
}
}
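
Start() above walks the node chain, accumulates the distance between consecutive nodes, and stores each node's cumulative distance divided by the total as m_locationFactor, which the HUD uses to place checkpoints on a 0-to-1 track. A minimal Python sketch of that normalization, assuming nodes are plain 2-D points:

import math

def location_factors(points):
    """Cumulative distance along a node chain, normalized to 0..1."""
    if len(points) < 2:
        return [0.0 for _ in points]
    steps = [math.dist(a, b) for a, b in zip(points, points[1:])]
    total = sum(steps)
    factors, run = [0.0], 0.0
    for d in steps:
        run += d
        factors.append(run / total)
    return factors

# e.g. three equally spaced nodes -> [0.0, 0.5, 1.0]
print(location_factors([(0, 0), (10, 0), (20, 0)]))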

@@ -1,110 +0,0 @@
<?xml version="1.0" encoding="iso-8859-1"?>
<project name="WebBuild">
<!-- generate timestamps -->
<tstamp />
<!-- Debugging Macro -->
<import file="echopath.xml" />
<!-- JS build files macro -->
<import file="rhinoscript.xml" />
<!-- Component Build Files -->
<import file="setup.xml" />
<import file="clean.xml" />
<import file="copy.xml" />
<import file="file.transform.xml" />
<import file="external.tools.xml" />
<import file="rename.xml" />
<import file="js.xml" />
<import file="css.xml" />
<import file="img.xml" />
<import file="png8.xml" />
<import file="yui.xml" />
<import file="cdn.xml" />
<import file="datauri.xml" />
<import file="devlive.xml" />
<!-- This dirname is the only complete path we know for sure, everything builds off of it -->
<dirname property="dir.build" file="${ant.file.WebBuild}" />
<!-- get name for newly built folder -->
<basename property="app.name" file="${basedir}" />
<!-- read global properties file -->
<property file="${dir.build}\build.properties" />
<!-- Build Directories -->
<property name="dir.build.js" location="${dir.build}/js" />
<!-- App Directories -->
<property name="dir.app" location="${dir.result}/${app.name}" />
<property name="dir.app.temp" location="${dir.temp}/${app.name}" />
<property name="dir.app.files" location="${dir.app.temp}/${dir.files}" />
<!-- Files -->
<property name="mapping.js" location="${dir.app.temp}/${mapping.file.js}" />
<property name="mapping.css" location="${dir.app.temp}/${mapping.file.css}" />
<property name="mapping.img" location="${dir.app.temp}/${mapping.file.img}" />
<property name="mapping.swf" location="${dir.app.temp}/${mapping.file.swf}" />
<property name="mapping.fonts" location="${dir.app.temp}/${mapping.file.fonts}" />
<!-- Tool Directories -->
<property name="dir.bin" location="${dir.build}/Bin" />
<property name="dir.jar" location="${dir.bin}/jar" />
<!-- Tool Files -->
<property name="tools.compressor" location="${dir.jar}/${tools.file.compressor}" />
<property name="tools.cssembed" location="${dir.jar}/${tools.file.cssembed}" />
<property name="tools.filetransform" location="${dir.jar}/${tools.file.filetransform}" />
<property name="tools.optipng" location="${dir.bin}/${tools.file.optipng}" />
<property name="tools.jpegtran" location="${dir.bin}/${tools.file.jpegtran}" />
<!-- BUILD TARGETS -->
<!-- low level utility build targets -->
<!-- Build the tools -->
<target name="-setup.build.tools"
depends="-define.filetransform, -define.cssembed, -define.yuicompressor, -define.jsclasspath"
/>
<!-- set up filesystem properties -->
<target
name="-setup"
depends="-setup.mode, -setup.conditions, -setup.js, -setup.css, -setup.swf, -setup.img, -setup.fonts, -setup.yui"
/>
<!-- utility-ish targets -->
<target name="copy" depends="clean, tools, -copy" />
<target name="tools" depends="-setup.build.tools" />
<target name="finalize" depends="copy, -finalize" />
<target name="-prepare" depends="copy, -setup" />
<!-- individual component build targets (empty descriptions are to make sure they show in "ant -p") -->
<target name="devlive" depends="-prepare, -devlive" description="" />
<target name="js" depends="-prepare, -js" description="" />
<target name="css" depends="-prepare, -css" description="" />
<target name="rename" depends="-prepare, -rename" description="" />
<target name="yui" depends="-prepare, rename, -yui" description="" />
<target name="cdn" depends="-prepare, -cdn" description="" />
<!-- high level build targets (images are excluded here on purpose; they are slow to build) -->
<target name="core"
depends="devlive, js, css, cdn, rename, yui, -js.inline"
description="Core build work"
/>
<target name="prod"
depends="core, finalize"
description="Full Production Build"
/>
<!-- debug target -->
<target name="debug" depends="-setup">
<echoproperties/>
</target>
</project>

@@ -1 +0,0 @@
ant.xml

@@ -1,17 +0,0 @@
#######################
# HOSTNAME
######################
<VirtualHost 127.0.0.1:PORT>
ServerAdmin patrick@heysparkbox.com
DocumentRoot "/var/www/HOSTNAME"
ServerName HOSTNAME
<Directory "/var/www/HOSTNAME">
Options Indexes MultiViews FollowSymLinks
AllowOverride All
Order allow,deny
Allow from all
DirectoryIndex index.php
</Directory>
</VirtualHost>
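
The HOSTNAME and PORT tokens above are literal placeholders. How they get substituted is not part of the fixture; one plausible helper (the file names and values below are assumptions, for illustration only) is:

from pathlib import Path

def render_vhost(template_path: str, hostname: str, port: int) -> str:
    """Fill in the literal HOSTNAME/PORT tokens of a vhost template like the one above."""
    text = Path(template_path).read_text()
    return text.replace("HOSTNAME", hostname).replace("PORT", str(port))

# Hypothetical usage (names are placeholders, not from the fixture):
# config = render_vhost("vhost.template", "example.test", 8080)
# Path("/etc/apache2/sites-available/example.test.conf").write_text(config)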

@@ -1,9 +1,6 @@
AsciiDoc Home Page
==================
Title
-----
Example Articles
~~~~~~~~~~~~~~~~
- Item 1

@@ -1,66 +0,0 @@
ORG 0000h
SJMP START
ORG 0003h
LCALL INT0_ISR
RETI
ORG 000Bh
LCALL T0_ISR
RETI
ORG 0013h
LCALL INT1_ISR
RETI
ORG 001Bh
LCALL T1_ISR
RETI
ORG 0023h
LCALL UART_ISR
RETI
ORG 0030h
START:
MOV A,#11111110b
SETB IT0 ; Set External Interrupt 0 to be falling edge triggered
SETB EX0 ; Enable External Interrupt 0
SETB EA ; Enable Interrupt
LEFT:
CJNE A,#01111111b,LOOP1
JMP RIGHT
LOOP1:
MOV P1,A
RL A
LCALL DELAY
SJMP LEFT
RIGHT:
CJNE A,#11111110b,LOOP2
JMP LEFT
LOOP2:
MOV P1,A
RR A
LCALL DELAY
SJMP RIGHT
INT0_ISR:
MOV R1,#3
FLASH:
MOV P1,#00h
LCALL DELAY
MOV P1,#0FFh
LCALL DELAY
DJNZ R1,FLASH
RET
T0_ISR:
RET
INT1_ISR:
RET
T1_ISR:
RET
UART_ISR:
RET
DELAY: MOV R5,#20 ;R5*20 mS
D1: MOV R6,#40
D2: MOV R7,#249
DJNZ R7,$
DJNZ R6,D2
DJNZ R5,D1
RET
END
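
The ;R5*20 mS comment on DELAY holds under the usual assumption of a 12 MHz 8051 (one machine cycle per microsecond, two cycles per DJNZ): the inner DJNZ R7,$ loop takes about 249 * 2 = 498 us, the middle loop repeats it roughly 40 times for about 20 ms, and the outer loop runs 20 of those for roughly 400 ms per call. A small Python check of that estimate:

# Rough delay estimate for the DELAY routine above, assuming a 12 MHz 8051
# (1 machine cycle = 1 us) and the standard 2-cycle DJNZ; the overhead of the
# MOV instructions is ignored, so this is only an approximation.
CYCLE_US = 1.0
DJNZ_CYCLES = 2

inner_us = 249 * DJNZ_CYCLES * CYCLE_US               # DJNZ R7,$
middle_us = 40 * (inner_us + DJNZ_CYCLES * CYCLE_US)  # DJNZ R6,D2 around it
total_ms = 20 * (middle_us + DJNZ_CYCLES * CYCLE_US) / 1000.0

print(f"per R5 iteration ~ {middle_us / 1000:.1f} ms, total ~ {total_ms:.0f} ms")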

File diff suppressed because it is too large.

@@ -1,245 +0,0 @@
push r2
dint
nop
bis #MPYDLYWRTEN,&MPY32CTL0
bic #MPYDLY32,&MPY32CTL0
mov #SUMEXT,r13
clr r12
mov @r15+,r4
mov @r15+,r5
mov @r15+,r6
mov @r15+,r7
mov @r15+,r8
mov @r15+,r9
mov @r15+,r10
mov @r15+,r11
sub #2*8,r15
/* SELF_STEP_FIRST */
mov r4,&MPY32L
mov r5,&MPY32H
mov r4,&OP2L
mov r5,&OP2H
/* COLUMN_END */
mov &RES0,2*0(r14)
mov &RES1,2*(0+1)(r14)
mov &RES2,&RES0
mov &RES3,&RES1
mov r12,&RES2
clr &RES3
/* STEP_1 */
mov r4,&MAC32L
mov r5,&MAC32H
mov r6,&OP2L
mov r7,&OP2H
add &SUMEXT,r12
mov r6,&OP2L
mov r7,&OP2H
/* COLUMN_END */
mov &RES0,2*2(r14)
add @r13,r12
mov &RES1,2*(2+1)(r14)
mov &RES2,&RES0
mov &RES3,&RES1
mov r12,&RES2
clr &RES3
clr r12
/* STEP_1 */
mov r4,&MAC32L
mov r5,&MAC32H
mov r8,&OP2L
mov r9,&OP2H
add &SUMEXT,r12
mov r8,&OP2L
mov r9,&OP2H
/* SELF_STEP */
mov r6,&MAC32L
mov r7,&MAC32H
add @r13,r12
mov r6,&OP2L
mov r7,&OP2H
/* COLUMN_END */
mov &RES0,2*4(r14)
add @r13,r12
mov &RES1,2*(4+1)(r14)
mov &RES2,&RES0
mov &RES3,&RES1
mov r12,&RES2
clr &RES3
clr r12
/* STEP_1 */
mov r4,&MAC32L
mov r5,&MAC32H
mov r10,&OP2L
mov r11,&OP2H
add &SUMEXT,r12
mov r10,&OP2L
mov r11,&OP2H
/* STEP_2MORE */
mov r6,&MAC32L
mov r7,&MAC32H
add @r13,r12
mov r8,&OP2L
mov r9,&OP2H
add &SUMEXT,r12
mov r8,&OP2L
mov r9,&OP2H
/* COLUMN_END */
mov &RES0,2*6(r14)
add @r13,r12
mov &RES1,2*(6+1)(r14)
mov &RES2,&RES0
mov &RES3,&RES1
mov r12,&RES2
clr &RES3
clr r12
/* STEP_1 */
mov r4,&MAC32L
mov r5,&MAC32H
mov 2*8(r15),&OP2L
mov 2*9(r15),&OP2H
add &SUMEXT,r12
mov 2*8(r15),&OP2L
mov 2*9(r15),&OP2H
/* STEP_2MORE */
mov r6,&MAC32L
mov r7,&MAC32H
add @r13,r12
mov r10,&OP2L
mov r11,&OP2H
add &SUMEXT,r12
mov r10,&OP2L
mov r11,&OP2H
/* SELF_STEP */
mov r8,&MAC32L
mov r9,&MAC32H
add @r13,r12
mov r8,&OP2L
mov r9,&OP2H
/* COLUMN_END */
mov &RES0,2*8(r14)
add @r13,r12
mov &RES1,2*(8+1)(r14)
mov &RES2,&RES0
mov &RES3,&RES1
mov r12,&RES2
clr &RES3
clr r12
mov 2*8(r15),r4
mov 2*(8+1)(r15),r5
/* STEP_1 */
mov r6,&MAC32L
mov r7,&MAC32H
mov r4,&OP2L
mov r5,&OP2H
add &SUMEXT,r12
mov r4,&OP2L
mov r5,&OP2H
/* STEP_2MORE */
mov r8,&MAC32L
mov r9,&MAC32H
add @r13,r12
mov r10,&OP2L
mov r11,&OP2H
add &SUMEXT,r12
mov r10,&OP2L
mov r11,&OP2H
/* COLUMN_END */
mov &RES0,2*10(r14)
add @r13,r12
mov &RES1,2*(10+1)(r14)
mov &RES2,&RES0
mov &RES3,&RES1
mov r12,&RES2
clr &RES3
clr r12
/* STEP_1 */
mov r8,&MAC32L
mov r9,&MAC32H
mov r4,&OP2L
mov r5,&OP2H
add &SUMEXT,r12
mov r4,&OP2L
mov r5,&OP2H
/* SELF_STEP */
mov r10,&MAC32L
mov r11,&MAC32H
add @r13,r12
mov r10,&OP2L
mov r11,&OP2H
/* COLUMN_END */
mov &RES0,2*12(r14)
add @r13,r12
mov &RES1,2*(12+1)(r14)
mov &RES2,&RES0
mov &RES3,&RES1
mov r12,&RES2
clr &RES3
clr r12
/* STEP_1 */
mov r10,&MAC32L
mov r11,&MAC32H
mov r4,&OP2L
mov r5,&OP2H
add &SUMEXT,r12
mov r4,&OP2L
mov r5,&OP2H
/* COLUMN_END */
mov &RES0,2*14(r14)
add @r13,r12
mov &RES1,2*(14+1)(r14)
mov &RES2,&RES0
mov &RES3,&RES1
mov r12,&RES2
clr &RES3
clr r12
/* SELF_STEP_1 */
mov r4,&MAC32L
mov r5,&MAC32H
mov r4,&OP2L
mov r5,&OP2H
/* COLUMN_END */
mov &RES0,2*16(r14)
add @r13,r12
mov &RES1,2*(16+1)(r14)
mov &RES2,&RES0
mov &RES3,&RES1
mov r12,&RES2
clr &RES3
clr r12
/* END */
mov &RES0,2*18(r14)
mov &RES1,2*(18+1)(r14)
pop r2
eint

@@ -1,170 +0,0 @@
; ------------------------------------------------------------------------
; Display the number in AL
; ------------------------------------------------------------------------
DispAL:
push ecx
push edx
push edi
mov edi, [dwDispPos]
mov ah, 0Fh ; 0000b: black background 1111b: white text
mov dl, al
shr al, 4
mov ecx, 2
.begin:
and al, 01111b
cmp al, 9
ja .1
add al, '0'
jmp .2
.1:
sub al, 0Ah
add al, 'A'
.2:
mov [gs:edi], ax
add edi, 2
mov al, dl
loop .begin
;add edi, 2
mov [dwDispPos], edi
pop edi
pop edx
pop ecx
ret
; End of DispAL ----------------------------------------------------------
; ------------------------------------------------------------------------
; Display an integer
; ------------------------------------------------------------------------
DispInt:
mov eax, [esp + 4]
shr eax, 24
call DispAL
mov eax, [esp + 4]
shr eax, 16
call DispAL
mov eax, [esp + 4]
shr eax, 8
call DispAL
mov eax, [esp + 4]
call DispAL
mov ah, 07h ; 0000b: black background 0111b: gray text
mov al, 'h'
push edi
mov edi, [dwDispPos]
mov [gs:edi], ax
add edi, 4
mov [dwDispPos], edi
pop edi
ret
; End of DispInt ---------------------------------------------------------
; ------------------------------------------------------------------------
; Display a string
; ------------------------------------------------------------------------
DispStr:
push ebp
mov ebp, esp
push ebx
push esi
push edi
mov esi, [ebp + 8] ; pszInfo
mov edi, [dwDispPos]
mov ah, 0Fh
.1:
lodsb
test al, al
jz .2
cmp al, 0Ah ; Is it a newline?
jnz .3
push eax
mov eax, edi
mov bl, 160
div bl
and eax, 0FFh
inc eax
mov bl, 160
mul bl
mov edi, eax
pop eax
jmp .1
.3:
mov [gs:edi], ax
add edi, 2
jmp .1
.2:
mov [dwDispPos], edi
pop edi
pop esi
pop ebx
pop ebp
ret
; End of DispStr ---------------------------------------------------------
; ------------------------------------------------------------------------
; Newline
; ------------------------------------------------------------------------
DispReturn:
push szReturn
call DispStr ;printf("\n");
add esp, 4
ret
; End of DispReturn ------------------------------------------------------
; ------------------------------------------------------------------------
; Memory copy, modeled on memcpy
; ------------------------------------------------------------------------
; void* MemCpy(void* es:pDest, void* ds:pSrc, int iSize);
; ------------------------------------------------------------------------
MemCpy:
push ebp
mov ebp, esp
push esi
push edi
push ecx
mov edi, [ebp + 8] ; Destination
mov esi, [ebp + 12] ; Source
mov ecx, [ebp + 16] ; Counter
.1:
cmp ecx, 0 ; check the counter
jz .2 ; exit the loop when the counter is zero
mov al, [ds:esi] ;
inc esi ;
; move byte by byte
mov byte [es:edi], al ;
inc edi ;
dec ecx ; decrement the counter
jmp .1 ; loop
.2:
mov eax, [ebp + 8] ; return value
pop ecx
pop edi
pop esi
mov esp, ebp
pop ebp
ret ; end of function, return
; End of MemCpy ----------------------------------------------------------

@@ -1,321 +0,0 @@
BLARGG_MACROS_INCLUDED = 1
; Allows extra error checking with modified version
; of ca65. Otherwise acts like a constant of 0.
ADDR = 0
; Switches to Segment and places Line there.
; Line can be an .align directive, .res, .byte, etc.
; Examples:
; seg_data BSS, .align 256
; seg_data RODATA, {message: .byte "Test",0}
.macro seg_data Segment, Line
.pushseg
.segment .string(Segment)
Line
.popseg
.endmacro
; Reserves Size bytes in Segment for Name.
; If Size is omitted, reserves one byte.
.macro seg_res Segment, Name, Size
.ifblank Size
seg_data Segment, Name: .res 1
.else
seg_data Segment, Name: .res Size
.endif
.endmacro
; Shortcuts for zeropage, bss, and stack
.define zp_res seg_res ZEROPAGE,
.define nv_res seg_res NVRAM,
.define bss_res seg_res BSS,
.define sp_res seg_res STACK,
.define zp_byte zp_res
; Copies byte from Src to Addr. If Src begins with #,
; it sets Addr to the immediate value.
; Out: A = byte copied
; Preserved: X, Y
.macro mov Addr, Src
lda Src
sta Addr
.endmacro
; Copies word from Src to Addr. If Src begins with #,
; it sets Addr to the immediate value.
; Out: A = high byte of word
; Preserved: X, Y
.macro movw Addr, Src
.if .match( .left( 1, {Src} ), # )
lda #<(.right( .tcount( {Src} )-1, {Src} ))
sta Addr
lda #>(.right( .tcount( {Src} )-1, {Src} ))
sta 1+(Addr)
.else
lda Src
sta Addr
lda 1+(Src)
sta 1+(Addr)
.endif
.endmacro
; Increments 16-bit value at Addr.
; Out: EQ/NE based on resulting 16-bit value
; Preserved: A, X, Y
.macro incw Addr
.local @Skip
inc Addr
bne @Skip
inc 1+(Addr)
@Skip:
.endmacro
; Adds Src to word at Addr.
; Out: A = high byte of result, carry set appropriately
; Preserved: X, Y
.macro addw Addr, Src
.if .match( .left( 1, {Src} ), # )
addw_ Addr,(.right( .tcount( {Src} )-1, {Src} ))
.else
lda Addr
clc
adc Src
sta Addr
lda 1+(Addr)
adc 1+(Src)
sta 1+(Addr)
.endif
.endmacro
.macro addw_ Addr, Imm
lda Addr
clc
adc #<Imm
sta Addr
;.if (Imm >> 8) <> 0
lda 1+(Addr)
adc #>Imm
sta 1+(Addr)
;.else
; .local @Skip
; bcc @Skip
; inc 1+(Addr)
;@Skip:
;.endif
.endmacro
; Splits list of words into tables of low and high bytes
; Example: split_words foo, {$1234, $5678}
; expands to:
; foo_l: $34, $78
; foo_h: $12, $56
; foo_count = 2
.macro split_words Label, Words
.ident (.concat (.string(Label), "_l")): .lobytes Words
.ident (.concat (.string(Label), "_h")): .hibytes Words
.ident (.concat (.string(Label), "_count")) = * - .ident (.concat (.string(Label), "_h"))
.endmacro
.macro SELECT Bool, True, False, Extra
.ifndef Bool
False Extra
.elseif Bool <> 0
True Extra
.else
False Extra
.endif
.endmacro
.macro DEFAULT Name, Value
.ifndef Name
Name = Value
.endif
.endmacro
.ifp02
; 6502 doesn't define these alternate names
.define blt bcc
.define bge bcs
.endif
.define jlt jcc
.define jge jcs
; Jxx Target = Bxx Target, except it can go farther than
; 128 bytes. Implemented via branch around a JMP.
; Don't use ca65's longbranch, because they fail for @labels
;.macpack longbranch
.macro jeq Target
bne *+5
jmp Target
.endmacro
.macro jne Target
beq *+5
jmp Target
.endmacro
.macro jmi Target
bpl *+5
jmp Target
.endmacro
.macro jpl Target
bmi *+5
jmp Target
.endmacro
.macro jcs Target
bcc *+5
jmp Target
.endmacro
.macro jcc Target
bcs *+5
jmp Target
.endmacro
.macro jvs Target
bvc *+5
jmp Target
.endmacro
.macro jvc Target
bvs *+5
jmp Target
.endmacro
; Passes constant data to routine in addr
; Preserved: A, X, Y
.macro jsr_with_addr routine,data
.local Addr
pha
lda #<Addr
sta addr
lda #>Addr
sta addr+1
pla
jsr routine
seg_data RODATA,{Addr: data}
.endmacro
; Calls routine multiple times, with A having the
; value 'start' the first time, 'start+step' the
; second time, up to 'end' for the last time.
.macro for_loop routine,start,end,step
.local @for_loop
lda #start
@for_loop:
pha
jsr routine
pla
clc
adc #step
cmp #<((end)+(step))
bne @for_loop
.endmacro
; Calls routine n times. The value of A in the routine
; counts from 0 to n-1.
.macro loop_n_times routine,n
for_loop routine,0,n-1,+1
.endmacro
; Same as for_loop, except uses 16-bit value in YX.
; -256 <= step <= 255
.macro for_loop16 routine,start,end,step
.if (step) < -256 || (step) > 255
.error "Step must be within -256 to 255"
.endif
.local @for_loop_skip
.local @for_loop
ldy #>(start)
lda #<(start)
@for_loop:
tax
pha
tya
pha
jsr routine
pla
tay
pla
clc
adc #step
.if (step) > 0
bcc @for_loop_skip
iny
.else
bcs @for_loop_skip
dey
.endif
@for_loop_skip:
cmp #<((end)+(step))
bne @for_loop
cpy #>((end)+(step))
bne @for_loop
.endmacro
; Stores byte at addr
; Preserved: X, Y
.macro setb addr, byte
lda #byte
sta addr
.endmacro
; Stores word at addr
; Preserved: X, Y
.macro setw addr, word
lda #<(word)
sta addr
lda #>(word)
sta addr+1
.endmacro
; Loads XY with 16-bit immediate or value at address
.macro ldxy Arg
.if .match( .left( 1, {Arg} ), # )
ldy #<(.right( .tcount( {Arg} )-1, {Arg} ))
ldx #>(.right( .tcount( {Arg} )-1, {Arg} ))
.else
ldy (Arg)
ldx (Arg)+1
.endif
.endmacro
; Increments XY as 16-bit register, in CONSTANT time.
; Z flag set based on entire result.
; Preserved: A
; Time: 7 clocks
.macro inxy
iny ; 2
beq *+4 ; 3
; -1
bne *+3 ; 3
; -1
inx ; 2
.endmacro
; Negates A and adds it to operand
.macro subaf Operand
eor #$FF
sec
adc Operand
.endmacro
; Initializes CPU registers to reasonable values
; Preserved: A, Y
.macro init_cpu_regs
sei
cld ; unnecessary on NES, but might help on clone
ldx #$FF
txs
.ifndef BUILD_NSF
inx
stx PPUCTRL
.endif
.endmacro

@@ -1,16 +0,0 @@
import ballerina.lang.messages;
import ballerina.net.http;
import ballerina.doc;
@doc:Description {value:"By default Ballerina assumes that the service is to be exposed via HTTP/1.1 using the system default port and that all requests coming to the HTTP server will be delivered to this service."}
service<http> helloWorld {
@doc:Description {value:"All resources are invoked with an argument of type message, the built-in reference type representing a network invocation."}
resource sayHello (message m) {
// Creates an empty message.
message response = {};
// A util method that can be used to set string payload.
messages:setStringPayload(response, "Hello, World!");
// Reply keyword sends the response back to the client.
reply response;
}
}

@@ -1,6 +0,0 @@
import ballerina.lang.system;
function main (string[] args) {
system:println("Hello, World!");
}

@@ -1,31 +0,0 @@
import ballerina.lang.system;
function main (string[] args) {
// JSON string value.
json j1 = "Apple";
system:println(j1);
// JSON number value.
json j2 = 5.36;
system:println(j2);
// JSON true value.
json j3 = true;
system:println(j3);
// JSON false value.
json j4 = false;
system:println(j4);
// JSON null value.
json j5 = null;
//JSON Objects.
json j6 = {name:"apple", color:"red", price:j2};
system:println(j6);
//JSON Arrays. They are arrays of any JSON value.
json j7 = [1, false, null, "foo",
{first:"John", last:"Pala"}];
system:println(j7);
}

@@ -1,28 +0,0 @@
import ballerina.lang.system;
function divideBy10 (int d) (int, int) {
return d / 10, d % 10;
}
function main (string[] args) {
//Here the variable type is inferred from the initial value. This is the same as "int k = 5";
var k = 5;
system:println(10 + k);
//Here the type of the 'strVar' is 'string'.
var strVar = "Hello!";
system:println(strVar);
//Multiple assignment with 'var' allows you to define the variable then and there.
//Variable type is inferred from the right-hand side.
var q, r = divideBy10(6);
system:println("06/10: " + "quotient=" + q + " " +
"remainder=" + r);
//To ignore a particular return value in a multiple assignment statement, use '_'.
var q1, _ = divideBy10(57);
system:println("57/10: " + "quotient=" + q1);
var _, r1 = divideBy10(9);
system:println("09/10: " + "remainder=" + r1);
}

@@ -1,26 +0,0 @@
import ballerina.lang.system;
function main (string[] args) {
// XML element. Can only have one root element.
xml x1 = xml `<book>The Lost World</book>`;
system:println(x1);
// XML text
xml x2 = xml `Hello, world!`;
system:println(x2);
// XML comment
xml x3 = xml `<!--I am a comment-->`;
system:println(x3);
// XML processing instruction
xml x4 = xml `<?target data?>`;
system:println(x4);
// Multiple XML items can be combined to form a sequence of XML. The resulting sequence is itself an XML value.
xml x5 = x1 + x2 + x3 + x4;
system:println("\nResulting XML sequence:");
system:println(x5);
}

Image file: before 5.4 KiB, after 5.4 KiB (preview not shown).

Some files were not shown because too many files have changed in this diff.