Mirror of https://github.com/KevinMidboe/linguist.git (synced 2025-10-29 17:50:22 +00:00)

Compare commits: revert-384...lildude/ex (57 commits)
Commits (SHA1s, in order):

fb77fdcd53, 5306d65e79, 24fa4c82f3, 2abf488e65, 812797b51d, dc32876113,
a18ad1d489, 25ac140d58, f7835f7119, a7f835a653, 6220286f42, 15e2b74dec,
969333610c, 8438c6cd3e, 60f748d47b, 8da6ddf9d9, fef7a12c85, b80ca35b75,
c8171322f5, 4c1e61892a, 4db659dede, ed73a72cbe, 512f077da8, 3260b06241,
ef3b0b6af3, 434023460e, 8e628ecc36, ca714340e8, a4e6fc78c8, db1d4f7893,
bee7e55618, 5fbe9c0902, a840668599, 38cb8871ba, d0b906f128, d4c2d83af9,
0b81b21983, 1a769c4665, e7e64bf39a, e4b9430024, a76805e40d, 8d27845f8c,
9a8ab45b6f, e335d48625, 4f46155c05, 38901d51d2, ded0dc74e0, c5d1bb5370,
c8ca48856b, 7be6fb0138, 8c516655bc, 9dceffce2f, 33be70eb28, 9c4dc3047c,
d8e5f3c965, 71bf640a47, c9b3d19c6f
.github/ISSUE_TEMPLATE.md (vendored, new file, 26 lines)

@@ -0,0 +1,26 @@
<!--- Provide a general summary of the issue in the Title above -->

## Preliminary Steps

Please confirm you have...

- [ ] reviewed [How Linguist Works](https://github.com/github/linguist#how-linguist-works),
- [ ] reviewed the [Troubleshooting](https://github.com/github/linguist#troubleshooting) docs,
- [ ] considered implementing an [override](https://github.com/github/linguist#overrides),
- [ ] verified an issue has not already been logged for your issue ([linguist issues](https://github.com/issues?utf8=%E2%9C%93&q=is%3Aissue+repo%3Agithub/linguist)).

<!-- Please review these preliminary steps before logging your issue. You may find the information referenced answers or explains the behaviour you are seeing. It'll help us to know you've reviewed this information. -->

## Problem Description

<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->

### URL of the affected repository:

### Last modified on:
<!-- YYYY-MM-DD -->

### Expected language:
<!-- expected language -->

### Detected language:
<!-- detected language -->
.github/PULL_REQUEST_TEMPLATE.md (vendored, new file, 46 lines)

@@ -0,0 +1,46 @@
<!--- Briefly describe what you're changing. -->

## Description
<!--- If necessary, go into depth of what this pull request is doing. -->

## Checklist:
<!--- Go over all the following points, and put an `x` in all the boxes that apply. -->
<!--- If you're unsure about any of these, don't hesitate to ask. We're here to help! -->
- [ ] **I am associating a language with a new file extension.**
  - [ ] The new extension is used in hundreds of repositories on GitHub.com
    - Search results for each extension:
      <!-- Replace FOOBAR with the new extension, and KEYWORDS with keywords unique to the language. Repeat for each extension added. -->
      - https://github.com/search?utf8=%E2%9C%93&type=Code&ref=searchresults&q=extension%3AFOOBAR+KEYWORDS+NOT+nothack
  - [ ] I have included a real-world usage sample for all extensions added in this PR:
    - Sample source(s):
      - [URL to each sample source, if applicable]
    - Sample license(s):
  - [ ] I have included a change to the heuristics to distinguish my language from others using the same extension.

- [ ] **I am adding a new language.**
  - [ ] The extension of the new language is used in hundreds of repositories on GitHub.com.
    - Search results for each extension:
      <!-- Replace FOOBAR with the new extension, and KEYWORDS with keywords unique to the language. Repeat for each extension added. -->
      - https://github.com/search?utf8=%E2%9C%93&type=Code&ref=searchresults&q=extension%3AFOOBAR+KEYWORDS+NOT+nothack
  - [ ] I have included a real-world usage sample for all extensions added in this PR:
    - Sample source(s):
      - [URL to each sample source, if applicable]
    - Sample license(s):
  - [ ] I have included a syntax highlighting grammar.
  - [ ] I have included a change to the heuristics to distinguish my language from others using the same extension.

- [ ] **I am fixing a misclassified language**
  - [ ] I have included a new sample for the misclassified language:
    - Sample source(s):
      - [URL to each sample source, if applicable]
    - Sample license(s):
  - [ ] I have included a change to the heuristics to distinguish my language from others using the same extension.

- [ ] **I am changing the source of a syntax highlighting grammar**
  <!-- Update the Lightshow URLs below to show the new and old grammars in action. -->
  - Old: https://github-lightshow.herokuapp.com/
  - New: https://github-lightshow.herokuapp.com/

- [ ] **I am adding new or changing current functionality**
  <!-- This includes modifying the vendor, documentation, and generated lists. -->
  - [ ] I have added or updated the tests for the new or changed functionality.
.gitmodules (vendored, 40 lines changed)

@@ -301,9 +301,6 @@
[submodule "vendor/grammars/make.tmbundle"]
	path = vendor/grammars/make.tmbundle
	url = https://github.com/textmate/make.tmbundle
[submodule "vendor/grammars/matlab.tmbundle"]
	path = vendor/grammars/matlab.tmbundle
	url = https://github.com/textmate/matlab.tmbundle
[submodule "vendor/grammars/maven.tmbundle"]
	path = vendor/grammars/maven.tmbundle
	url = https://github.com/textmate/maven.tmbundle
@@ -412,9 +409,6 @@
[submodule "vendor/grammars/sublime-bsv"]
	path = vendor/grammars/sublime-bsv
	url = https://github.com/thotypous/sublime-bsv
[submodule "vendor/grammars/Sublime-HTTP"]
	path = vendor/grammars/Sublime-HTTP
	url = https://github.com/httpspec/sublime-highlighting
[submodule "vendor/grammars/sass-textmate-bundle"]
	path = vendor/grammars/sass-textmate-bundle
	url = https://github.com/nathos/sass-textmate-bundle
@@ -439,9 +433,6 @@
[submodule "vendor/grammars/sublime-golo"]
	path = vendor/grammars/sublime-golo
	url = https://github.com/TypeUnsafe/sublime-golo
[submodule "vendor/grammars/JSyntax"]
	path = vendor/grammars/JSyntax
	url = https://github.com/bcj/JSyntax
[submodule "vendor/grammars/TXL"]
	path = vendor/grammars/TXL
	url = https://github.com/MikeHoffert/Sublime-Text-TXL-syntax
@@ -748,9 +739,6 @@
[submodule "vendor/grammars/language-emacs-lisp"]
	path = vendor/grammars/language-emacs-lisp
	url = https://github.com/Alhadis/language-emacs-lisp
[submodule "vendor/grammars/language-babel"]
	path = vendor/grammars/language-babel
	url = https://github.com/github-linguist/language-babel
[submodule "vendor/CodeMirror"]
	path = vendor/CodeMirror
	url = https://github.com/codemirror/CodeMirror
@@ -868,6 +856,9 @@
[submodule "vendor/grammars/language-ballerina"]
	path = vendor/grammars/language-ballerina
	url = https://github.com/ballerinalang/plugin-vscode
[submodule "vendor/grammars/language-yara"]
	path = vendor/grammars/language-yara
	url = https://github.com/blacktop/language-yara
[submodule "vendor/grammars/language-ruby"]
	path = vendor/grammars/language-ruby
	url = https://github.com/atom/language-ruby
@@ -883,3 +874,28 @@
[submodule "vendor/grammars/atom-language-julia"]
	path = vendor/grammars/atom-language-julia
	url = https://github.com/JuliaEditorSupport/atom-language-julia
[submodule "vendor/grammars/language-cwl"]
	path = vendor/grammars/language-cwl
	url = https://github.com/manabuishii/language-cwl
[submodule "vendor/grammars/Syntax-highlighting-for-PostCSS"]
	path = vendor/grammars/Syntax-highlighting-for-PostCSS
	url = https://github.com/hudochenkov/Syntax-highlighting-for-PostCSS
[submodule "vendor/grammars/MATLAB-Language-grammar"]
	path = vendor/grammars/MATLAB-Language-grammar
	url = https://github.com/mathworks/MATLAB-Language-grammar
[submodule "vendor/grammars/javadoc.tmbundle"]
	path = vendor/grammars/javadoc.tmbundle
	url = https://github.com/textmate/javadoc.tmbundle
[submodule "vendor/grammars/JSyntax"]
	path = vendor/grammars/JSyntax
	url = https://github.com/tikkanz/JSyntax
[submodule "vendor/grammars/Sublime-HTTP"]
	path = vendor/grammars/Sublime-HTTP
	url = https://github.com/samsalisbury/Sublime-HTTP
[submodule "vendor/grammars/atom-language-nextflow"]
	path = vendor/grammars/atom-language-nextflow
	url = https://github.com/nextflow-io/atom-language-nextflow
[submodule "vendor/grammars/language-babel"]
	path = vendor/grammars/language-babel
	url = https://github.com/lildude/language-babel
	branch = make-pcre-friendly
CONTRIBUTING.md (152 lines changed)

@@ -4,64 +4,9 @@ Hi there! We're thrilled that you'd like to contribute to this project. Your hel
The majority of contributions won't need to touch any Ruby code at all.

## Adding an extension to a language
## Getting started

We try only to add new extensions once they have some usage on GitHub. In most cases we prefer that extensions be in use in hundreds of repositories before supporting them in Linguist.

To add support for a new extension:

1. Add your extension to the language entry in [`languages.yml`][languages], keeping the extensions in alphabetical order.
1. Add at least one sample for your extension to the [samples directory][samples] in the correct subdirectory.
1. Open a pull request, linking to a [GitHub search result](https://github.com/search?utf8=%E2%9C%93&q=extension%3Aboot+NOT+nothack&type=Code&ref=searchresults) showing in-the-wild usage.

In addition, if this extension is already listed in [`languages.yml`][languages] then sometimes a few more steps will need to be taken:

1. Make sure that example `.yourextension` files are present in the [samples directory][samples] for each language that uses `.yourextension`.
1. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.yourextension` files (ping **@lildude** to help with this) to ensure we're not misclassifying files.
1. If the Bayesian classifier does a bad job with the sample `.yourextension` files then a [heuristic](https://github.com/github/linguist/blob/master/lib/linguist/heuristics.rb) may need to be written to help.

## Adding a language

We try only to add languages once they have some usage on GitHub. In most cases we prefer that each new file extension be in use in hundreds of repositories before supporting them in Linguist.

To add support for a new language:

1. Add an entry for your language to [`languages.yml`][languages]. Omit the `language_id` field for now.
1. Add a grammar for your language: `script/add-grammar https://github.com/JaneSmith/MyGrammar`. Please only add grammars that have [one of these licenses][licenses].
1. Add samples for your language to the [samples directory][samples] in the correct subdirectory.
1. Add a `language_id` for your language using `script/set-language-ids`. **You should only ever need to run `script/set-language-ids --update`. Anything other than this risks breaking GitHub search :cry:**
1. Open a pull request, linking to a [GitHub search result](https://github.com/search?utf8=%E2%9C%93&q=extension%3Aboot+NOT+nothack&type=Code&ref=searchresults) showing in-the-wild usage.

In addition, if your new language defines an extension that's already listed in [`languages.yml`][languages] (such as `.foo`) then sometimes a few more steps will need to be taken:

1. Make sure that example `.foo` files are present in the [samples directory][samples] for each language that uses `.foo`.
1. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.foo` files (ping **@lildude** to help with this) to ensure we're not misclassifying files.
1. If the Bayesian classifier does a bad job with the sample `.foo` files then a [heuristic](https://github.com/github/linguist/blob/master/lib/linguist/heuristics.rb) may need to be written to help.

Remember, the goal here is to try and avoid false positives!

## Fixing a misclassified language

Most languages are detected by their file extension defined in [languages.yml][languages]. For disambiguating between files with common extensions, linguist applies some [heuristics](/lib/linguist/heuristics.rb) and a [statistical classifier](lib/linguist/classifier.rb). This process can help differentiate between, for example, `.h` files which could be either C, C++, or Obj-C.

Misclassifications can often be solved by either adding a new filename or extension for the language or adding more [samples][samples] to make the classifier smarter.

## Fixing syntax highlighting

Syntax highlighting in GitHub is performed using TextMate-compatible grammars. These are the same grammars that TextMate, Sublime Text and Atom use. Every language in [languages.yml][languages] is mapped to its corresponding TM `scope`. This scope will be used when picking up a grammar for highlighting.

Assuming your code is being detected as the right language, in most cases this is due to a bug in the language grammar rather than a bug in Linguist. [`grammars.yml`][grammars] lists all the grammars we use for syntax highlighting on github.com. Find the one corresponding to your code's programming language and submit a bug report upstream. If you can, try to reproduce the highlighting problem in the text editor that the grammar is designed for (TextMate, Sublime Text, or Atom) and include that information in your bug report.

You can also try to fix the bug yourself and submit a Pull Request. [TextMate's documentation](https://manual.macromates.com/en/language_grammars) offers a good introduction on how to work with TextMate-compatible grammars. You can test grammars using [Lightshow](https://github-lightshow.herokuapp.com).

Once the bug has been fixed upstream, we'll pick it up for GitHub in the next release of Linguist.

## Testing

For development you are going to want to check out the source. To get it, clone the repo and run [Bundler](http://gembundler.com/) to install its dependencies.
Before you can start contributing to Linguist, you'll need to set up your environment first. Clone the repo and run `script/bootstrap` to install its dependencies.

    git clone https://github.com/github/linguist.git
    cd linguist/
@@ -77,7 +22,91 @@ To run Linguist from the cloned repository:

    bundle exec bin/linguist --breakdown

To run the tests:
### Dependencies

Linguist uses the [`charlock_holmes`](https://github.com/brianmario/charlock_holmes) character encoding detection library which in turn uses [ICU](http://site.icu-project.org/), and the libgit2 bindings for Ruby provided by [`rugged`](https://github.com/libgit2/rugged). These components have their own dependencies - `icu4c`, and `cmake` and `pkg-config` respectively - which you may need to install before you can install Linguist.

For example, on macOS with [Homebrew](http://brew.sh/): `brew install cmake pkg-config icu4c` and on Ubuntu: `apt-get install cmake pkg-config libicu-dev`.

## Adding an extension to a language

We try only to add new extensions once they have some usage on GitHub. In most cases we prefer that extensions be in use in hundreds of repositories before supporting them in Linguist.

To add support for a new extension:

1. Add your extension to the language entry in [`languages.yml`][languages], keeping the extensions in alphabetical and case-sensitive (uppercase before lowercase) order, with the exception of the primary extension; the primary extension should be first.
1. Add at least one sample for your extension to the [samples directory][samples] in the correct subdirectory. We'd prefer examples of real-world code showing common usage. The more representative of the structure of the language, the better.
1. Open a pull request, linking to a [GitHub search result](https://github.com/search?utf8=%E2%9C%93&q=extension%3Aboot+NOT+nothack&type=Code&ref=searchresults) showing in-the-wild usage.
   If you are adding a sample, please state clearly the license covering the code in the sample, and if possible, link to the original source of the sample.

Additionally, if this extension is already listed in [`languages.yml`][languages] and associated with another language, then sometimes a few more steps will need to be taken:

1. Make sure that example `.yourextension` files are present in the [samples directory][samples] for each language that uses `.yourextension`.
1. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.yourextension` files (ping **@lildude** to help with this) to ensure we're not misclassifying files.
1. If the Bayesian classifier does a bad job with the sample `.yourextension` files then a [heuristic](https://github.com/github/linguist/blob/master/lib/linguist/heuristics.rb) may need to be written to help.

## Adding a language

We try only to add languages once they have some usage on GitHub. In most cases we prefer that each new file extension be in use in hundreds of repositories before supporting them in Linguist.

To add support for a new language:

1. Add an entry for your language to [`languages.yml`][languages]. Omit the `language_id` field for now.
1. Add a syntax-highlighting grammar for your language using: `script/add-grammar https://github.com/JaneSmith/MyGrammar`
   This command will analyze the grammar and, if no problems are found, add it to the repository. If problems are found, please report them to the grammar maintainer as you will not be able to add the grammar if problems are found.
   **Please only add grammars that have [one of these licenses][licenses].**
1. Add samples for your language to the [samples directory][samples] in the correct subdirectory.
1. Add a `language_id` for your language using `script/set-language-ids`.
   **You should only ever need to run `script/set-language-ids --update`. Anything other than this risks breaking GitHub search :cry:**
1. Open a pull request, linking to a [GitHub search result](https://github.com/search?utf8=%E2%9C%93&q=extension%3Aboot+NOT+nothack&type=Code&ref=searchresults) showing in-the-wild usage.
   Please state clearly the license covering the code in the samples. Link directly to the original source if possible.

In addition, if your new language defines an extension that's already listed in [`languages.yml`][languages] (such as `.foo`) then sometimes a few more steps will need to be taken:

1. Make sure that example `.foo` files are present in the [samples directory][samples] for each language that uses `.foo`.
1. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.foo` files (ping **@lildude** to help with this) to ensure we're not misclassifying files.
1. If the Bayesian classifier does a bad job with the sample `.foo` files then a [heuristic](https://github.com/github/linguist/blob/master/lib/linguist/heuristics.rb) may need to be written to help.

Remember, the goal here is to try and avoid false positives!

## Fixing a misclassified language

Most languages are detected by their file extension defined in [`languages.yml`][languages]. For disambiguating between files with common extensions, Linguist applies some [heuristics](/lib/linguist/heuristics.rb) and a [statistical classifier](lib/linguist/classifier.rb). This process can help differentiate between, for example, `.h` files which could be either C, C++, or Obj-C.

Misclassifications can often be solved by either adding a new filename or extension for the language or adding more [samples][samples] to make the classifier smarter.
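To make that mechanism concrete, here is a hedged sketch of what such a heuristic looks like: a hypothetical `.h` rule written in the style of the `disambiguate` blocks that appear in the `heuristics.rb` diff further down this page (not the actual rule shipped in Linguist):

```ruby
# Hypothetical sketch: distinguish languages sharing ".h" by content.
disambiguate ".h" do |data|
  if /^\s*#\s*include <(cstdint|iostream)>/.match(data)
    Language["C++"]
  elsif /^\s*@(interface|property|end)\b/.match(data)
    Language["Objective-C"]
  else
    Language["C"]
  end
end
```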
## Fixing syntax highlighting

Syntax highlighting in GitHub is performed using TextMate-compatible grammars. These are the same grammars that TextMate, Sublime Text and Atom use. Every language in [`languages.yml`][languages] is mapped to its corresponding TextMate `scopeName`. This scope name will be used when picking up a grammar for highlighting.

Assuming your code is being detected as the right language, in most cases syntax highlighting problems are due to a bug in the language grammar rather than a bug in Linguist. [`vendor/README.md`][grammars] lists all the grammars we use for syntax highlighting on GitHub.com. Find the one corresponding to your code's programming language and submit a bug report upstream. If you can, try to reproduce the highlighting problem in the text editor that the grammar is designed for (TextMate, Sublime Text, or Atom) and include that information in your bug report.

You can also try to fix the bug yourself and submit a Pull Request. [TextMate's documentation](https://manual.macromates.com/en/language_grammars) offers a good introduction on how to work with TextMate-compatible grammars. You can test grammars using [Lightshow](https://github-lightshow.herokuapp.com).

Once the bug has been fixed upstream, we'll pick it up for GitHub in the next release of Linguist.

## Changing the source of a syntax highlighting grammar

We'd like to ensure Linguist and GitHub.com are using the latest and greatest grammars that are consistent with the current usage but understand that sometimes a grammar can lag behind the evolution of a language or even stop being developed. This often results in someone grasping the opportunity to create a newer, better, and more actively maintained grammar, and we'd love to use it and pass on its functionality to our users.

Switching the source of a grammar is really easy:

    script/add-grammar --replace MyGrammar https://github.com/PeterPan/MyGrammar

This command will analyze the grammar and, if no problems are found, add it to the repository. If problems are found, please report these problems to the grammar maintainer as you will not be able to add the grammar if problems are found.

**Please only add grammars that have [one of these licenses][licenses].**

Please then open a pull request for the updated grammar.

## Testing

You can run the tests locally with:

    bundle exec rake test

@@ -93,6 +122,7 @@ Linguist is maintained with :heart: by:
- **@BenEddy** (GitHub staff)
- **@Caged** (GitHub staff)
- **@grantr** (GitHub staff)
- **@kivikakk** (GitHub staff)
- **@larsbrinkhoff**
- **@lildude** (GitHub staff)
- **@pchaigno**
@@ -101,7 +131,7 @@ Linguist is maintained with :heart: by:

As Linguist is a production dependency for GitHub we have a couple of workflow restrictions:

- Anyone with commit rights can merge Pull Requests provided that there is a :+1: from a GitHub member of staff
- Anyone with commit rights can merge Pull Requests provided that there is a :+1: from a GitHub staff member.
- Releases are performed by GitHub staff so we can ensure GitHub.com always stays up to date with the latest release of Linguist and there are no regressions in production.

### Releasing
@@ -122,9 +152,11 @@ If you are the current maintainer of this gem:
1. Test behavior locally, branch deploy, whatever needs to happen
1. Merge github/linguist PR
1. Tag and push: `git tag vx.xx.xx; git push --tags`
1. Create a GitHub release with the pushed tag (https://github.com/github/linguist/releases/new)
1. Build a grammars tarball (`./script/build-grammars-tarball`) and attach it to the GitHub release
1. Push to rubygems.org -- `gem push github-linguist-3.0.0.gem`

[grammars]: /grammars.yml
[grammars]: /vendor/README.md
[languages]: /lib/linguist/languages.yml
[licenses]: https://github.com/github/linguist/blob/257425141d4e2a5232786bf0b13c901ada075f93/vendor/licenses/config.yml#L2-L11
[samples]: /samples
README.md (174 lines changed)

@@ -1,38 +1,130 @@
# Linguist

[Build Status](https://travis-ci.org/github/linguist)

[issues]: https://github.com/github/linguist/issues
[new-issue]: https://github.com/github/linguist/issues/new

This library is used on GitHub.com to detect blob languages, ignore binary or vendored files, suppress generated files in diffs, and generate language breakdown graphs.

See [Troubleshooting](#troubleshooting) and [`CONTRIBUTING.md`](/CONTRIBUTING.md) before filing an issue or creating a pull request.
See [Troubleshooting](#troubleshooting) and [`CONTRIBUTING.md`](CONTRIBUTING.md) before filing an issue or creating a pull request.

## How Linguist works

Linguist takes the list of languages it knows from [`languages.yml`](/lib/linguist/languages.yml) and uses a number of methods to try and determine the language used by each file, and the overall repository breakdown.

Linguist starts by going through all the files in a repository and excludes all files that it determines to be binary data, [vendored code](#vendored-code), [generated code](#generated-code), [documentation](#documentation), or are defined as `data` (e.g. SQL) or `prose` (e.g. Markdown) languages, whilst taking into account any [overrides](#overrides).

If an [explicit language override](#using-gitattributes) has been used, that language is used for the matching files. The language of each remaining file is then determined using the following strategies, in order, with each step either identifying the precise language or reducing the number of likely languages passed down to the next strategy:

- Vim or Emacs modeline,
- commonly used filename,
- shell shebang,
- file extension,
- heuristics,
- naïve Bayesian classification

The result of this analysis is used to produce the language stats bar which displays the language percentages for the files in the repository. The percentages are calculated based on the bytes of code for each language as reported by the [List Languages](https://developer.github.com/v3/repos/#list-languages) API.

[language stats bar image]
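For intuition, the percentages in the stats bar are simply each language's byte count over the total. A minimal sketch (hypothetical helper, not part of Linguist's public API):

```ruby
# Hypothetical helper: per-language byte counts -> stats-bar percentages.
def language_percentages(languages)
  total = languages.values.sum.to_f
  languages.map { |name, bytes| [name, (100.0 * bytes / total).round(2)] }.to_h
end

language_percentages("Ruby" => 75_000, "C" => 25_000)
#=> {"Ruby"=>75.0, "C"=>25.0}
```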
### How Linguist works on GitHub.com

When you push changes to a repository on GitHub.com, a low priority background job is enqueued to analyze your repository as explained above. The results of this analysis are cached for the lifetime of your repository and are only updated when the repository is updated. As this analysis is performed by a low priority background job, it can take a while, particularly during busy periods, for your language statistics bar to reflect your changes.

## Usage

Install the gem:

    $ gem install github-linguist

#### Dependencies

Linguist uses the [`charlock_holmes`](https://github.com/brianmario/charlock_holmes) character encoding detection library which in turn uses [ICU](http://site.icu-project.org/), and the libgit2 bindings for Ruby provided by [`rugged`](https://github.com/libgit2/rugged). These components have their own dependencies - `icu4c`, and `cmake` and `pkg-config` respectively - which you may need to install before you can install Linguist.

For example, on macOS with [Homebrew](http://brew.sh/): `brew install cmake pkg-config icu4c` and on Ubuntu: `apt-get install cmake pkg-config libicu-dev`.

### Application usage

Linguist can be used in your application as follows:

```ruby
require 'rugged'
require 'linguist'

repo = Rugged::Repository.new('.')
project = Linguist::Repository.new(repo, repo.head.target_id)
project.language #=> "Ruby"
project.languages #=> { "Ruby" => 119387 }
```

### Command line usage

A repository's language stats can also be assessed from the command line using the `linguist` executable. Without any options, `linguist` will output the breakdown that correlates to what is shown in the language stats bar. The `--breakdown` flag will additionally show the breakdown of files by language.

You can try running `linguist` on the root directory in this repository itself:

```console
$ bundle exec bin/linguist --breakdown
68.57% Ruby
22.90% C
6.93% Go
1.21% Lex
0.39% Shell

Ruby:
Gemfile
Rakefile
bin/git-linguist
bin/linguist
ext/linguist/extconf.rb
github-linguist.gemspec
lib/linguist.rb
…
```

## Troubleshooting

### My repository is detected as the wrong language

[language stats bar image]

The language stats bar displays language percentages for the files in the repository. The percentages are calculated based on the bytes of code for each language as reported by the [List Languages](https://developer.github.com/v3/repos/#list-languages) API. If the bar is reporting a language that you don't expect:
If the language stats bar is reporting a language that you don't expect:

1. Click on the name of the language in the stats bar to see a list of the files that are identified as that language.
1. If you see files that you didn't write, consider moving the files into one of the [paths for vendored code](/lib/linguist/vendor.yml), or use the [manual overrides](#overrides) feature to ignore them.
1. If the files are being misclassified, search for [open issues][issues] to see if anyone else has already reported the issue. Any information you can add, especially links to public repositories, is helpful.
   Keep in mind this performs a search so the [code search restrictions](https://help.github.com/articles/searching-code/#considerations-for-code-search) may result in files identified in the language statistics not appearing in the search results. [Installing Linguist locally](#usage) and running it from the [command line](#command-line-usage) will give you accurate results.
1. If you see files that you didn't write in the search results, consider moving the files into one of the [paths for vendored code](/lib/linguist/vendor.yml), or use the [manual overrides](#overrides) feature to ignore them.
1. If the files are misclassified, search for [open issues][issues] to see if anyone else has already reported the issue. Any information you can add, especially links to public repositories, is helpful. You can also use the [manual overrides](#overrides) feature to correctly classify them in your repository.
1. If there are no reported issues of this misclassification, [open an issue][new-issue] and include a link to the repository or a sample of the code that is being misclassified.

Keep in mind that the repository language stats are only [updated when you push changes](#how-linguist-works-on-githubcom), and the results are cached for the lifetime of your repository. If you have not made any changes to your repository in a while, you may find pushing another change will correct the stats.

### My repository isn't showing my language

Linguist does not consider [vendored code](#vendored-code), [generated code](#generated-code), [documentation](#documentation), or `data` (e.g. SQL) or `prose` (e.g. Markdown) languages (as defined by the `type` attribute in [`languages.yml`](/lib/linguist/languages.yml)) when calculating the repository language statistics.

If the language statistics bar is not showing your language at all, it could be for a few reasons:

1. Linguist doesn't know about your language.
1. The extension you have chosen is not associated with your language in [`languages.yml`](/lib/linguist/languages.yml).
1. All the files in your repository fall into one of the categories listed above that Linguist excludes by default.

If Linguist doesn't know about the language or the extension you're using, consider [contributing](CONTRIBUTING.md) to Linguist by opening a pull request to add support for your language or extension. For everything else, you can use the [manual overrides](#overrides) feature to tell Linguist to include your files in the language statistics.

### There's a problem with the syntax highlighting of a file

Linguist detects the language of a file but the actual syntax-highlighting is powered by a set of language grammars which are included in this project as a set of submodules [and may be found here](https://github.com/github/linguist/blob/master/vendor/README.md).
Linguist detects the language of a file but the actual syntax-highlighting is powered by a set of language grammars which are included in this project as a set of submodules [as listed here](/vendor/README.md).

If you experience an issue with the syntax-highlighting on GitHub, **please report the issue to the upstream grammar repository, not here.** Grammars are updated every time we build the Linguist gem so upstream bug fixes are automatically incorporated as they are fixed.
If you experience an issue with the syntax-highlighting on GitHub, **please report the issue to the upstream grammar repository, not here.** Grammars are updated every time we build the Linguist gem and so upstream bug fixes are automatically incorporated as they are fixed.

## Overrides

Linguist supports a number of different custom overrides strategies for language definitions and vendored paths.
Linguist supports a number of different custom override strategies for language definitions and file paths.

### Using gitattributes

Add a `.gitattributes` file to your project and use standard git-style path matchers for the files you want to override to set `linguist-documentation`, `linguist-language`, `linguist-vendored`, and `linguist-generated`. `.gitattributes` will be used to determine language statistics and will be used to syntax highlight files. You can also manually set syntax highlighting using [Vim or Emacs modelines](#using-emacs-or-vim-modelines).
Add a `.gitattributes` file to your project and use standard git-style path matchers for the files you want to override using the `linguist-documentation`, `linguist-language`, `linguist-vendored`, `linguist-generated` and `linguist-detectable` attributes. `.gitattributes` will be used to determine language statistics and will be used to syntax highlight files. You can also manually set syntax highlighting using [Vim or Emacs modelines](#using-emacs-or-vim-modelines).

```
$ cat .gitattributes
```

@@ -41,7 +133,7 @@ $ cat .gitattributes

#### Vendored code

Checking code you didn't write, such as JavaScript libraries, into your git repo is a common practice, but this often inflates your project's language stats and may even cause your project to be labeled as another language. By default, Linguist treats all of the paths defined in [lib/linguist/vendor.yml](https://github.com/github/linguist/blob/master/lib/linguist/vendor.yml) as vendored and therefore doesn't include them in the language statistics for a repository.
Checking code you didn't write, such as JavaScript libraries, into your git repo is a common practice, but this often inflates your project's language stats and may even cause your project to be labeled as another language. By default, Linguist treats all of the paths defined in [`vendor.yml`](/lib/linguist/vendor.yml) as vendored and therefore doesn't include them in the language statistics for a repository.

Use the `linguist-vendored` attribute to vendor or un-vendor paths.

@@ -53,7 +145,7 @@ jquery.js linguist-vendored=false

#### Documentation

Just like vendored files, Linguist excludes documentation files from your project's language stats. [lib/linguist/documentation.yml](lib/linguist/documentation.yml) lists common documentation paths and excludes them from the language statistics for your repository.
Just like vendored files, Linguist excludes documentation files from your project's language stats. [`documentation.yml`](/lib/linguist/documentation.yml) lists common documentation paths and excludes them from the language statistics for your repository.

Use the `linguist-documentation` attribute to mark or unmark paths as documentation.

@@ -65,13 +157,28 @@ docs/formatter.rb linguist-documentation=false

#### Generated code

Not all plain text files are true source files. Generated files like minified js and compiled CoffeeScript can be detected and excluded from language stats. As an added bonus, unlike vendored and documentation files, these files are suppressed in diffs.
Not all plain text files are true source files. Generated files like minified JavaScript and compiled CoffeeScript can be detected and excluded from language stats. As an added bonus, unlike vendored and documentation files, these files are suppressed in diffs. [`generated.rb`](/lib/linguist/generated.rb) lists common generated paths and excludes them from the language statistics of your repository.

Use the `linguist-generated` attribute to mark or unmark paths as generated.

```
$ cat .gitattributes
Api.elm linguist-generated=true
```

#### Detectable

Only programming languages are included in the language statistics. Languages of a different type (as defined in [`languages.yml`](/lib/linguist/languages.yml)) are not "detectable", causing them not to be included in the language statistics.

Use the `linguist-detectable` attribute to mark or unmark paths as detectable.

```
$ cat .gitattributes
*.kicad_pcb linguist-detectable=true
*.sch linguist-detectable=true
tools/export_bom.py linguist-detectable=false
```

### Using Emacs or Vim modelines

If you do not want to use `.gitattributes` to override the syntax highlighting used on GitHub.com, you can use Vim or Emacs style modelines to set the language for a single file. Modelines can be placed anywhere within a file and are respected when determining how to syntax-highlight a file on GitHub.com.

@@ -90,52 +197,15 @@ vim: set ft=cpp:

```
-*- mode: php;-*-
```

## Usage

Install the gem:

```
$ gem install github-linguist
```

Then use it in your application:

```ruby
require 'rugged'
require 'linguist'

repo = Rugged::Repository.new('.')
project = Linguist::Repository.new(repo, repo.head.target_id)
project.language #=> "Ruby"
project.languages #=> { "Ruby" => 119387 }
```

These stats are also printed out by the `linguist` executable. You can use the `--breakdown` flag, and the binary will also output the breakdown of files by language.

You can try running `linguist` on the root directory in this repository itself:

```
$ bundle exec linguist --breakdown

100.00% Ruby

Ruby:
Gemfile
Rakefile
bin/linguist
github-linguist.gemspec
lib/linguist.rb
…
```

## Contributing

Please check out our [contributing guidelines](CONTRIBUTING.md).

## License

The language grammars included in this gem are covered by their repositories'
respective licenses. `grammars.yml` specifies the repository for each grammar.
respective licenses. [`vendor/README.md`](/vendor/README.md) lists the repository for each grammar.

All other files are covered by the MIT license, see `LICENSE`.
Rakefile (9 lines changed)

@@ -47,21 +47,16 @@ task :samples => :compile do
  File.write 'lib/linguist/samples.json', json
end

FLEX_MIN_VER = [2, 5, 39]
task :flex do
  if `flex -V` !~ /^flex (\d+)\.(\d+)\.(\d+)/
  if `flex -V` !~ /^flex \d+\.\d+\.\d+/
    fail "flex not detected"
  end
  maj, min, rev = $1.to_i, $2.to_i, $3.to_i
  if maj < FLEX_MIN_VER[0] || (maj == FLEX_MIN_VER[0] && (min < FLEX_MIN_VER[1] || (min == FLEX_MIN_VER[1] && rev < FLEX_MIN_VER[2])))
    fail "building linguist's lexer requires at least flex #{FLEX_MIN_VER.join(".")}"
  end
  system "cd ext/linguist && flex tokenizer.l"
end

task :build_gem => :samples do
  rm_rf "grammars"
  sh "script/convert-grammars"
  sh "script/grammar-compiler compile -o grammars || true"
  languages = YAML.load_file("lib/linguist/languages.yml")
  File.write("lib/linguist/languages.json", Yajl.dump(languages))
  `gem build github-linguist.gemspec`

@@ -117,9 +117,8 @@ def git_linguist(args)
end

parser.parse!(args)

git_dir = `git rev-parse --git-dir`.strip
raise "git-linguist must be run in a Git repository (#{Dir.pwd})" unless $?.success?
raise "git-linguist must be run in a Git repository" unless $?.success?
wrapper = GitLinguist.new(git_dir, commit, incremental)

case args.pop
@@ -141,6 +140,10 @@ def git_linguist(args)
  $stderr.print(parser.help)
  exit 1
end
rescue Exception => e
  $stderr.puts e.message
  $stderr.puts e.backtrace
  exit 1
end

git_linguist(ARGV)
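The version gate in the `task :flex` hunk compares `[major, minor, rev]` tuples; in Ruby, that chained comparison is equivalent to lexicographic `Array#<=>`. An illustration (not the Rakefile's actual code):

```ruby
FLEX_MIN_VER = [2, 5, 39]

# Equivalent check using lexicographic array comparison.
def flex_ok?(version_output)
  ver = version_output.scan(/\d+/).first(3).map(&:to_i)  # "flex 2.6.4" -> [2, 6, 4]
  (ver <=> FLEX_MIN_VER) >= 0
end

flex_ok?("flex 2.6.4")   #=> true
flex_ok?("flex 2.5.35")  #=> false
```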
(File diff suppressed because it is too large.)
@@ -11,7 +11,7 @@
#define FLEX_SCANNER
#define YY_FLEX_MAJOR_VERSION 2
#define YY_FLEX_MINOR_VERSION 5
#define YY_FLEX_SUBMINOR_VERSION 39
#define YY_FLEX_SUBMINOR_VERSION 35
#if YY_FLEX_SUBMINOR_VERSION > 0
#define FLEX_BETA
#endif
@@ -49,6 +49,7 @@ typedef int16_t flex_int16_t;
typedef uint16_t flex_uint16_t;
typedef int32_t flex_int32_t;
typedef uint32_t flex_uint32_t;
typedef uint64_t flex_uint64_t;
#else
typedef signed char flex_int8_t;
typedef short int flex_int16_t;
@@ -56,6 +57,7 @@ typedef int flex_int32_t;
typedef unsigned char flex_uint8_t;
typedef unsigned short int flex_uint16_t;
typedef unsigned int flex_uint32_t;
#endif /* ! C99 */

/* Limits of integral types. */
#ifndef INT8_MIN
@@ -86,8 +88,6 @@ typedef unsigned int flex_uint32_t;
#define UINT32_MAX (4294967295U)
#endif

#endif /* ! C99 */

#endif /* ! FLEXINT_H */

#ifdef __cplusplus
@@ -130,15 +130,7 @@ typedef void* yyscan_t;

/* Size of default input buffer. */
#ifndef YY_BUF_SIZE
#ifdef __ia64__
/* On IA-64, the buffer size is 16k, not 8k.
 * Moreover, YY_BUF_SIZE is 2*YY_READ_BUF_SIZE in the general case.
 * Ditto for the __ia64__ case accordingly.
 */
#define YY_BUF_SIZE 32768
#else
#define YY_BUF_SIZE 16384
#endif /* __ia64__ */
#endif

#ifndef YY_TYPEDEF_YY_BUFFER_STATE
@@ -277,10 +269,6 @@ int linguist_yyget_lineno (yyscan_t yyscanner );

void linguist_yyset_lineno (int line_number ,yyscan_t yyscanner );

int linguist_yyget_column (yyscan_t yyscanner );

void linguist_yyset_column (int column_no ,yyscan_t yyscanner );

/* Macros after this point can all be overridden by user definitions in
 * section 1.
 */
@@ -307,12 +295,7 @@ static int yy_flex_strlen (yyconst char * ,yyscan_t yyscanner);

/* Amount of stuff to slurp up with each read. */
#ifndef YY_READ_BUF_SIZE
#ifdef __ia64__
/* On IA-64, the buffer size is 16k, not 8k */
#define YY_READ_BUF_SIZE 16384
#else
#define YY_READ_BUF_SIZE 8192
#endif /* __ia64__ */
#endif

/* Number of entries by which start-condition stack grows. */
@@ -345,9 +328,9 @@ extern int linguist_yylex (yyscan_t yyscanner);
#undef YY_DECL
#endif

#line 117 "tokenizer.l"
#line 118 "tokenizer.l"

#line 352 "lex.linguist_yy.h"
#line 335 "lex.linguist_yy.h"
#undef linguist_yyIN_HEADER
#endif /* linguist_yyHEADER_H */
@@ -2,6 +2,9 @@
#include "linguist.h"
#include "lex.linguist_yy.h"

// Anything longer is unlikely to be useful.
#define MAX_TOKEN_LEN 32

int linguist_yywrap(yyscan_t yyscanner) {
  return 1;
}
@@ -32,19 +35,27 @@ static VALUE rb_tokenizer_extract_tokens(VALUE self, VALUE rb_data) {
    case NO_ACTION:
      break;
    case REGULAR_TOKEN:
      rb_ary_push(ary, rb_str_new2(extra.token));
      len = strlen(extra.token);
      if (len <= MAX_TOKEN_LEN)
        rb_ary_push(ary, rb_str_new(extra.token, len));
      free(extra.token);
      break;
    case SHEBANG_TOKEN:
      s = rb_str_new2("SHEBANG#!");
      rb_str_cat2(s, extra.token);
      rb_ary_push(ary, s);
      len = strlen(extra.token);
      if (len <= MAX_TOKEN_LEN) {
        s = rb_str_new2("SHEBANG#!");
        rb_str_cat(s, extra.token, len);
        rb_ary_push(ary, s);
      }
      free(extra.token);
      break;
    case SGML_TOKEN:
      s = rb_str_new2(extra.token);
      rb_str_cat2(s, ">");
      rb_ary_push(ary, s);
      len = strlen(extra.token);
      if (len <= MAX_TOKEN_LEN) {
        s = rb_str_new(extra.token, len);
        rb_str_cat2(s, ">");
        rb_ary_push(ary, s);
      }
      free(extra.token);
      break;
  }
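Restated in Ruby for readability (illustration only; the real filter runs inside the C extension above), tokens longer than `MAX_TOKEN_LEN` bytes are now dropped instead of being pushed onto the result array:

```ruby
MAX_TOKEN_LEN = 32

tokens = ["def", "initialize", "a" * 40]
tokens.select { |t| t.bytesize <= MAX_TOKEN_LEN }
#=> ["def", "initialize"]
```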
@@ -9,25 +9,25 @@

#define eat_until_eol() do { \
    int c; \
    while ((c = input(yyscanner)) != '\n' && c != EOF); \
    if (c == EOF) \
      yyterminate(); \
    while ((c = input(yyscanner)) != '\n' && c != EOF && c); \
    if (c == EOF || !c) \
      return 0; \
  } while (0)

#define eat_until_unescaped(q) do { \
    int c; \
    while ((c = input(yyscanner)) != EOF) { \
    while ((c = input(yyscanner)) != EOF && c) { \
      if (c == '\n') \
        break; \
      if (c == '\\') { \
        c = input(yyscanner); \
        if (c == EOF) \
          yyterminate(); \
        if (c == EOF || !c) \
          return 0; \
      } else if (c == q) \
        break; \
    } \
    if (c == EOF) \
      yyterminate(); \
    if (c == EOF || !c) \
      return 0; \
  } while (0)

%}
@@ -84,7 +84,7 @@
\" { eat_until_unescaped('"'); }
' { eat_until_unescaped('\''); }
(0x[0-9a-fA-F]([0-9a-fA-F]|\.)*|[0-9]([0-9]|\.)*)([uU][lL]{0,2}|([eE][-+][0-9]*)?[fFlL]*) { /* nothing */ }
\<[^ \t\n\r<>]+/>|" "[^<>\n]{0,2048}> {
\<[[:alnum:]_!./?-]+ {
  if (strcmp(yytext, "<!--") == 0) {
    BEGIN(xml_comment);
  } else {
@@ -93,8 +93,8 @@
    return 1;
  }
}
<sgml>[[:alnum:]_]+=/\" { feed_token(strdup(yytext), REGULAR_TOKEN); input(yyscanner); eat_until_unescaped('"'); return 1; }
<sgml>[[:alnum:]_]+=/' { feed_token(strdup(yytext), REGULAR_TOKEN); input(yyscanner); eat_until_unescaped('\''); return 1; }
<sgml>[[:alnum:]_]+=\" { feed_token(strndup(yytext, strlen(yytext) - 1), REGULAR_TOKEN); eat_until_unescaped('"'); return 1; }
<sgml>[[:alnum:]_]+=' { feed_token(strndup(yytext, strlen(yytext) - 1), REGULAR_TOKEN); eat_until_unescaped('\''); return 1; }
<sgml>[[:alnum:]_]+=[[:alnum:]_]* { feed_token(strdup(yytext), REGULAR_TOKEN); *(strchr(yyextra->token, '=') + 1) = 0; return 1; }
<sgml>[[:alnum:]_]+ { feed_token(strdup(yytext), REGULAR_TOKEN); return 1; }
<sgml>\> { BEGIN(INITIAL); }
@@ -2,7 +2,7 @@ require File.expand_path('../lib/linguist/version', __FILE__)

Gem::Specification.new do |s|
  s.name = 'github-linguist'
  s.version = Linguist::VERSION
  s.version = ENV['GEM_VERSION'] || Linguist::VERSION
  s.summary = "GitHub Language detection"
  s.description = 'We use this library at GitHub to detect blob languages, highlight code, ignore binary files, suppress generated files in diffs, and generate language breakdown graphs.'

@@ -15,7 +15,7 @@ Gem::Specification.new do |s|
  s.extensions = ['ext/linguist/extconf.rb']

  s.add_dependency 'charlock_holmes', '~> 0.7.5'
  s.add_dependency 'escape_utils', '~> 1.1.0'
  s.add_dependency 'escape_utils', '~> 1.2.0'
  s.add_dependency 'mime-types', '>= 1.19'
  s.add_dependency 'rugged', '>= 0.25.1'
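The `ENV['GEM_VERSION']` fallback presumably allows overriding the version at build time, e.g. something like `GEM_VERSION=3.0.0.pre gem build github-linguist.gemspec` (hypothetical invocation), without editing `lib/linguist/version.rb`.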
grammars.yml (20 lines changed)

@@ -1,4 +1,3 @@
---
https://bitbucket.org/Clams/sublimesystemverilog/get/default.tar.gz:
- source.systemverilog
- source.ucfconstraints
@@ -49,6 +48,8 @@ vendor/grammars/Lean.tmbundle:
- source.lean
vendor/grammars/LiveScript.tmbundle:
- source.livescript
vendor/grammars/MATLAB-Language-grammar:
- source.matlab
vendor/grammars/MQL5-sublime:
- source.mql5
vendor/grammars/MagicPython:
@@ -128,6 +129,9 @@ vendor/grammars/SublimePuppet:
- source.puppet
vendor/grammars/SublimeXtend:
- source.xtend
vendor/grammars/Syntax-highlighting-for-PostCSS:
- source.css.postcss.sugarss
- source.postcss
vendor/grammars/TLA:
- source.tla
vendor/grammars/TXL:
@@ -193,6 +197,9 @@ vendor/grammars/atom-language-clean:
vendor/grammars/atom-language-julia:
- source.julia
- source.julia.console
vendor/grammars/atom-language-nextflow:
- source.nextflow
- source.nextflow-groovy
vendor/grammars/atom-language-p4:
- source.p4
vendor/grammars/atom-language-perl6:
@@ -341,6 +348,8 @@ vendor/grammars/java.tmbundle:
- source.java-properties
- text.html.jsp
- text.junit-test-report
vendor/grammars/javadoc.tmbundle:
- text.html.javadoc
vendor/grammars/javascript-objective-j.tmbundle:
- source.js.objj
vendor/grammars/jflex.tmbundle:
@@ -387,6 +396,8 @@ vendor/grammars/language-csound:
- source.csound-score
vendor/grammars/language-css:
- source.css
vendor/grammars/language-cwl:
- source.cwl
vendor/grammars/language-emacs-lisp:
- source.emacs.lisp
vendor/grammars/language-fontforge:
@@ -501,6 +512,8 @@ vendor/grammars/language-yaml:
- source.yaml
vendor/grammars/language-yang:
- source.yang
vendor/grammars/language-yara:
- source.yara
vendor/grammars/latex.tmbundle:
- text.bibtex
- text.log.latex
@@ -530,9 +543,6 @@ vendor/grammars/marko-tmbundle:
- text.marko
vendor/grammars/mathematica-tmbundle:
- source.mathematica
vendor/grammars/matlab.tmbundle:
- source.matlab
- source.octave
vendor/grammars/maven.tmbundle:
- text.xml.pom
vendor/grammars/mediawiki.tmbundle:
@@ -567,7 +577,7 @@ vendor/grammars/opa.tmbundle:
- source.opa
vendor/grammars/openscad.tmbundle:
- source.scad
vendor/grammars/oz-tmbundle/Syntaxes/Oz.tmLanguage:
vendor/grammars/oz-tmbundle:
- source.oz
vendor/grammars/parrot:
- source.parrot.pir
@@ -11,11 +11,13 @@ module Linguist
  #
  # path    - A path String (does not necessarily exist on the file system).
  # content - Content of the file.
  # symlink - Whether the file is a symlink.
  #
  # Returns a Blob.
  def initialize(path, content)
  def initialize(path, content, symlink: false)
    @path = path
    @content = content
    @symlink = symlink
  end

  # Public: Filename
@@ -69,5 +71,12 @@ module Linguist
    "." + segments[index..-1].join(".")
  end
end

# Public: Is this a symlink?
#
# Returns true or false.
def symlink?
  @symlink
end
end
end
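A quick usage sketch of the new keyword argument (values illustrative):

```ruby
require 'linguist'

# Default: an in-memory blob is not a symlink.
blob = Linguist::Blob.new("Rakefile", File.read("Rakefile"))
blob.symlink?  #=> false

# Callers that know the path is a symlink can now say so explicitly,
# letting strategies such as the heuristics (see the
# `return [] if blob.symlink?` hunk below) skip it early.
link = Linguist::Blob.new("current", "", symlink: true)
link.symlink?  #=> true
```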
@@ -383,7 +383,10 @@ module Linguist
      !vendored? &&
      !documentation? &&
      !generated? &&
      language && DETECTABLE_TYPES.include?(language.type)
      language && ( defined?(detectable?) && !detectable?.nil? ?
        detectable? :
        DETECTABLE_TYPES.include?(language.type)
      )
    end
  end
end
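In plain terms, an explicit per-file `linguist-detectable` override (see the README's Detectable section above) now takes precedence over the language-type default. A hypothetical restatement, assuming `DETECTABLE_TYPES` covers the programming and markup types:

```ruby
# Hypothetical sketch of the new precedence, not Linguist's actual API.
def include_in_stats?(override, language_type)
  return override unless override.nil?              # linguist-detectable set?
  [:programming, :markup].include?(language_type)   # assumed DETECTABLE_TYPES
end

include_in_stats?(true,  :data)         #=> true   (*.kicad_pcb linguist-detectable=true)
include_in_stats?(false, :programming)  #=> false  (tools/export_bom.py linguist-detectable=false)
include_in_stats?(nil,   :markup)       #=> true   (default type-based rule)
```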
@@ -26,6 +26,11 @@ module Linguist
    @mode ||= File.stat(@fullpath).mode.to_s(8)
  end

  def symlink?
    return @symlink if defined? @symlink
    @symlink = (File.symlink?(@fullpath) rescue false)
  end

  # Public: Read file contents.
  #
  # Returns a String.
@@ -52,6 +52,8 @@ module Linguist
  # Return true or false
  def generated?
    xcode_file? ||
    cocoapods? ||
    carthage_build? ||
    generated_net_designer_file? ||
    generated_net_specflow_feature_file? ||
    composer_lock? ||
@@ -95,6 +97,20 @@
    ['.nib', '.xcworkspacedata', '.xcuserstate'].include?(extname)
  end

  # Internal: Is the blob part of Pods/, which contains dependencies not meant for humans in pull requests.
  #
  # Returns true or false.
  def cocoapods?
    !!name.match(/(^Pods|\/Pods)\//)
  end

  # Internal: Is the blob part of Carthage/Build/, which contains dependencies not meant for humans in pull requests.
  #
  # Returns true or false.
  def carthage_build?
    !!name.match(/(^|\/)Carthage\/Build\//)
  end

  # Internal: Is the blob a minified file?
  #
  # Consider a file minified if the average line length is
@@ -16,6 +16,8 @@ module Linguist
    #
    # Returns an Array of languages, or empty if none matched or were inconclusive.
    def self.call(blob, candidates)
      return [] if blob.symlink?

      data = blob.data[0...HEURISTICS_CONSIDER_BYTES]

      @heuristics.each do |heuristic|
@@ -73,7 +75,6 @@ module Linguist
    end

    # Common heuristics
    ObjectiveCRegex = /^\s*(@(interface|class|protocol|property|end|synchronised|selector|implementation)\b|#import\s+.+\.h[">])/
    CPlusPlusRegex = Regexp.union(
      /^\s*#\s*include <(cstdint|string|vector|map|list|array|bitset|queue|stack|forward_list|unordered_map|unordered_set|(i|o|io)stream)>/,
      /^\s*template\s*</,
@@ -82,6 +83,9 @@ module Linguist
      /^[ \t]*(class|(using[ \t]+)?namespace)\s+\w+/,
      /^[ \t]*(private|public|protected):$/,
      /std::\w+/)
    ObjectiveCRegex = /^\s*(@(interface|class|protocol|property|end|synchronised|selector|implementation)\b|#import\s+.+\.h[">])/
    Perl5Regex = /\buse\s+(?:strict\b|v?5\.)/
    Perl6Regex = /^\s*(?:use\s+v6\b|\bmodule\b|\b(?:my\s+)?class\b)/

    disambiguate ".as" do |data|
      if /^\s*(package\s+[a-z0-9_\.]+|import\s+[a-zA-Z0-9_\.]+;|class\s+[A-Za-z0-9_]+\s+extends\s+[A-Za-z0-9_]+)/.match(data)
@@ -359,17 +363,17 @@ module Linguist
    disambiguate ".pl" do |data|
      if /^[^#]*:-/.match(data)
        Language["Prolog"]
      elsif /use strict|use\s+v?5\./.match(data)
      elsif Perl5Regex.match(data)
        Language["Perl"]
      elsif /^(use v6|(my )?class|module)/.match(data)
      elsif Perl6Regex.match(data)
        Language["Perl 6"]
      end
    end

    disambiguate ".pm" do |data|
      if /\buse\s+(?:strict\b|v?5\.)/.match(data)
      if Perl5Regex.match(data)
        Language["Perl"]
      elsif /^\s*(?:use\s+v6\s*;|(?:\bmy\s+)?class|module)\b/.match(data)
      elsif Perl6Regex.match(data)
        Language["Perl 6"]
      elsif /^\s*\/\* XPM \*\//.match(data)
        Language["XPM"]
@@ -377,7 +381,7 @@ module Linguist
    end

    disambiguate ".pro" do |data|
      if /^[^#]+:-/.match(data)
      if /^[^\[#]+:-/.match(data)
        Language["Prolog"]
      elsif data.include?("last_client=")
        Language["INI"]
@@ -459,12 +463,12 @@ module Linguist
    end

    disambiguate ".t" do |data|
      if /^\s*%[ \t]+|^\s*var\s+\w+\s*:=\s*\w+/.match(data)
        Language["Turing"]
      elsif /^\s*(?:use\s+v6\s*;|\bmodule\b|\b(?:my\s+)?class\b)/.match(data)
        Language["Perl 6"]
      elsif /\buse\s+(?:strict\b|v?5\.)/.match(data)
      if Perl5Regex.match(data)
        Language["Perl"]
      elsif Perl6Regex.match(data)
        Language["Perl 6"]
      elsif /^\s*%[ \t]+|^\s*var\s+\w+\s*:=\s*\w+/.match(data)
        Language["Turing"]
      end
    end

@@ -509,5 +513,13 @@ module Linguist
      end
    end

    disambiguate ".x" do |data|
      if /\b(program|version)\s+\w+\s*{|\bunion\s+\w+\s+switch\s*\(/.match(data)
        Language["RPC"]
      elsif /^%(end|ctor|hook|group)\b/.match(data)
        Language["Logos"]
      end
    end

  end
end

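A sketch of how the entry point above is driven, per the signature shown in the first hunk (the candidate list normally comes from the extension lookup):

    candidates = [Language["Perl"], Language["Perl 6"]]
    Linguist::Heuristics.call(blob, candidates)  #=> [] immediately when blob.symlink?
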
@@ -110,7 +110,7 @@ module Linguist
    # Returns the Language or nil if none was found.
    def self.find_by_name(name)
      return nil if !name.is_a?(String) || name.to_s.empty?
      name && (@name_index[name.downcase] || @name_index[name.split(',').first.downcase])
      name && (@name_index[name.downcase] || @name_index[name.split(',', 2).first.downcase])
    end

    # Public: Look up Language by one of its aliases.
@@ -125,7 +125,7 @@ module Linguist
    # Returns the Language or nil if none was found.
    def self.find_by_alias(name)
      return nil if !name.is_a?(String) || name.to_s.empty?
      name && (@alias_index[name.downcase] || @alias_index[name.split(',').first.downcase])
      name && (@alias_index[name.downcase] || @alias_index[name.split(',', 2).first.downcase])
    end

    # Public: Look up Languages by filename.
@@ -219,10 +219,7 @@ module Linguist
      lang = @index[name.downcase]
      return lang if lang

      name = name.split(',').first
      return nil if name.to_s.empty?

      @index[name.downcase]
      @index[name.split(',', 2).first.downcase]
    end

    # Public: A List of popular languages

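The `split(',', 2)` change is behaviour-preserving for these lookups, since only the first segment is used; limiting the split merely avoids scanning and allocating the tail. In plain Ruby:

    "TypeScript,JavaScript,CoffeeScript".split(',').first     #=> "TypeScript"
    "TypeScript,JavaScript,CoffeeScript".split(',', 2).first  #=> "TypeScript" (stops at the first comma)
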
@@ -279,16 +279,6 @@ Arc:
  tm_scope: none
  ace_mode: text
  language_id: 20
Arduino:
  type: programming
  color: "#bd79d1"
  extensions:
  - ".ino"
  tm_scope: source.c++
  ace_mode: c_cpp
  codemirror_mode: clike
  codemirror_mime_type: text/x-c++src
  language_id: 21
AsciiDoc:
  type: prose
  ace_mode: asciidoc
@@ -311,6 +301,7 @@ Assembly:
  type: programming
  color: "#6E4C13"
  aliases:
  - asm
  - nasm
  extensions:
  - ".asm"
@@ -528,6 +519,7 @@ C++:
  - ".hxx"
  - ".inc"
  - ".inl"
  - ".ino"
  - ".ipp"
  - ".re"
  - ".tcc"
@@ -807,6 +799,19 @@ Common Lisp:
  codemirror_mode: commonlisp
  codemirror_mime_type: text/x-common-lisp
  language_id: 66
Common Workflow Language:
  alias: cwl
  type: programming
  ace_mode: yaml
  codemirror_mode: yaml
  codemirror_mime_type: text/x-yaml
  extensions:
  - ".cwl"
  interpreters:
  - cwl-runner
  color: "#B5314C"
  tm_scope: source.cwl
  language_id: 988547172
Component Pascal:
  type: programming
  color: "#B0CE4E"
@@ -1147,7 +1152,7 @@ Ecere Projects:
Edje Data Collection:
  type: data
  extensions:
  - ".edc"
  - ".edc"
  tm_scope: source.json
  ace_mode: json
  codemirror_mode: javascript
@@ -1688,6 +1693,7 @@ HCL:
  extensions:
  - ".hcl"
  - ".tf"
  - ".tfvars"
  ace_mode: ruby
  codemirror_mode: ruby
  codemirror_mime_type: text/x-ruby
@@ -2245,9 +2251,9 @@ Kotlin:
  language_id: 189
LFE:
  type: programming
  color: "#4C3023"
  extensions:
  - ".lfe"
  group: Erlang
  tm_scope: source.lisp
  ace_mode: lisp
  codemirror_mode: commonlisp
@@ -2892,6 +2898,18 @@ NewLisp:
  codemirror_mode: commonlisp
  codemirror_mime_type: text/x-common-lisp
  language_id: 247
Nextflow:
  type: programming
  ace_mode: groovy
  tm_scope: source.nextflow
  color: "#3ac486"
  extensions:
  - ".nf"
  filenames:
  - "nextflow.config"
  interpreters:
  - nextflow
  language_id: 506780613
Nginx:
  type: data
  extensions:
@@ -3349,6 +3367,8 @@ Perl 6:
  - Rexfile
  interpreters:
  - perl6
  aliases:
  - perl6
  tm_scope: source.perl6fe
  ace_mode: perl
  codemirror_mode: perl
@@ -3427,6 +3447,14 @@ Pony:
  tm_scope: source.pony
  ace_mode: text
  language_id: 290
PostCSS:
  type: markup
  tm_scope: source.postcss
  group: CSS
  extensions:
  - ".pcss"
  ace_mode: text
  language_id: 262764437
PostScript:
  type: markup
  color: "#da291c"
@@ -3592,6 +3620,7 @@ Python:
  - ".gclient"
  - BUCK
  - BUILD
  - BUILD.bazel
  - SConscript
  - SConstruct
  - Snakefile
@@ -3603,6 +3632,7 @@ Python:
  - python3
  aliases:
  - rusthon
  - python3
  language_id: 303
Python console:
  type: programming
@@ -3725,6 +3755,17 @@ RMarkdown:
  - ".rmd"
  tm_scope: source.gfm
  language_id: 313
RPC:
  type: programming
  aliases:
  - rpcgen
  - oncrpc
  - xdr
  ace_mode: c_cpp
  extensions:
  - ".x"
  tm_scope: source.c
  language_id: 1031374237
RPM Spec:
  type: data
  tm_scope: source.rpm-spec
@@ -4148,6 +4189,7 @@ Scala:
  color: "#c22d40"
  extensions:
  - ".scala"
  - ".kojo"
  - ".sbt"
  - ".sc"
  interpreters:
@@ -4312,6 +4354,12 @@ Smarty:
  codemirror_mime_type: text/x-smarty
  tm_scope: text.html.smarty
  language_id: 353
Solidity:
  type: programming
  color: "#AA6746"
  ace_mode: text
  tm_scope: source.solidity
  language_id: 237469032
SourcePawn:
  type: programming
  color: "#5c7611"
@@ -4413,6 +4461,14 @@ Sublime Text Config:
  - ".sublime_metrics"
  - ".sublime_session"
  language_id: 423
SugarSS:
  type: markup
  tm_scope: source.css.postcss.sugarss
  group: CSS
  extensions:
  - ".sss"
  ace_mode: text
  language_id: 826404698
SuperCollider:
  type: programming
  color: "#46390b"
@@ -5119,6 +5175,14 @@ YANG:
  tm_scope: source.yang
  ace_mode: text
  language_id: 408
YARA:
  type: data
  ace_mode: text
  extensions:
  - ".yar"
  - ".yara"
  tm_scope: source.yara
  language_id: 805122868
Yacc:
  type: programming
  extensions:

@@ -7,7 +7,8 @@ module Linguist
    GIT_ATTR = ['linguist-documentation',
                'linguist-language',
                'linguist-vendored',
                'linguist-generated']
                'linguist-generated',
                'linguist-detectable']

    GIT_ATTR_OPTS = { :priority => [:index], :skip_system => true }
    GIT_ATTR_FLAGS = Rugged::Repository::Attributes.parse_opts(GIT_ATTR_OPTS)
@@ -70,6 +71,14 @@ module Linguist
      end
    end

    def detectable?
      if attr = git_attributes['linguist-detectable']
        return boolean_attribute(attr)
      else
        nil
      end
    end

    def data
      load_blob!
      @data
@@ -80,6 +89,11 @@ module Linguist
      @size
    end

    def symlink?
      # We don't create LazyBlobs for symlinks.
      false
    end

    def cleanup!
      @data.clear if @data
    end

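With 'linguist-detectable' added to GIT_ATTR, the override is read per path from the repository's .gitattributes; detectable? deliberately returns nil when the attribute is unset so callers can distinguish "no opinion" from an explicit true/false. An illustrative sketch (the paths, attribute values, and the repo_blob helper are hypothetical):

    # .gitattributes in the repository being analysed:
    #   docs/examples/*.sql linguist-detectable=false
    #   *.nf                linguist-detectable
    blob = repo_blob("docs/examples/demo.sql")  # hypothetical LazyBlob lookup
    blob.detectable?                            #=> false; unset paths yield nil
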
@@ -11,6 +11,8 @@ module Linguist
    # Returns an Array with one Language if the blob has a shebang with a valid
    # interpreter, or empty if there is no shebang.
    def self.call(blob, _ = nil)
      return [] if blob.symlink?

      Language.find_by_interpreter interpreter(blob.data)
    end


@@ -109,6 +109,8 @@ module Linguist
    # Returns an Array with one Language if the blob has a Vim or Emacs modeline
    # that matches a Language name or alias. Returns an empty array if no match.
    def self.call(blob, _ = nil)
      return [] if blob.symlink?

      header = blob.first_lines(SEARCH_SCOPE).join("\n")
      footer = blob.last_lines(SEARCH_SCOPE).join("\n")
      Array(Language.find_by_alias(modeline(header + footer)))

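Both strategies now short-circuit on symlinks. Since a symlink blob's data is only the link target, there is nothing meaningful to scan for a shebang or modeline; a sketch:

    link = Linguist::Blob.new("bin/run", "../scripts/run.sh", symlink: true)
    Linguist::Strategy::Modeline.call(link)  #=> [] rather than parsing the link target
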
@@ -19,9 +19,7 @@
- (^|/)dist/

# C deps
# https://github.com/joyent/node
- ^deps/
- ^tools/
- (^|/)configure$
- (^|/)config.guess$
- (^|/)config.sub$
@@ -65,6 +63,7 @@

# Font Awesome
- (^|/)font-awesome\.(css|less|scss|styl)$
- (^|/)font-awesome/.*\.(css|less|scss|styl)$

# Foundation css
- (^|/)foundation\.(css|less|scss|styl)$
@@ -81,6 +80,9 @@
# Animate.css
- (^|/)animate\.(css|less|scss|styl)$

# Materialize.css
- (^|/)materialize\.(css|less|scss|styl|js)$

# Select2
- (^|/)select2/.*\.(css|scss|js)$

@@ -242,10 +244,7 @@
- \.imageset/

# Carthage
- ^Carthage/

# Cocoapods
- ^Pods/
- (^|/)Carthage/

# Sparkle
- (^|/)Sparkle/

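The Carthage rule is loosened from a root-only anchor to any depth. A quick check of the new pattern in plain Ruby (the paths are illustrative):

    pattern = %r{(^|/)Carthage/}
    pattern.match?("Carthage/Build/iOS/Alamofire.framework")  #=> true
    pattern.match?("app/deps/Carthage/Build/libFoo.a")        #=> true
    pattern.match?("MyCarthage/notes.txt")                    #=> false
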
@@ -1,3 +1,3 @@
module Linguist
  VERSION = "5.3.2"
  VERSION = "6.0.1"
end

@@ -1,7 +0,0 @@
{
  "repository": "https://github.com/github/linguist",
  "dependencies": {
    "season": "~>5.4"
  },
  "license": "MIT"
}

36
samples/Common Workflow Language/trunk-peak-score.cwl
Normal file
@@ -0,0 +1,36 @@
#!/usr/bin/env cwl-runner
# Originally from
# https://github.com/Duke-GCB/GGR-cwl/blob/54e897263a702ff1074c8ac814b4bf7205d140dd/utils/trunk-peak-score.cwl
# Released under the MIT License:
# https://github.com/Duke-GCB/GGR-cwl/blob/54e897263a702ff1074c8ac814b4bf7205d140dd/LICENSE
# Converted to CWL v1.0 syntax using
# https://github.com/common-workflow-language/cwl-upgrader
# and polished by Michael R. Crusoe <mrc@commonwl.org>
# All modifications also released under the MIT License
cwlVersion: v1.0
class: CommandLineTool
doc: Trunk scores in ENCODE bed6+4 files

hints:
  DockerRequirement:
    dockerPull: dukegcb/workflow-utils

inputs:
  peaks:
    type: File
  sep:
    type: string
    default: \t

outputs:
  trunked_scores_peaks:
    type: stdout

baseCommand: awk

arguments:
  - -F $(inputs.sep)
  - BEGIN{OFS=FS}$5>1000{$5=1000}{print}
  - $(inputs.peaks.path)

stdout: $(inputs.peaks.nameroot).trunked_scores$(inputs.peaks.nameext)

58
samples/HCL/terraform.tfvars
Normal file
@@ -0,0 +1,58 @@
# Terragrunt is a thin wrapper for Terraform that provides extra tools for working with multiple Terraform modules,
# remote state, and locking: https://github.com/gruntwork-io/terragrunt
terragrunt = {
  # Configure Terragrunt to automatically store tfstate files in an S3 bucket
  remote_state {
    backend = "s3"
    config {
      encrypt = true
      bucket = "acme-main-terraform-state"
      key = "${path_relative_to_include()}/terraform.tfstate"
      region = "us-east-1"
      dynamodb_table = "terraform-locks"
    }
  }

  # Configure Terragrunt to use common var files to help you keep often-repeated variables (e.g., account ID) DRY.
  # Note that even though Terraform automatically pulls in terraform.tfvars, we include it explicitly at the end of the
  # list to make sure its variables override anything in the common var files.
  terraform {
    extra_arguments "common_vars" {
      commands = ["${get_terraform_commands_that_need_vars()}"]

      optional_var_files = [
        "${get_tfvars_dir()}/${find_in_parent_folders("account.tfvars", "skip-account-if-does-not-exist")}",
        "${get_tfvars_dir()}/${find_in_parent_folders("region.tfvars", "skip-region-if-does-not-exist")}",
        "${get_tfvars_dir()}/${find_in_parent_folders("env.tfvars", "skip-env-if-does-not-exist")}",
        "${get_tfvars_dir()}/terraform.tfvars"
      ]
    }
  }
}

key1 = "val1"
key2 = 0
key3 = 1
key4 = true

# Sample comments
key5 = false

key6 = ["hello", "from", "gruntwork.io"]

key7 = {
  key1 = "hello"
  key2 = "from"
  key3 = "gruntwork.io"
}

key8 = [
  {
    keyA = "hello"
    keyB = "there"
  },
  {
    keyA = "hello"
    keyB = "there"
  }
]

57
samples/Logos/NCHax.x
Normal file
@@ -0,0 +1,57 @@
#import <UIKit/UIKit.h>
#import <BulletinBoard/BBSectionInfo.h>
#import <UIKit/UIImage+Private.h>
#import <version.h>

static NSString *const kHBDPWeeAppIdentifier = @"ws.hbang.dailypaperweeapp";

#pragma mark - Change section header and icon

// courtesy of benno

BOOL isDailyPaper = NO;

%hook SBBulletinObserverViewController

- (void)_addSection:(BBSectionInfo *)section toCategory:(NSInteger)category widget:(id)widget {
	if ([section.sectionID isEqualToString:kHBDPWeeAppIdentifier]) {
		isDailyPaper = YES;
		%orig;
		isDailyPaper = NO;
	} else {
		%orig;
	}
}

%end

%hook SBBulletinListSection

- (void)setDisplayName:(NSString *)displayName {
	%orig(isDailyPaper ? @"Current Wallpaper" : displayName);
}

- (void)setIconImage:(UIImage *)iconImage {
	%orig(isDailyPaper ? [UIImage imageNamed:@"icon" inBundle:[NSBundle bundleWithPath:@"/Library/PreferenceBundles/DailyPaper.bundle"]] : iconImage);
}

%end

#pragma mark - Enable by default

%hook SBNotificationCenterDataProviderController

- (NSArray *)_copyDefaultEnabledWidgetIDs {
	NSArray *defaultWidgets = %orig;
	return [[defaultWidgets arrayByAddingObject:kHBDPWeeAppIdentifier] copy];
}

%end

#pragma mark - Constructor

%ctor {
	if (!IS_IOS_OR_NEWER(iOS_8_0)) {
		%init;
	}
}

42
samples/Logos/NoCarrier.x
Normal file
@@ -0,0 +1,42 @@
//
//  NoCarrier.x
//  NoCarrier
//
//  Created by Jonas Gessner on 27.01.2014.
//  Copyright (c) 2014 Jonas Gessner. All rights reserved.
//

#import <CoreGraphics/CoreGraphics.h>

#include <substrate.h>

%group main

%hook UIStatusBarServiceItemView

- (id)_serviceContentsImage {
	return nil;
}

- (CGFloat)extraLeftPadding {
	return 0.0f;
}

- (CGFloat)extraRightPadding {
	return 0.0f;
}

- (CGFloat)standardPadding {
	return 2.0f;
}

%end

%end


%ctor {
	@autoreleasepool {
		%init(main);
	}
}

239
samples/Logos/Tweak.x
Normal file
@@ -0,0 +1,239 @@
/*
 * ShadowSocks Per-App Proxy Plugin
 * https://github.com/linusyang/MobileShadowSocks
 *
 * Copyright (c) 2014 Linus Yang <laokongzi@gmail.com>
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 *
 */

#include <UIKit/UIKit.h>
#include <libfinder/LFFinderController.h>

#define FUNC_NAME SCDynamicStoreCopyProxies
#define ORIG_FUNC original_ ## FUNC_NAME
#define CUST_FUNC custom_ ## FUNC_NAME

#define DECL_FUNC(ret, ...) \
	extern ret FUNC_NAME(__VA_ARGS__); \
	static ret (*ORIG_FUNC)(__VA_ARGS__); \
	ret CUST_FUNC(__VA_ARGS__)

#define HOOK_FUNC() \
	MSHookFunction(FUNC_NAME, (void *) CUST_FUNC, (void **) &ORIG_FUNC)

typedef const struct __SCDynamicStore *SCDynamicStoreRef;
void MSHookFunction(void *symbol, void *replace, void **result);

static BOOL proxyEnabled = YES;
static BOOL spdyDisabled = YES;
static BOOL finderEnabled = YES;

static BOOL getValue(NSDictionary *dict, NSString *key, BOOL defaultVal)
{
	if (dict == nil || key == nil) {
		return defaultVal;
	}
	NSNumber *valObj = [dict objectForKey:key];
	if (valObj == nil) {
		return defaultVal;
	}
	return [valObj boolValue];
}

static void updateSettings(void)
{
	proxyEnabled = YES;
	spdyDisabled = YES;
	finderEnabled = YES;
	NSDictionary *dict = [[NSDictionary alloc] initWithContentsOfFile:@"/var/mobile/Library/Preferences/com.linusyang.ssperapp.plist"];
	if (dict != nil) {
		NSString *bundleName = [[NSBundle mainBundle] bundleIdentifier];
		if (getValue(dict, @"SSPerAppEnabled", NO) && bundleName != nil) {
			NSString *entry = [[NSString alloc] initWithFormat:@"Enabled-%@", bundleName];
			proxyEnabled = getValue(dict, entry, NO);
			if (getValue(dict, @"SSPerAppReversed", NO)) {
				proxyEnabled = !proxyEnabled;
			}
			[entry release];
		}
		spdyDisabled = getValue(dict, @"SSPerAppDisableSPDY", YES);
		finderEnabled = getValue(dict, @"SSPerAppFinder", YES);
		[dict release];
	}
}

DECL_FUNC(CFDictionaryRef, SCDynamicStoreRef store)
{
	if (proxyEnabled) {
		return ORIG_FUNC(store);
	}
	CFMutableDictionaryRef proxyDict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
	int zero = 0;
	CFNumberRef zeroNumber = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &zero);
	CFDictionarySetValue(proxyDict, CFSTR("HTTPEnable"), zeroNumber);
	CFDictionarySetValue(proxyDict, CFSTR("HTTPProxyType"), zeroNumber);
	CFDictionarySetValue(proxyDict, CFSTR("HTTPSEnable"), zeroNumber);
	CFDictionarySetValue(proxyDict, CFSTR("ProxyAutoConfigEnable"), zeroNumber);
	CFRelease(zeroNumber);
	return proxyDict;
}

@interface SettingTableViewController <LFFinderActionDelegate>

- (BOOL)useLibFinder;
- (UIViewController *)allocFinderController;
- (void)finderSelectedFilePath:(NSString *)path checkSanity:(BOOL)check;

@end

%group FinderHook

%hook SettingTableViewController
- (BOOL)useLibFinder
{
	return finderEnabled;
}

- (UIViewController *)allocFinderController
{
	LFFinderController* finder = [[LFFinderController alloc] initWithMode:LFFinderModeDefault];
	finder.actionDelegate = self;
	return finder;
}

%new
-(void)finder:(LFFinderController*)finder didSelectItemAtPath:(NSString*)path
{
	[self finderSelectedFilePath:path checkSanity:NO];
}
%end

%end

%group TwitterHook

%hook T1SPDYConfigurationChangeListener
- (BOOL)_shouldEnableSPDY
{
	if (spdyDisabled) {
		return NO;
	} else {
		return %orig;
	}
}
%end

%end

%group FacebookHook

%hook FBRequester
- (BOOL)allowSPDY
{
	if (spdyDisabled) {
		return NO;
	} else {
		return %orig;
	}
}

- (BOOL)useDNSCache
{
	if (spdyDisabled) {
		return NO;
	} else {
		return %orig;
	}
}
%end

%hook FBNetworkerRequest
- (BOOL)disableSPDY
{
	if (spdyDisabled) {
		return YES;
	} else {
		return %orig;
	}
}
%end

%hook FBRequesterState
- (BOOL)didUseSPDY
{
	if (spdyDisabled) {
		return NO;
	} else {
		return %orig;
	}
}
%end

%hook FBAppConfigService
- (BOOL)disableDNSCache
{
	if (spdyDisabled) {
		return YES;
	} else {
		return %orig;
	}
}
%end

%hook FBNetworker
- (BOOL)_shouldAllowUseOfDNSCache:(id)arg
{
	if (spdyDisabled) {
		return NO;
	} else {
		return %orig;
	}
}
%end

%hook FBAppSessionController
- (BOOL)networkerShouldAllowUseOfDNSCache:(id)arg
{
	if (spdyDisabled) {
		return NO;
	} else {
		return %orig;
	}
}
%end

%end

%ctor
{
	NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
	NSString *bundleName = [[NSBundle mainBundle] bundleIdentifier];
	if (bundleName != nil && ![bundleName isEqualToString:@"com.apple.springboard"]) {
		updateSettings();
		CFNotificationCenterAddObserver(CFNotificationCenterGetDarwinNotifyCenter(), NULL, (CFNotificationCallback) updateSettings, CFSTR("com.linusyang.ssperapp.settingschanged"), NULL, CFNotificationSuspensionBehaviorCoalesce);
		if ([bundleName isEqualToString:@"com.linusyang.MobileShadowSocks"]) {
			%init(FinderHook);
		} else {
			HOOK_FUNC();
			if ([bundleName isEqualToString:@"com.atebits.Tweetie2"]) {
				%init(TwitterHook);
			} else if ([bundleName isEqualToString:@"com.facebook.Facebook"]) {
				%init(FacebookHook);
			}
		}
	}
	[pool drain];
}

5
samples/Logos/string1.x
Normal file
@@ -0,0 +1,5 @@
# APPLE LOCAL file string workaround 4943900
if { [istarget "*-*-darwin\[9123\]*"] } {
    set additional_flags "-framework Foundation -fconstant-cfstrings"
}
return 0

1
samples/Markdown/symlink.md
Symbolic link
@@ -0,0 +1 @@
README.mdown

83
samples/Monkey/encodeToPng.monkey2
Normal file
@@ -0,0 +1,83 @@

#Import "<std>"
Using std..

'Set your own path here. Defaults to build folder.
Global path:= AppDir() + "encodeToPng.png"

Function Main()

	'Write from PixMap
	Local source := New Pixmap( 100, 100 )
	For Local y := 0 Until source.Width
		For Local x := 0 Until source.Height
			'Generates random pixels
			source.SetPixelARGB( x, y, ARGB( 255, Rnd(0, 255), 0, Rnd(0, 255) ) )
		Next
	Next
	source.Save( path )

	'Read from png to PixMap
	Local dest := Pixmap.Load( path )
	Local a := ""
	Local r := ""
	Local g := ""
	Local b := ""
	For Local y := 0 Until dest.Width
		For Local x := 0 Until source.Height
			Local argb := dest.GetPixelARGB(x,y)
			a += ARGBToAlpha( argb ) + " "
			r += ARGBToRed( argb ) + " "
			g += ARGBToGreen( argb ) + " "
			b += ARGBToBlue( argb ) + " "
		Next
		a += "~n"
		r += "~n"
		g += "~n"
		b += "~n"
	Next

	'Print resulting pixels
	Print( " ~nAlpha:~n" + a )
	Print( " ~nRed:~n" + r )
	Print( " ~nGreen:~n" + g )
	Print( " ~nBlue:~n" + b )

End


'**************** Color Functions ****************


Function ARGB:UInt( a:Float, r:Float, g:Float, b:Float )
	Assert ( a<=1.0, "Alpha max value is 1.0" )
	Assert ( r<=1.0, "Red max value is 1.0" )
	Assert ( g<=1.0, "Green max value is 1.0" )
	Assert ( b<=1.0, "Blue max value is 1.0" )
	Return UInt(a*255) Shl 24 | UInt(r*255) Shl 16 | UInt(g*255) Shl 8 | UInt(b*255)
End

Function ARGB:UInt( a:Int, r:Int, g:Int, b:Int )
	Assert ( a<256, "Alpha can't be higher than 255" )
	Assert ( r<256, "Red can't be higher than 255" )
	Assert ( g<256, "Green can't be higher than 255" )
	Assert ( b<256, "Blue can't be higher than 255" )
	Return( a Shl 24 | r Shl 16 | g Shl 8 | b )
End

Function ARGBToAlpha:Int( argb:UInt )
	Return argb Shr 24 & $ff
End

Function ARGBToRed:Int( argb:UInt )
	Return argb Shr 16 & $ff
End

Function ARGBToGreen:Int( argb:UInt )
	Return argb Shr 8 & $ff
End

Function ARGBToBlue:Int( argb:UInt )
	Return argb & $ff
End

185
samples/Monkey/example.monkey2
Normal file
@@ -0,0 +1,185 @@

Namespace example

#rem
multi
line
comment
#end

#rem
nested
#rem
multi
line
#end
comment
#end

'Importing a pre-compiled module from the modules folder
#Import "<mojo>"

'Setting search paths for namespaces
Using mojo..
Using std..

Const ONECONST:Int = 1
Const TWOCONST := 2
Const THREECONST := 3, FOURCONST:Int = 4

Global someVariable:Int = 4

Function Main()
	'creating arrays
	Local scores:Int[]= New Int[](10,20,30)
	Local text:String[]= New String[]( "Hello","There","World" )

	' string type
	Local string1:String = "Hello world"
	Local string2:= "Hello world"

	' escape characters in strings
	Local string4 := "~qHello World~q"
	Local string5 := "~tIndented~n"
	Local string6 := "tilde is wavey... ~~"
	Print string4
	Print string5
	Print string6

	' string pseudofunctions
	Print " Hello World ~n".Trim() ' prints "Hello World" without whitespace
	Print "Hello World".ToUpper() ' prints "HELLO WORLD"

	' preprocessor keywords
	#If __TARGET__ = "android"
	'DoStuff()
	#ElseIf __TARGET__ = "ios"
	'DoOtherStuff()
	#End

	' operators
	Local a := 32
	Local b := 32 ~ 0
	b ~= 16
	b |= 16
	b &= 16
	Local c := a | b
	Print c

	'Creates a new Window class and starts the main App loop, using the Mojo module
	New AppInstance
	New GameWindow
	App.Run()
End


'------------------------------------------ Class Examples ------------------------------------------


'You can extend the Window class to customize its behavior
Class GameWindow Extends Window

	Private
	Field _spiral :Float[]
	Field _circle :Float[]

	Public
	Method New()
		Super.New( "Test", 800, 600, WindowFlags.Resizable )
	End

	'Properties can be used to create "read-only" values
	Property Spiral:Float[]()
		Return _spiral
	End

	'Or to control what happens to a value when assigned
	Property Circle:Float[]()
		Return _circle
	Setter( values:Float[] )
		If( values.Length > 2 ) And ( values.Length Mod 2 = 0 )
			_circle = values
		Else
			Print( "Circle values need to be an even number larger than 1" )
		End
	End

	'Methods require a class instance. The keyword Self is optional when accessing fields and properties
	'The method Window.OnRender is virtual, and can be overridden
	'Width and Height are Properties inherited from the Window class
	Method OnRender( canvas:Canvas ) Override
		RequestRender()
		canvas.Clear( Color.DarkGrey )
		canvas.Translate( Width/2.0, Height/2.0 )
		canvas.Rotate( -Millisecs()/200.0 )
		canvas.Color = New Color( 1, 0.8, 0.2 )
		DrawLines( canvas, Spiral )
		DrawLines( canvas, Circle, True )
	End

	'This method is called whenever the window layout changes, like when resizing
	Method OnLayout() Override
		_spiral = CreateSpiral( 0, 0, Height/1.5, Height/1.5, 100 )
		Circle = CreateCircle( 0, 0, Height/1.5, Height/1.5, 100 )
	End

	'Functions can be called without a GameWindow instance, like "Static Functions" in other languages.
	Function DrawLines( canvas:Canvas, lines:Float[], closedShape:Bool = False )
		For Local n := 0 Until lines.Length Step 2
			Local l := lines.Length - 3
			Local x0 := lines[n]
			Local y0 := lines[n+1]
			Local x1 := n<l? lines[n+2] Else (closedShape? lines[0] Else x0 ) 'Conditional assignment, uses the "?" symbol to test a condition
			Local y1 := n<l? lines[n+3] Else (closedShape? lines[1] Else y0 )
			canvas.DrawLine( x0, y0, x1, y1 )
		Next
	End

	Function CreateSpiral:Float[]( x:Double, y:Double, width:Double, height:Double, sides:Int = 32, turns:Float = 3.0 )
		Local stack := New Stack<Float>
		Local radStep := (Pi*2.0)/Float(sides)
		Local xMult := 0.0
		Local yMult := 0.0
		Local radiusX:Float = width/2.0
		Local radiusY:Float = height/2.0
		Local stepX:Float = radiusX / sides
		Local stepY:Float = radiusY / sides
		For Local a := 0.0 To Pi*2 Step radStep
			stack.Push( ( ( Sin( a*turns ) * radiusX )* xMult ) + x )
			stack.Push( ( ( Cos( a*turns ) * radiusY )* yMult ) + y )
			xMult += stepX/radiusX
			yMult += stepY/radiusY
		Next
		Return stack.ToArray()
	End

	Function CreateCircle:Float[]( x:Double, y:Double, width:Double, height:Double, sides:Int = 32 )
		Local stack := New Stack<Float>
		Local radStep := (Pi*2.0)/Float(sides)
		Local radiusX:Float = width/2.0
		Local radiusY:Float = height/2.0
		For Local a := 0.0 To Pi*2 Step radStep
			stack.Push( ( Sin( a ) * radiusX ) + x )
			stack.Push( ( Cos( a ) * radiusY ) + y )
		Next
		Return stack.ToArray()
	End

End

'--------- extending with generics -----------------------------------------------------------------------------

Class MyList Extends List<Double>
End

'--------- interfaces ------------------------------------------------------------------------------------------

Interface Computer
	Method Boot ()
	Method Process ()
	Method Display ()
End
'
Class PC Implements Computer
End

115
samples/Monkey/gui.monkey2
Normal file
@@ -0,0 +1,115 @@
#Import "<mojo>"
#Import "<mojox>"

Using std..
Using mojo..
Using mojox..

Function Main()
	New AppInstance
	New TestGui
	App.Run()
End


Class TestGui Extends Window
	Field mainDock:DockingView
	Field rgtDock:ScrollView
	Field graphView:GraphView

	Const smallFont:Font = Font.Load( "font::DejaVuSans.ttf", 10 )

	Method New()
		Super.New( "Test", 1024, 640, WindowFlags.Resizable )
		mainDock = New MainDock()
		rgtDock = New RightDock()
		mainDock.AddView( rgtDock, "right", "400", True )
		ContentView = mainDock
	End
End


Class MainDock Extends DockingView
	Method New()
		Layout="fill"
		Local newStyle := Style.Copy()
		newStyle.BackgroundColor = Color.DarkGrey
		newStyle.BorderColor = Color.Black
		newStyle.Font = TestGui.smallFont
		Style = newStyle
	End

	Method OnRender( canvas:Canvas ) Override
		Super.OnRender( canvas )
		canvas.Color = New Color( Rnd(), Rnd(), Rnd() )
		canvas.DrawCircle( Frame.Width/4, Frame.Height/2, Frame.Height/4 )
		canvas.Color = Color.Aluminum
		canvas.DrawText( "gameview:" + App.FPS + " fps", 5, 5 )
	End
End


Class RightDock Extends ScrollView
	Private
	Field _panSpeed := 10.0

	Public
	Method New()
		Layout="fill"
		ScrollBarsVisible = True

		Local newStyle := Style.Copy()
		newStyle.BackgroundColor = Color.Grey
		newStyle.BorderColor = Color.Black
		newStyle.Font = TestGui.smallFont
		Style = newStyle

		Local graph:=New GraphView
		ContentView = graph

		Scroll = New Vec2i( graph.Frame.Width/2, graph.Frame.Height/2 ) 'Doesn't work!
	End

	Method OnRender( canvas:Canvas ) Override
		Super.OnRender( canvas )
		canvas.Color = Color.Aluminum
		canvas.DrawText( "size:" + Frame + " ,scroll:" + Scroll , 5, 5 )
	End

	Method OnMouseEvent( event:MouseEvent ) Override
		Select event.Type
			Case EventType.MouseWheel
				Scroll = New Vec2i( Scroll.X+(event.Wheel.X*_panSpeed), Scroll.Y-(event.Wheel.Y*_panSpeed) )
				App.RequestRender()
		End
	End
End


Class GraphView Extends View
	Private
	Field _panSpeed := 5.0
	Field _size := New Vec2i( 1024, 1024 )

	Public
	Method New()
		MinSize=New Vec2i( _size.X, _size.Y )
	End

	Method OnRender( canvas:Canvas ) Override
		Local r:= 20.0
		For Local x := 1 Until 10
			For Local y := 1 Until 10
				Local v := (x/10.0) -0.05
				canvas.Color = New Color( v, v, v )
				canvas.DrawCircle( (x*100)+r, (y*100)+r, r )
			Next
		Next
	End
End

29
samples/Monkey/sorting.monkey2
Normal file
@@ -0,0 +1,29 @@
'Showcases use of Lambda functions and Generics.

#Import "<std>"
Using std..

Function Main()

	Local testStack := New Stack< MyObject >

	For Local n := 1 To 20
		Local newItem := New MyObject
		newItem.depth = Rnd( 0, 100 )
		testStack.Push( newItem )
	Next

	testStack.Sort( Lambda:Int( x:MyObject,y:MyObject )
		Return x.depth<=>y.depth
	End )

	For Local n := Eachin testStack
		Print( n.depth )
	Next

End


Struct MyObject
	Field depth := 0
End

67
samples/Nextflow/blast.nf
Normal file
@@ -0,0 +1,67 @@
#!/usr/bin/env nextflow
/*
 * This is free and unencumbered software released into the public domain.
 *
 * Anyone is free to copy, modify, publish, use, compile, sell, or
 * distribute this software, either in source code form or as a compiled
 * binary, for any purpose, commercial or non-commercial, and by any
 * means.
 *
 * In jurisdictions that recognize copyright laws, the author or authors
 * of this software dedicate any and all copyright interest in the
 * software to the public domain. We make this dedication for the benefit
 * of the public at large and to the detriment of our heirs and
 * successors. We intend this dedication to be an overt act of
 * relinquishment in perpetuity of all present and future rights to this
 * software under copyright law.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 * IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 * For more information, please refer to <http://unlicense.org/>
 */

/*
 * Author Paolo Di Tommaso <paolo.ditommaso@gmail.com>
 */


params.query = "$HOME/sample.fa"
params.db = "$HOME/tools/blast-db/pdb/pdb"

process blast {
    output:
    file top_hits

    """
    blastp -query ${params.query} -db ${params.db} -outfmt 6 \
        | head -n 10 \
        | cut -f 2 > top_hits
    """
}

process extract {
    input:
    file top_hits
    output:
    file sequences

    """
    blastdbcmd -db ${params.db} -entry_batch $top_hits > sequences
    """
}

process align {
    input:
    file sequences
    echo true

    """
    t_coffee $sequences 2>&- | tee align_result
    """
}

496
samples/Nextflow/callings.nf
Executable file
@@ -0,0 +1,496 @@
#!/usr/bin/env nextflow
/*
 * This is free and unencumbered software released into the public domain.
 *
 * Anyone is free to copy, modify, publish, use, compile, sell, or
 * distribute this software, either in source code form or as a compiled
 * binary, for any purpose, commercial or non-commercial, and by any
 * means.
 *
 * In jurisdictions that recognize copyright laws, the author or authors
 * of this software dedicate any and all copyright interest in the
 * software to the public domain. We make this dedication for the benefit
 * of the public at large and to the detriment of our heirs and
 * successors. We intend this dedication to be an overt act of
 * relinquishment in perpetuity of all present and future rights to this
 * software under copyright law.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 * IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 * For more information, please refer to <http://unlicense.org/>
 */


/*
 * 'CalliNGS-NF' - A Nextflow pipeline for variant calling with NGS data
 *
 * This pipeline reproduces steps from the GATK best practices of SNP
 * calling with RNAseq data procedure:
 * https://software.broadinstitute.org/gatk/guide/article?id=3891
 *
 * Anna Vlasova
 * Emilio Palumbo
 * Paolo Di Tommaso
 * Evan Floden
 */


/*
 * Define the default parameters
 */

params.genome = "$baseDir/data/genome.fa"
params.variants = "$baseDir/data/known_variants.vcf.gz"
params.blacklist = "$baseDir/data/blacklist.bed"
params.reads = "$baseDir/data/reads/rep1_{1,2}.fq.gz"
params.results = "results"
params.gatk = '/usr/local/bin/GenomeAnalysisTK.jar'
params.gatk_launch = "java -jar $params.gatk"

log.info "C A L L I N G S - N F v 1.0"
log.info "================================"
log.info "genome : $params.genome"
log.info "reads : $params.reads"
log.info "variants : $params.variants"
log.info "blacklist: $params.blacklist"
log.info "results : $params.results"
log.info "gatk : $params.gatk"
log.info ""

/*
 * Parse the input parameters
 */

GATK = params.gatk_launch
genome_file = file(params.genome)
variants_file = file(params.variants)
blacklist_file = file(params.blacklist)
reads_ch = Channel.fromFilePairs(params.reads)


/**********
 * PART 1: Data preparation
 *
 * Process 1A: Create a FASTA genome index (.fai) with samtools for GATK
 */

process '1A_prepare_genome_samtools' {
    tag "$genome.baseName"

    input:
    file genome from genome_file

    output:
    file "${genome}.fai" into genome_index_ch

    script:
    """
    samtools faidx ${genome}
    """
}


/*
 * Process 1B: Create a FASTA genome sequence dictionary with Picard for GATK
 */

process '1B_prepare_genome_picard' {
    tag "$genome.baseName"

    input:
    file genome from genome_file
    output:
    file "${genome.baseName}.dict" into genome_dict_ch

    script:
    """
    PICARD=`which picard.jar`
    java -jar \$PICARD CreateSequenceDictionary R= $genome O= ${genome.baseName}.dict
    """
}


/*
 * Process 1C: Create STAR genome index file.
 */

process '1C_prepare_star_genome_index' {
    tag "$genome.baseName"

    input:
    file genome from genome_file
    output:
    file "genome_dir" into genome_dir_ch

    script:
    """
    mkdir genome_dir

    STAR --runMode genomeGenerate \
        --genomeDir genome_dir \
        --genomeFastaFiles ${genome} \
        --runThreadN ${task.cpus}
    """
}


/*
 * Process 1D: Create a file containing the filtered and recoded set of variants
 */

process '1D_prepare_vcf_file' {
    tag "$variantsFile.baseName"

    input:
    file variantsFile from variants_file
    file blacklisted from blacklist_file

    output:
    set file("${variantsFile.baseName}.filtered.recode.vcf.gz"), file("${variantsFile.baseName}.filtered.recode.vcf.gz.tbi") into prepared_vcf_ch

    script:
    """
    vcftools --gzvcf $variantsFile -c \
        --exclude-bed ${blacklisted} \
        --recode | bgzip -c \
        > ${variantsFile.baseName}.filtered.recode.vcf.gz

    tabix ${variantsFile.baseName}.filtered.recode.vcf.gz
    """
}

/*
 * END OF PART 1
 *********/


/**********
 * PART 2: STAR RNA-Seq Mapping
 *
 * Process 2: Align RNA-Seq reads to the genome with STAR
 */

process '2_rnaseq_mapping_star' {
    tag "$replicateId"

    input:
    file genome from genome_file
    file genomeDir from genome_dir_ch
    set replicateId, file(reads) from reads_ch

    output:
    set replicateId, file('Aligned.sortedByCoord.out.bam'), file('Aligned.sortedByCoord.out.bam.bai') into aligned_bam_ch

    script:
    """
    # ngs-nf-dev Align reads to genome
    STAR --genomeDir $genomeDir \
        --readFilesIn $reads \
        --runThreadN ${task.cpus} \
        --readFilesCommand zcat \
        --outFilterType BySJout \
        --alignSJoverhangMin 8 \
        --alignSJDBoverhangMin 1 \
        --outFilterMismatchNmax 999

    # 2nd pass (improve alignments using table of splice junctions and create a new index)
    mkdir genomeDir
    STAR --runMode genomeGenerate \
        --genomeDir genomeDir \
        --genomeFastaFiles $genome \
        --sjdbFileChrStartEnd SJ.out.tab \
        --sjdbOverhang 75 \
        --runThreadN ${task.cpus}

    # Final read alignments
    STAR --genomeDir genomeDir \
        --readFilesIn $reads \
        --runThreadN ${task.cpus} \
        --readFilesCommand zcat \
        --outFilterType BySJout \
        --alignSJoverhangMin 8 \
        --alignSJDBoverhangMin 1 \
        --outFilterMismatchNmax 999 \
        --outSAMtype BAM SortedByCoordinate \
        --outSAMattrRGline ID:$replicateId LB:library PL:illumina PU:machine SM:GM12878

    # Index the BAM file
    samtools index Aligned.sortedByCoord.out.bam
    """
}

/*
 * END OF PART 2
 ******/


/**********
 * PART 3: GATK Prepare Mapped Reads
 *
 * Process 3: Split reads that contain Ns in their CIGAR string.
 * Creates k+1 new reads (where k is the number of N cigar elements)
 * that correspond to the segments of the original read beside/between
 * the splicing events represented by the Ns in the original CIGAR.
 */

process '3_rnaseq_gatk_splitNcigar' {
    tag "$replicateId"

    input:
    file genome from genome_file
    file index from genome_index_ch
    file genome_dict from genome_dict_ch
    set replicateId, file(bam), file(index) from aligned_bam_ch

    output:
    set replicateId, file('split.bam'), file('split.bai') into splitted_bam_ch

    script:
    """
    # SplitNCigarReads and reassign mapping qualities
    $GATK -T SplitNCigarReads \
        -R $genome -I $bam \
        -o split.bam \
        -rf ReassignOneMappingQuality \
        -RMQF 255 -RMQT 60 \
        -U ALLOW_N_CIGAR_READS \
        --fix_misencoded_quality_scores
    """
}

/*
 * END OF PART 3
 ******/


/***********
 * PART 4: GATK Base Quality Score Recalibration Workflow
 *
 * Process 4: Base recalibrate to detect systematic errors in base quality scores,
 * select unique alignments and index
 *
 */

process '4_rnaseq_gatk_recalibrate' {
    tag "$replicateId"

    input:
    file genome from genome_file
    file index from genome_index_ch
    file dict from genome_dict_ch
    set replicateId, file(bam), file(index) from splitted_bam_ch
    set file(variants_file), file(variants_file_index) from prepared_vcf_ch

    output:
    set sampleId, file("${replicateId}.final.uniq.bam"), file("${replicateId}.final.uniq.bam.bai") into (final_output_ch, bam_for_ASE_ch)

    script:
    sampleId = replicateId.replaceAll(/[12]$/,'')
    """
    # Indel Realignment and Base Recalibration
    $GATK -T BaseRecalibrator \
        --default_platform illumina \
        -cov ReadGroupCovariate \
        -cov QualityScoreCovariate \
        -cov CycleCovariate \
        -knownSites ${variants_file} \
        -cov ContextCovariate \
        -R ${genome} -I ${bam} \
        --downsampling_type NONE \
        -nct ${task.cpus} \
        -o final.rnaseq.grp

    $GATK -T PrintReads \
        -R ${genome} -I ${bam} \
        -BQSR final.rnaseq.grp \
        -nct ${task.cpus} \
        -o final.bam

    # Select only unique alignments, no multimaps
    (samtools view -H final.bam; samtools view final.bam| grep -w 'NH:i:1') \
        |samtools view -Sb - > ${replicateId}.final.uniq.bam

    # Index BAM files
    samtools index ${replicateId}.final.uniq.bam
    """
}

/*
 * END OF PART 4
 ******/


/***********
 * PART 5: GATK Variant Calling
 *
 * Process 5: Call variants with GATK HaplotypeCaller.
 * Calls SNPs and indels simultaneously via local de-novo assembly of
 * haplotypes in an active region.
 * Filter called variants with GATK VariantFiltration.
 */

process '5_rnaseq_call_variants' {
    tag "$sampleId"

    input:
    file genome from genome_file
    file index from genome_index_ch
    file dict from genome_dict_ch
    set sampleId, file(bam), file(bai) from final_output_ch.groupTuple()

    output:
    set sampleId, file('final.vcf') into vcf_files

    script:
    """
    # fix absolute path in dict file
    sed -i 's@UR:file:.*${genome}@UR:file:${genome}@g' $dict
    echo "${bam.join('\n')}" > bam.list

    # Variant calling
    $GATK -T HaplotypeCaller \
        -R $genome -I bam.list \
        -dontUseSoftClippedBases \
        -stand_call_conf 20.0 \
        -o output.gatk.vcf.gz

    # Variant filtering
    $GATK -T VariantFiltration \
        -R $genome -V output.gatk.vcf.gz \
        -window 35 -cluster 3 \
        -filterName FS -filter "FS > 30.0" \
        -filterName QD -filter "QD < 2.0" \
        -o final.vcf
    """
}

/*
 * END OF PART 5
 ******/


/***********
 * PART 6: Post-process variants file and prepare for Allele-Specific Expression and RNA Editing Analysis
 *
 * Process 6A: Post-process the VCF result
 */

process '6A_post_process_vcf' {
    tag "$sampleId"
    publishDir "$params.results/$sampleId"

    input:
    set sampleId, file('final.vcf') from vcf_files
    set file('filtered.recode.vcf.gz'), file('filtered.recode.vcf.gz.tbi') from prepared_vcf_ch
    output:
    set sampleId, file('final.vcf'), file('commonSNPs.diff.sites_in_files') into vcf_and_snps_ch

    script:
    '''
    grep -v '#' final.vcf | awk '$7~/PASS/' |perl -ne 'chomp($_); ($dp)=$_=~/DP\\=(\\d+)\\;/; if($dp>=8){print $_."\\n"};' > result.DP8.vcf

    vcftools --vcf result.DP8.vcf --gzdiff filtered.recode.vcf.gz --diff-site --out commonSNPs
    '''
}

/*
 * Process 6B: Prepare variants file for allele specific expression (ASE) analysis
 */

process '6B_prepare_vcf_for_ase' {
    tag "$sampleId"
    publishDir "$params.results/$sampleId"

    input:
    set sampleId, file('final.vcf'), file('commonSNPs.diff.sites_in_files') from vcf_and_snps_ch
    output:
    set sampleId, file('known_snps.vcf') into vcf_for_ASE
    file('AF.histogram.pdf') into gghist_pdfs

    script:
    '''
    awk 'BEGIN{OFS="\t"} $4~/B/{print $1,$2,$3}' commonSNPs.diff.sites_in_files > test.bed

    vcftools --vcf final.vcf --bed test.bed --recode --keep-INFO-all --stdout > known_snps.vcf

    grep -v '#' known_snps.vcf | awk -F '\\t' '{print $10}' \
        |awk -F ':' '{print $2}'|perl -ne 'chomp($_); \
        @v=split(/\\,/,$_); if($v[0]!=0 ||$v[1] !=0)\
        {print $v[1]/($v[1]+$v[0])."\\n"; }' |awk '$1!=1' \
        >AF.4R

    gghist.R -i AF.4R -o AF.histogram.pdf
    '''
}


/*
 * Group data for allele-specific expression.
 *
 * The `bam_for_ASE_ch` emits tuples having the following structure, holding the final BAM/BAI files:
 *
 *   ( sample_id, file_bam, file_bai )
 *
 * The `vcf_for_ASE` channel emits tuples having the following structure, holding the VCF file:
 *
 *   ( sample_id, output.vcf )
 *
 * The BAMs are grouped together and merged with VCFs having the same sample id. Finally
 * it creates a channel named `grouped_vcf_bam_bai_ch` emitting the following tuples:
 *
 *   ( sample_id, file_vcf, List[file_bam], List[file_bai] )
 */

bam_for_ASE_ch
    .groupTuple()
    .phase(vcf_for_ASE)
    .map{ left, right ->
        def sampleId = left[0]
        def bam = left[1]
        def bai = left[2]
        def vcf = right[1]
        tuple(sampleId, vcf, bam, bai)
    }
    .set { grouped_vcf_bam_bai_ch }


/*
 * Process 6C: Allele-Specific Expression analysis with GATK ASEReadCounter.
 * Calculates allele counts at a set of positions after applying
 * filters that are tuned for enabling allele-specific expression
 * (ASE) analysis
 */

process '6C_ASE_knownSNPs' {
    tag "$sampleId"
    publishDir "$params.results/$sampleId"

    input:
    file genome from genome_file
    file index from genome_index_ch
    file dict from genome_dict_ch
    set sampleId, file(vcf), file(bam), file(bai) from grouped_vcf_bam_bai_ch

    output:
    file "ASE.tsv"

    script:
    """
    echo "${bam.join('\n')}" > bam.list

    $GATK -R ${genome} \
        -T ASEReadCounter \
        -o ASE.tsv \
        -I bam.list \
        -sites ${vcf}
    """
}

50
samples/Nextflow/filenames/nextflow.config
Normal file
@@ -0,0 +1,50 @@
aws {
    region = 'eu-west-1'
}

cloud {
    autoscale {
        enabled = true
        minInstances = 3
        starvingTimeout = '2 min'
        terminateWhenIdle = true
    }
    imageId = 'ami-78ds78d'
    instanceProfile = 'MyRole'
    instanceType = 'r4.large'
    sharedStorageId = 'fs-76ds76s'
    spotPrice = 0.06
    subnetId = 'subnet-8d98d7s'
}

env {
    BAR = 'world'
    FOO = 'hola'
}

mail {
    from = 'paolo.ditommaso@gmail.com'
    smtp {
        auth = true
        host = 'email-smtp.us-east-1.amazonaws.com'
        password = 'my-secret'
        port = 587
        starttls {
            enable = true
            required = true
        }
        user = 'my-name'
    }
}

process {
    executor = 'slurm'
    queue = 'cn-el7'
    memory = '16GB'
    cpus = 8
    container = 'user/rnaseq-nf:latest'
}

trace {
    fields = 'task_id,name,status,attempt,exit,queue'
}

135
samples/Nextflow/rnaseq.nf
Normal file
135
samples/Nextflow/rnaseq.nf
Normal file
@@ -0,0 +1,135 @@
#!/usr/bin/env nextflow
/*
 * This is free and unencumbered software released into the public domain.
 *
 * Anyone is free to copy, modify, publish, use, compile, sell, or
 * distribute this software, either in source code form or as a compiled
 * binary, for any purpose, commercial or non-commercial, and by any
 * means.
 *
 * In jurisdictions that recognize copyright laws, the author or authors
 * of this software dedicate any and all copyright interest in the
 * software to the public domain. We make this dedication for the benefit
 * of the public at large and to the detriment of our heirs and
 * successors. We intend this dedication to be an overt act of
 * relinquishment in perpetuity of all present and future rights to this
 * software under copyright law.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 * IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 * For more information, please refer to <http://unlicense.org/>
 */


/*
 * Proof of concept of a RNAseq pipeline implemented with Nextflow
 *
 * Authors:
 * - Paolo Di Tommaso <paolo.ditommaso@gmail.com>
 * - Emilio Palumbo <emiliopalumbo@gmail.com>
 * - Evan Floden <evanfloden@gmail.com>
 */


params.reads = "$baseDir/data/ggal/*_{1,2}.fq"
params.transcriptome = "$baseDir/data/ggal/ggal_1_48850000_49020000.Ggal71.500bpflank.fa"
params.outdir = "."
params.multiqc = "$baseDir/multiqc"

log.info """\
         R N A S E Q - N F   P I P E L I N E
         ===================================
         transcriptome: ${params.transcriptome}
         reads        : ${params.reads}
         outdir       : ${params.outdir}
         """
         .stripIndent()


transcriptome_file = file(params.transcriptome)
multiqc_file = file(params.multiqc)


Channel
    .fromFilePairs( params.reads )
    .ifEmpty { error "Cannot find any reads matching: ${params.reads}" }
    .into { read_pairs_ch; read_pairs2_ch }


process index {
    tag "$transcriptome_file.simpleName"

    input:
    file transcriptome from transcriptome_file

    output:
    file 'index' into index_ch

    script:
    """
    salmon index --threads $task.cpus -t $transcriptome -i index
    """
}


process quant {
    tag "$pair_id"

    input:
    file index from index_ch
    set pair_id, file(reads) from read_pairs_ch

    output:
    file(pair_id) into quant_ch

    script:
    """
    salmon quant --threads $task.cpus --libType=U -i index -1 ${reads[0]} -2 ${reads[1]} -o $pair_id
    """
}

process fastqc {
    tag "FASTQC on $sample_id"

    input:
    set sample_id, file(reads) from read_pairs2_ch

    output:
    file("fastqc_${sample_id}_logs") into fastqc_ch


    script:
    """
    mkdir fastqc_${sample_id}_logs
    fastqc -o fastqc_${sample_id}_logs -f fastq -q ${reads}
    """
}


process multiqc {
    publishDir params.outdir, mode:'copy'

    input:
    file('*') from quant_ch.mix(fastqc_ch).collect()
    file(config) from multiqc_file

    output:
    file('multiqc_report.html')

    script:
    """
    cp $config/* .
    echo "custom_logo: \$PWD/logo.png" >> multiqc_config.yaml
    multiqc .
    """
}

workflow.onComplete {
    println ( workflow.success ? "\nDone! Open the following report in your browser --> $params.outdir/multiqc_report.html\n" : "Oops .. something went wrong" )
}
13
samples/PostCSS/sample.pcss
Normal file
@@ -0,0 +1,13 @@
@define-mixin size $size {
    width: $size;
}

$big: 100px;

/* Main block */
.block {
    &_logo {
        background: inline("./logo.png");
        @mixin size $big;
    }
}
146
samples/RPC/rpc.x
Normal file
@@ -0,0 +1,146 @@
/* rpc.x extracted from RFC5531 */

const RPC_VERS = 2;

enum auth_flavor {
    AUTH_NONE = 0,
    AUTH_SYS = 1,
    AUTH_SHORT = 2,
    AUTH_DH = 3,
    AUTH_KERB = 4, /* RFC2695 */
    AUTH_RSA = 5,
    RPCSEC_GSS = 6 /* RFC2203 */
    /* and more to be defined */
};

typedef opaque opaque_auth_body<400>;

struct opaque_auth {
    int flavor; /* may be "pseudo" value outside enum */
    opaque_auth_body body;
};

enum msg_type {
    CALL = 0,
    REPLY = 1
};

enum reply_stat {
    MSG_ACCEPTED = 0,
    MSG_DENIED = 1
};

enum accept_stat {
    SUCCESS = 0,       /* RPC executed successfully */
    PROG_UNAVAIL = 1,  /* remote hasn't exported program */
    PROG_MISMATCH = 2, /* remote can't support version # */
    PROC_UNAVAIL = 3,  /* program can't support procedure */
    GARBAGE_ARGS = 4,  /* procedure can't decode params */
    SYSTEM_ERR = 5     /* e.g. memory allocation failure */
};

enum reject_stat {
    RPC_MISMATCH = 0, /* RPC version number != 2 */
    AUTH_ERROR = 1    /* remote can't authenticate caller */
};

enum auth_stat {
    AUTH_OK = 0, /* success */
    /*
     * failed at remote end
     */
    AUTH_BADCRED = 1,      /* bad credential (seal broken) */
    AUTH_REJECTEDCRED = 2, /* client must begin new session */
    AUTH_BADVERF = 3,      /* bad verifier (seal broken) */
    AUTH_REJECTEDVERF = 4, /* verifier expired or replayed */
    AUTH_TOOWEAK = 5,      /* rejected for security reasons */
    /*
     * failed locally
     */
    AUTH_INVALIDRESP = 6, /* bogus response verifier */
    AUTH_FAILED = 7,      /* reason unknown */
    /*
     * AUTH_KERB errors; deprecated. See [RFC2695]
     */
    AUTH_KERB_GENERIC = 8, /* kerberos generic error */
    AUTH_TIMEEXPIRE = 9,   /* time of credential expired */
    AUTH_TKT_FILE = 10,    /* problem with ticket file */
    AUTH_DECODE = 11,      /* can't decode authenticator */
    AUTH_NET_ADDR = 12,    /* wrong net address in ticket */
    /*
     * RPCSEC_GSS GSS related errors
     */
    RPCSEC_GSS_CREDPROBLEM = 13, /* no credentials for user */
    RPCSEC_GSS_CTXPROBLEM = 14   /* problem with context */
};

struct rpc_msg {
    unsigned int xid;
    union rpc_msg_body body;
};

union rpc_msg_body switch (msg_type mtype) {
case CALL:
    call_body cbody;
case REPLY:
    reply_body rbody;
};

struct call_body {
    unsigned int rpcvers; /* must be equal to two (2) */
    unsigned int prog;
    unsigned int vers;
    unsigned int proc;
    opaque_auth cred;
    opaque_auth verf;
    /* procedure-specific parameters start here */
};

union reply_body switch (reply_stat stat) {
case MSG_ACCEPTED:
    accepted_reply areply;
case MSG_DENIED:
    rejected_reply rreply;
} /*reply*/;

struct accepted_reply {
    opaque_auth verf;
    union accepted_reply_data reply_data;
};

union accepted_reply_data switch (accept_stat stat) {
case SUCCESS:
    void /* opaque results[0] */;
    /*
     * procedure-specific results start here
     */
case PROG_MISMATCH:
    struct {
        unsigned int low;
        unsigned int high;
    } mismatch_info;
default:
    /*
     * Void. Cases include PROG_UNAVAIL, PROC_UNAVAIL,
     * GARBAGE_ARGS, and SYSTEM_ERR.
     */
    void;
};

union rejected_reply switch (reject_stat stat) {
case RPC_MISMATCH:
    struct {
        unsigned int low;
        unsigned int high;
    } mismatch_info;
case AUTH_ERROR:
    auth_stat auth_stat; /* renamed to avoid conflict with discriminator */
};

struct authsys_parms {
    unsigned int stamp;
    string machinename<255>;
    unsigned int uid;
    unsigned int gid;
    unsigned int gids<16>;
};
228
samples/RPC/rusers.x
Normal file
@@ -0,0 +1,228 @@
/*
 * Copyright (c) 2010, Oracle America, Inc.
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 *       notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 *       copyright notice, this list of conditions and the following
 *       disclaimer in the documentation and/or other materials
 *       provided with the distribution.
 *     * Neither the name of the "Oracle America, Inc." nor the names of its
 *       contributors may be used to endorse or promote products derived
 *       from this software without specific prior written permission.
 *
 *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
 *   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
 *   COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
 *   INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 *   DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
 *   GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 *   INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 *   WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
 *   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */
%/*
% * Find out about remote users
% */

const RUSERS_MAXUSERLEN = 32;
const RUSERS_MAXLINELEN = 32;
const RUSERS_MAXHOSTLEN = 257;

struct rusers_utmp {
    string ut_user<RUSERS_MAXUSERLEN>; /* aka ut_name */
    string ut_line<RUSERS_MAXLINELEN>; /* device */
    string ut_host<RUSERS_MAXHOSTLEN>; /* host user logged on from */
    int ut_type;                       /* type of entry */
    int ut_time;                       /* time entry was made */
    unsigned int ut_idle;              /* minutes idle */
};

typedef rusers_utmp utmp_array<>;

#ifdef RPC_HDR
%
%/*
% * Values for ut_type field above.
% */
#endif
const RUSERS_EMPTY = 0;
const RUSERS_RUN_LVL = 1;
const RUSERS_BOOT_TIME = 2;
const RUSERS_OLD_TIME = 3;
const RUSERS_NEW_TIME = 4;
const RUSERS_INIT_PROCESS = 5;
const RUSERS_LOGIN_PROCESS = 6;
const RUSERS_USER_PROCESS = 7;
const RUSERS_DEAD_PROCESS = 8;
const RUSERS_ACCOUNTING = 9;

program RUSERSPROG {

    version RUSERSVERS_3 {
        int
        RUSERSPROC_NUM(void) = 1;

        utmp_array
        RUSERSPROC_NAMES(void) = 2;

        utmp_array
        RUSERSPROC_ALLNAMES(void) = 3;
    } = 3;

} = 100002;

#ifdef RPC_HDR
%
%
%#ifdef __cplusplus
%extern "C" {
%#endif
%
%#include <rpc/xdr.h>
%
%/*
% * The following structures are used by version 2 of the rusersd protocol.
% * They were not developed with rpcgen, so they do not appear as RPCL.
% */
%
%#define RUSERSVERS_IDLE 2
%#define RUSERSVERS 3 /* current version */
%#define MAXUSERS 100
%
%/*
% * This is the structure used in version 2 of the rusersd RPC service.
% * It corresponds to the utmp structure for BSD systems.
% */
%struct ru_utmp {
%  char ut_line[8];  /* tty name */
%  char ut_name[8];  /* user id */
%  char ut_host[16]; /* host name, if remote */
%  long int ut_time; /* time on */
%};
%
%struct utmparr {
%  struct ru_utmp **uta_arr;
%  int uta_cnt;
%};
%typedef struct utmparr utmparr;
%
%extern bool_t xdr_utmparr (XDR *xdrs, struct utmparr *objp) __THROW;
%
%struct utmpidle {
%  struct ru_utmp ui_utmp;
%  unsigned int ui_idle;
%};
%
%struct utmpidlearr {
%  struct utmpidle **uia_arr;
%  int uia_cnt;
%};
%
%extern bool_t xdr_utmpidlearr (XDR *xdrs, struct utmpidlearr *objp) __THROW;
%
%#ifdef __cplusplus
%}
%#endif
#endif


#ifdef RPC_XDR
%bool_t xdr_utmp (XDR *xdrs, struct ru_utmp *objp);
%
%bool_t
%xdr_utmp (XDR *xdrs, struct ru_utmp *objp)
%{
%  /* Since the fields are char foo [xxx], we should not free them. */
%  if (xdrs->x_op != XDR_FREE)
%    {
%      char *ptr;
%      unsigned int size;
%      ptr = objp->ut_line;
%      size = sizeof (objp->ut_line);
%      if (!xdr_bytes (xdrs, &ptr, &size, size)) {
%        return (FALSE);
%      }
%      ptr = objp->ut_name;
%      size = sizeof (objp->ut_name);
%      if (!xdr_bytes (xdrs, &ptr, &size, size)) {
%        return (FALSE);
%      }
%      ptr = objp->ut_host;
%      size = sizeof (objp->ut_host);
%      if (!xdr_bytes (xdrs, &ptr, &size, size)) {
%        return (FALSE);
%      }
%    }
%  if (!xdr_long(xdrs, &objp->ut_time)) {
%    return (FALSE);
%  }
%  return (TRUE);
%}
%
%bool_t xdr_utmpptr(XDR *xdrs, struct ru_utmp **objpp);
%
%bool_t
%xdr_utmpptr (XDR *xdrs, struct ru_utmp **objpp)
%{
%  if (!xdr_reference(xdrs, (char **) objpp, sizeof (struct ru_utmp),
%                     (xdrproc_t) xdr_utmp)) {
%    return (FALSE);
%  }
%  return (TRUE);
%}
%
%bool_t
%xdr_utmparr (XDR *xdrs, struct utmparr *objp)
%{
%  if (!xdr_array(xdrs, (char **)&objp->uta_arr, (u_int *)&objp->uta_cnt,
%                 MAXUSERS, sizeof(struct ru_utmp *),
%                 (xdrproc_t) xdr_utmpptr)) {
%    return (FALSE);
%  }
%  return (TRUE);
%}
%
%bool_t xdr_utmpidle(XDR *xdrs, struct utmpidle *objp);
%
%bool_t
%xdr_utmpidle (XDR *xdrs, struct utmpidle *objp)
%{
%  if (!xdr_utmp(xdrs, &objp->ui_utmp)) {
%    return (FALSE);
%  }
%  if (!xdr_u_int(xdrs, &objp->ui_idle)) {
%    return (FALSE);
%  }
%  return (TRUE);
%}
%
%bool_t xdr_utmpidleptr(XDR *xdrs, struct utmpidle **objp);
%
%bool_t
%xdr_utmpidleptr (XDR *xdrs, struct utmpidle **objpp)
%{
%  if (!xdr_reference(xdrs, (char **) objpp, sizeof (struct utmpidle),
%                     (xdrproc_t) xdr_utmpidle)) {
%    return (FALSE);
%  }
%  return (TRUE);
%}
%
%bool_t
%xdr_utmpidlearr (XDR *xdrs, struct utmpidlearr *objp)
%{
%  if (!xdr_array(xdrs, (char **)&objp->uia_arr, (u_int *)&objp->uia_cnt,
%                 MAXUSERS, sizeof(struct utmpidle *),
%                 (xdrproc_t) xdr_utmpidleptr)) {
%    return (FALSE);
%  }
%  return (TRUE);
%}
#endif
311
samples/RPC/yp.x
Normal file
@@ -0,0 +1,311 @@
/* @(#)yp.x 2.1 88/08/01 4.0 RPCSRC */

/*
 * Copyright (c) 2010, Oracle America, Inc.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 *       notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 *       copyright notice, this list of conditions and the following
 *       disclaimer in the documentation and/or other materials
 *       provided with the distribution.
 *     * Neither the name of the "Oracle America, Inc." nor the names of its
 *       contributors may be used to endorse or promote products derived
 *       from this software without specific prior written permission.
 *
 *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
 *   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
 *   COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
 *   INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 *   DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
 *   GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 *   INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 *   WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
 *   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

/*
 * Protocol description file for the Yellow Pages Service
 */

const YPMAXRECORD = 1024;
const YPMAXDOMAIN = 64;
const YPMAXMAP = 64;
const YPMAXPEER = 64;


enum ypstat {
    YP_TRUE = 1,
    YP_NOMORE = 2,
    YP_FALSE = 0,
    YP_NOMAP = -1,
    YP_NODOM = -2,
    YP_NOKEY = -3,
    YP_BADOP = -4,
    YP_BADDB = -5,
    YP_YPERR = -6,
    YP_BADARGS = -7,
    YP_VERS = -8
};


enum ypxfrstat {
    YPXFR_SUCC = 1,
    YPXFR_AGE = 2,
    YPXFR_NOMAP = -1,
    YPXFR_NODOM = -2,
    YPXFR_RSRC = -3,
    YPXFR_RPC = -4,
    YPXFR_MADDR = -5,
    YPXFR_YPERR = -6,
    YPXFR_BADARGS = -7,
    YPXFR_DBM = -8,
    YPXFR_FILE = -9,
    YPXFR_SKEW = -10,
    YPXFR_CLEAR = -11,
    YPXFR_FORCE = -12,
    YPXFR_XFRERR = -13,
    YPXFR_REFUSED = -14
};


typedef string domainname<YPMAXDOMAIN>;
typedef string mapname<YPMAXMAP>;
typedef string peername<YPMAXPEER>;
typedef opaque keydat<YPMAXRECORD>;
typedef opaque valdat<YPMAXRECORD>;


struct ypmap_parms {
    domainname domain;
    mapname map;
    unsigned int ordernum;
    peername peer;
};

struct ypreq_key {
    domainname domain;
    mapname map;
    keydat key;
};

struct ypreq_nokey {
    domainname domain;
    mapname map;
};

struct ypreq_xfr {
    ypmap_parms map_parms;
    unsigned int transid;
    unsigned int prog;
    unsigned int port;
};


struct ypresp_val {
    ypstat stat;
    valdat val;
};

struct ypresp_key_val {
    ypstat stat;
#ifdef STUPID_SUN_BUG
    /* This is the form as distributed by Sun. But even the Sun NIS
       servers expect the values in the other order. So their
       implementation somehow must change the order internally. We
       don't want to follow this bad example since the user should be
       able to use rpcgen on this file. */
    keydat key;
    valdat val;
#else
    valdat val;
    keydat key;
#endif
};


struct ypresp_master {
    ypstat stat;
    peername peer;
};

struct ypresp_order {
    ypstat stat;
    unsigned int ordernum;
};

union ypresp_all switch (bool more) {
case TRUE:
    ypresp_key_val val;
case FALSE:
    void;
};

struct ypresp_xfr {
    unsigned int transid;
    ypxfrstat xfrstat;
};

struct ypmaplist {
    mapname map;
    ypmaplist *next;
};

struct ypresp_maplist {
    ypstat stat;
    ypmaplist *maps;
};

enum yppush_status {
    YPPUSH_SUCC = 1,     /* Success */
    YPPUSH_AGE = 2,      /* Master's version not newer */
    YPPUSH_NOMAP = -1,   /* Can't find server for map */
    YPPUSH_NODOM = -2,   /* Domain not supported */
    YPPUSH_RSRC = -3,    /* Local resource alloc failure */
    YPPUSH_RPC = -4,     /* RPC failure talking to server */
    YPPUSH_MADDR = -5,   /* Can't get master address */
    YPPUSH_YPERR = -6,   /* YP server/map db error */
    YPPUSH_BADARGS = -7, /* Request arguments bad */
    YPPUSH_DBM = -8,     /* Local dbm operation failed */
    YPPUSH_FILE = -9,    /* Local file I/O operation failed */
    YPPUSH_SKEW = -10,   /* Map version skew during transfer */
    YPPUSH_CLEAR = -11,  /* Can't send "Clear" req to local ypserv */
    YPPUSH_FORCE = -12,  /* No local order number in map  use -f flag. */
    YPPUSH_XFRERR = -13, /* ypxfr error */
    YPPUSH_REFUSED = -14 /* Transfer request refused by ypserv */
};

struct yppushresp_xfr {
    unsigned transid;
    yppush_status status;
};

/*
 * Response structure and overall result status codes. Success and failure
 * represent two separate response message types.
 */

enum ypbind_resptype {
    YPBIND_SUCC_VAL = 1,
    YPBIND_FAIL_VAL = 2
};

struct ypbind_binding {
    opaque ypbind_binding_addr[4]; /* In network order */
    opaque ypbind_binding_port[2]; /* In network order */
};

union ypbind_resp switch (ypbind_resptype ypbind_status) {
case YPBIND_FAIL_VAL:
    unsigned ypbind_error;
case YPBIND_SUCC_VAL:
    ypbind_binding ypbind_bindinfo;
};

/* Detailed failure reason codes for response field ypbind_error*/

const YPBIND_ERR_ERR = 1;    /* Internal error */
const YPBIND_ERR_NOSERV = 2; /* No bound server for passed domain */
const YPBIND_ERR_RESC = 3;   /* System resource allocation failure */


/*
 * Request data structure for ypbind "Set domain" procedure.
 */
struct ypbind_setdom {
    domainname ypsetdom_domain;
    ypbind_binding ypsetdom_binding;
    unsigned ypsetdom_vers;
};


/*
 * YP access protocol
 */
program YPPROG {
    version YPVERS {
        void
        YPPROC_NULL(void) = 0;

        bool
        YPPROC_DOMAIN(domainname) = 1;

        bool
        YPPROC_DOMAIN_NONACK(domainname) = 2;

        ypresp_val
        YPPROC_MATCH(ypreq_key) = 3;

        ypresp_key_val
        YPPROC_FIRST(ypreq_key) = 4;

        ypresp_key_val
        YPPROC_NEXT(ypreq_key) = 5;

        ypresp_xfr
        YPPROC_XFR(ypreq_xfr) = 6;

        void
        YPPROC_CLEAR(void) = 7;

        ypresp_all
        YPPROC_ALL(ypreq_nokey) = 8;

        ypresp_master
        YPPROC_MASTER(ypreq_nokey) = 9;

        ypresp_order
        YPPROC_ORDER(ypreq_nokey) = 10;

        ypresp_maplist
        YPPROC_MAPLIST(domainname) = 11;
    } = 2;
} = 100004;


/*
 * YPPUSHPROC_XFRRESP is the callback routine for result of YPPROC_XFR
 */
program YPPUSH_XFRRESPPROG {
    version YPPUSH_XFRRESPVERS {
        void
        YPPUSHPROC_NULL(void) = 0;

#ifdef STUPID_SUN_BUG
        /* This is the form as distributed by Sun. But even
           the Sun NIS servers expect the values in the other
           order. So their implementation somehow must change
           the order internally. We don't want to follow this
           bad example since the user should be able to use
           rpcgen on this file. */
        yppushresp_xfr
        YPPUSHPROC_XFRRESP(void) = 1;
#else
        void
        YPPUSHPROC_XFRRESP(yppushresp_xfr) = 1;
#endif
    } = 1;
} = 0x40000000; /* transient: could be anything up to 0x5fffffff */

/*
 * YP binding protocol
 */
program YPBINDPROG {
    version YPBINDVERS {
        void
        YPBINDPROC_NULL(void) = 0;

        ypbind_resp
        YPBINDPROC_DOMAIN(domainname) = 1;

        void
        YPBINDPROC_SETDOM(ypbind_setdom) = 2;
    } = 2;
} = 100007;
205
samples/Scala/car-ride.kojo
Normal file
@@ -0,0 +1,205 @@
// Kojo examples (files ending in .kojo) are licensed under The MIT License:

// Copyright (C) 2009-2018 Lalit Pant <pant.lalit@gmail.com> and the Kojo Dev Team.

// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:

// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.

// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.

// Use the four arrow keys to avoid the blue cars
// You gain energy every second, and lose energy for every collision
// You lose if your energy drops below zero, or you hit the edges of the screen
// You win if you stay alive for a minute
switchToDefault2Perspective()
val carHeight = 100
val markerHeight = 80
// The collision polygon for the (very similarly sized) car images car1.png and car2.png
val carE = trans(2, 14) -> Picture {
    repeat(2) {
        forward(70); right(45); forward(20); right(45)
        forward(18); right(45); forward(20); right(45)
    }
}
def car(img: String) = PicShape.image(img, carE)

val cars = collection.mutable.Map.empty[Picture, Vector2D]
val carSpeed = 3
val pResponse = 3
var pVel = Vector2D(0, 0)
var disabledTime = 0L

val bplayer = newMp3Player
val cplayer = newMp3Player

def createCar() {
    val c = trans(cb.x + random(cb.width.toInt), cb.y + cb.height) -> car("/media/car-ride/car2.png")
    draw(c)
    cars += c -> Vector2D(0, -carSpeed)
}
val markers = collection.mutable.Set.empty[Picture]
def createMarker() {
    val mwidth = 20
    val m = fillColor(white) * penColor(white) *
        trans(cb.x + cb.width / 2 - mwidth / 2, cb.y + cb.height) -> PicShape.rect(markerHeight, mwidth)
    draw(m)
    markers += m
}

cleari()
drawStage(darkGray)
val cb = canvasBounds
val player = car("/media/car-ride/car1.png")
draw(player)
drawAndHide(carE)

timer(1200) {
    createMarker()
    createCar()
}

animate {
    player.moveToFront()
    val enabled = epochTimeMillis - disabledTime > 300
    if (enabled) {
        if (isKeyPressed(Kc.VK_LEFT)) {
            pVel = Vector2D(-pResponse, 0)
            player.transv(pVel)
        }
        if (isKeyPressed(Kc.VK_RIGHT)) {
            pVel = Vector2D(pResponse, 0)
            player.transv(pVel)
        }
        if (isKeyPressed(Kc.VK_UP)) {
            pVel = Vector2D(0, pResponse)
            player.transv(pVel)
            if (!isMp3Playing) {
                playMp3Sound("/media/car-ride/car-accel.mp3")
            }
        }
        else {
            stopMp3()
        }
        if (isKeyPressed(Kc.VK_DOWN)) {
            pVel = Vector2D(0, -pResponse)
            player.transv(pVel)
            if (!bplayer.isMp3Playing) {
                bplayer.playMp3Sound("/media/car-ride/car-brake.mp3")
            }
        }
        else {
            bplayer.stopMp3()
        }
    }
    else {
        player.transv(pVel)
    }

    if (player.collidesWith(stageLeft) || player.collidesWith(stageRight)) {
        cplayer.playMp3Sound("/media/car-ride/car-crash.mp3")
        player.setOpacity(0.5)
        drawMessage("You Crashed!", red)
        stopAnimation()
    }
    else if (player.collidesWith(stageTop)) {
        pVel = Vector2D(0, -pResponse)
        player.transv(pVel * 2)
        disabledTime = epochTimeMillis
    }
    else if (player.collidesWith(stageBot)) {
        pVel = Vector2D(0, pResponse)
        player.transv(pVel * 2)
        disabledTime = epochTimeMillis
    }

    cars.foreach { cv =>
        val (c, vel) = cv
        c.moveToFront()
        if (player.collidesWith(c)) {
            cplayer.playMp3Sound("/media/car-ride/car-crash.mp3")
            pVel = bouncePicVectorOffPic(player, pVel - vel, c) / 2
            player.transv(pVel * 3)
            c.transv(-pVel * 3)
            disabledTime = epochTimeMillis
            updateEnergyCrash()
        }
        else {
            val newVel = Vector2D(vel.x + randomDouble(1) / 2 - 0.25, vel.y)
            cars += c -> newVel
            c.transv(newVel)
        }
        if (c.position.y + carHeight < cb.y) {
            c.erase()
            cars -= c
        }
    }
    markers.foreach { m =>
        m.translate(0, -carSpeed * 2)
        if (m.position.y + markerHeight < cb.y) {
            m.erase()
            markers -= m
        }
    }
}

var energyLevel = 0
def energyText = s"Energy: $energyLevel"
val energyLabel = trans(cb.x + 10, cb.y + cb.height - 10) -> PicShape.textu(energyText, 20, blue)
def updateEnergyTick() {
    energyLevel += 2
    energyLabel.update(energyText)
}
def updateEnergyCrash() {
    energyLevel -= 10
    energyLabel.update(energyText)
    if (energyLevel < 0) {
        drawMessage("You're out of energy! You Lose", red)
        stopAnimation()
    }
}

def drawMessage(m: String, c: Color) {
    val te = textExtent(m, 30)
    val pic = penColor(c) * trans(cb.x + (cb.width - te.width) / 2, 0) -> PicShape.text(m, 30)
    draw(pic)
}

def manageGameScore() {
    var gameTime = 0
    val timeLabel = trans(cb.x + 10, cb.y + 50) -> PicShape.textu(gameTime, 20, blue)
    draw(timeLabel)
    draw(energyLabel)
    timeLabel.forwardInputTo(stageArea)

    timer(1000) {
        gameTime += 1
        timeLabel.update(gameTime)
        updateEnergyTick()

        if (gameTime == 60) {
            drawMessage("Time up! You Win", green)
            stopAnimation()
        }
    }
}

manageGameScore()
playMp3Loop("/media/car-ride/car-move.mp3")
activateCanvas()

// Car images, via google images, from http://motor-kid.com/race-cars-top-view.html
// and www.carinfopic.com
// Car sounds from http://soundbible.com
53
samples/Scala/fib-tree.kojo
Normal file
@@ -0,0 +1,53 @@
// Kojo examples (files ending in .kojo) are licensed under The MIT License:

// Copyright (C) 2009-2018 Lalit Pant <pant.lalit@gmail.com> and the Kojo Dev Team.

// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:

// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.

// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.


// Example from http://lalitpant.blogspot.in/2012/05/recursive-drawing-with-kojo.html
// This example is based on Kojo Pictures
val size = 100
def S = Picture {
    repeat (4) {
        forward(size)
        right()
    }
}

def stem = scale(0.13, 1) * penColor(noColor) * fillColor(black) -> S

clear()
setBackground(Color(255, 170, 29))
invisible()

def drawing(n: Int): Picture = {
    if (n == 1)
        stem
    else
        GPics(stem,
            trans(2, size-5) * brit(0.05) -> GPics(
                rot(25) * scale(0.72) -> drawing(n-1),
                rot(25) * trans(0, size * 0.72) * rot(-75) * scale(0.55) -> drawing(n-1)
            )
        )
}

val pic = trans(0, -100) -> drawing(10)
draw(pic)
84
samples/Scala/turtle-controller.kojo
Normal file
@@ -0,0 +1,84 @@
// Kojo examples (files ending in .kojo) are licensed under The MIT License:

// Copyright (C) 2009-2018 Lalit Pant <pant.lalit@gmail.com> and the Kojo Dev Team.

// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:

// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.

// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.

// Run this program to get basic control over the turtle via some buttons
// Useful for smaller kids

// Change these values to tweak the behavior of the buttons
val fdStep = 50
val fdStep2 = 10
val rtStep = 90
val rtStep2 = 10
val bgColor = white
val sBgColor = "white"
// End tweak region

clear()
clearOutput()
beamsOn()
val width = canvasBounds.width
val height = canvasBounds.height

setBackground(bgColor)
setPenColor(purple)

def action(code: String) {
    interpret(code); println(code)
}

val cmd = Map(
    "forward1" -> s"forward($fdStep)",
    "forward2" -> s"forward($fdStep2)",
    "hop1" -> s"hop($fdStep)",
    "hop2" -> s"hop($fdStep2)",
    "right1" -> s"right($rtStep)",
    "right2" -> s"right($rtStep2)",
    "left1" -> s"left($rtStep)",
    "left2" -> s"left($rtStep2)"
)

def eraseCmds(n: Int) =
    s"saveStyle(); setPenColor($sBgColor); setPenThickness(4); back($n); restoreStyle()"

def button(forcmd: String) = PicShape.button(cmd(forcmd)) { action(cmd(forcmd)) }

val panel = trans(-width / 2, -height / 2) * scale(1.4) -> VPics(
    HPics(
        button("left2"),
        button("forward2"),
        button("right2"),
        button("hop2"),
        PicShape.button(s"erase($fdStep2)") { action(eraseCmds(fdStep2)) }
    ),
    HPics(
        button("left1"),
        button("forward1"),
        button("right1"),
        button("hop1"),
        PicShape.button(s"erase($fdStep)") { action(eraseCmds(fdStep)) }
    )
)

draw(panel)
println("// Paste the generated program below into the script editor")
println("// and run it -- to reproduce your drawing")
println("clear()")
10
samples/SugarSS/sample.sss
Normal file
@@ -0,0 +1,10 @@
@define-mixin size $size
  width: $size

$big: 100px

// Main block
.block
  &_logo
    background: inline("./logo.png")
    @mixin size $big
23
samples/YARA/OfExample.yar
Normal file
@@ -0,0 +1,23 @@
rule OfExample2
{
    strings:
        $foo1 = "foo1"
        $foo2 = "foo2"
        $foo3 = "foo3"

    condition:
        2 of ($foo*) // equivalent to 2 of ($foo1,$foo2,$foo3)
}

rule OfExample3
{
    strings:
        $foo1 = "foo1"
        $foo2 = "foo2"

        $bar1 = "bar1"
        $bar2 = "bar2"

    condition:
        3 of ($foo*,$bar1,$bar2)
}
13
samples/YARA/example.yara
Normal file
@@ -0,0 +1,13 @@
rule silent_banker : banker
{
    meta:
        description = "This is just an example"
        thread_level = 3
        in_the_wild = true
    strings:
        $a = {6A 40 68 00 30 00 00 6A 14 8D 91}
        $b = {8D 4D B0 2B C1 83 C0 27 99 6A 4E 59 F7 F9}
        $c = "UVODFRYSIHLNWPEJXQZAKCBGMT"
    condition:
        $a or $b or $c
}
1
samples/YARA/true.yar
Normal file
@@ -0,0 +1 @@
rule test { condition: true }
@@ -1,6 +1,7 @@
#!/usr/bin/env ruby

require "optparse"
require "open3"

ROOT = File.expand_path("../../", __FILE__)

@@ -42,6 +43,17 @@ def log(msg)
  puts msg if $verbose
end

def command(*args)
  log "$ #{args.join(' ')}"
  output, status = Open3.capture2e(*args)
  if !status.success?
    output.each_line do |line|
      log "  > #{line}"
    end
    warn "Command failed. Aborting."
    exit 1
  end
end

usage = """Usage:
  #{$0} [-v|--verbose] [--replace grammar] url
@@ -51,12 +63,12 @@ Examples:
"""

$replace = nil
$verbose = false
$verbose = true

OptionParser.new do |opts|
  opts.banner = usage
  opts.on("-v", "--verbose", "Print verbose feedback to STDOUT") do
    $verbose = true
  opts.on("-q", "--quiet", "Do not print output unless there's a failure") do
    $verbose = false
  end
  opts.on("-rSUBMODULE", "--replace=SUBMODULE", "Replace an existing grammar submodule.") do |name|
    $replace = name
@@ -82,23 +94,22 @@ Dir.chdir(ROOT)

if repo_old
  log "Deregistering: #{repo_old}"
  `git submodule deinit #{repo_old}`
  `git rm -rf #{repo_old}`
  `script/convert-grammars`
  command('git', 'submodule', 'deinit', repo_old)
  command('git', 'rm', '-rf', repo_old)
  command('script/grammar-compiler', 'update', '-f')
end

log "Registering new submodule: #{repo_new}"
`git submodule add -f #{https} #{repo_new}`
exit 1 if $?.exitstatus > 0
`script/convert-grammars --add #{repo_new}`
command('git', 'submodule', 'add', '-f', https, repo_new)
command('script/grammar-compiler', 'add', repo_new)

log "Confirming license"
if repo_old
  `script/licensed`
  command('script/licensed')
else
  `script/licensed --module "#{repo_new}"`
  command('script/licensed', '--module', repo_new)
end

log "Updating grammar documentation in vendor/README.md"
`bundle exec rake samples`
`script/list-grammars`
command('bundle', 'exec', 'rake', 'samples')
command('script/list-grammars')

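The hunks above swap backtick invocations for the new `command` helper, which hands Open3.capture2e an argument vector (so no shell is involved) and aborts on the first failure. A minimal standalone sketch of the same pattern, outside Linguist (the `run` name here is made up for illustration):

#!/usr/bin/env ruby
require "open3"

# Run a command from an argv array; without a shell, arguments containing
# spaces or metacharacters are passed through literally.
def run(*args)
  output, status = Open3.capture2e(*args)  # stdout and stderr combined
  unless status.success?
    warn output
    abort "Command failed: #{args.join(' ')}"
  end
  output
end

puts run("git", "--version")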
9
script/build-grammars-tarball
Executable file
@@ -0,0 +1,9 @@
#!/bin/sh

set -e
cd "$(dirname "$0")/.."
rm -rf ./linguist-grammars
./script/grammar-compiler compile -o linguist-grammars || true
tar -zcvf linguist-grammars.tar.gz linguist-grammars

@@ -1,319 +0,0 @@
#!/usr/bin/env ruby

require 'bundler/setup'
require 'json'
require 'net/http'
require 'optparse'
require 'plist'
require 'set'
require 'thread'
require 'tmpdir'
require 'uri'
require 'yaml'

ROOT = File.expand_path("../..", __FILE__)
GRAMMARS_PATH = File.join(ROOT, "grammars")
SOURCES_FILE = File.join(ROOT, "grammars.yml")
CSONC = File.join(ROOT, "node_modules", ".bin", "csonc")

$options = {
  :add => false,
  :install => true,
  :output => SOURCES_FILE,
  :remote => true,
}

class SingleFile
  def initialize(path)
    @path = path
  end

  def url
    @path
  end

  def fetch(tmp_dir)
    [@path]
  end
end

class DirectoryPackage
  def self.fetch(dir)
    Dir["#{dir}/**/*"].select do |path|
      case File.extname(path.downcase)
      when '.plist'
        path.split('/')[-2] == 'Syntaxes'
      when '.tmlanguage', '.yaml-tmlanguage'
        true
      when '.cson', '.json'
        path.split('/')[-2] == 'grammars'
      else
        false
      end
    end
  end

  def initialize(directory)
    @directory = directory
  end

  def url
    @directory
  end

  def fetch(tmp_dir)
    self.class.fetch(File.join(ROOT, @directory))
  end
end

class TarballPackage
  def self.fetch(tmp_dir, url)
    `curl --silent --location --max-time 30 --output "#{tmp_dir}/archive" "#{url}"`
    raise "Failed to fetch GH package: #{url} #{$?.to_s}" unless $?.success?

    output = File.join(tmp_dir, 'extracted')
    Dir.mkdir(output)
    `tar -C "#{output}" -xf "#{tmp_dir}/archive"`
    raise "Failed to uncompress tarball: #{tmp_dir}/archive (from #{url}) #{$?.to_s}" unless $?.success?

    DirectoryPackage.fetch(output)
  end

  attr_reader :url

  def initialize(url)
    @url = url
  end

  def fetch(tmp_dir)
    self.class.fetch(tmp_dir, url)
  end
end

class SingleGrammar
  attr_reader :url

  def initialize(url)
    @url = url
  end

  def fetch(tmp_dir)
    filename = File.join(tmp_dir, File.basename(url))
    `curl --silent --location --max-time 10 --output "#{filename}" "#{url}"`
    raise "Failed to fetch grammar: #{url}: #{$?.to_s}" unless $?.success?
    [filename]
  end
end

class SVNPackage
  attr_reader :url

  def initialize(url)
    @url = url
  end

  def fetch(tmp_dir)
    `svn export -q "#{url}/Syntaxes" "#{tmp_dir}/Syntaxes"`
    raise "Failed to export SVN repository: #{url}: #{$?.to_s}" unless $?.success?
    Dir["#{tmp_dir}/Syntaxes/*.{plist,tmLanguage,tmlanguage,YAML-tmLanguage}"]
  end
end

class GitHubPackage
  def self.parse_url(url)
    url, ref = url.split("@", 2)
    path = URI.parse(url).path.split('/')
    [path[1], path[2].chomp('.git'), ref || "master"]
  end

  attr_reader :user
  attr_reader :repo
  attr_reader :ref

  def initialize(url)
    @user, @repo, @ref = self.class.parse_url(url)
  end

  def url
    suffix = "@#{ref}" unless ref == "master"
    "https://github.com/#{user}/#{repo}#{suffix}"
  end

  def fetch(tmp_dir)
    url = "https://github.com/#{user}/#{repo}/archive/#{ref}.tar.gz"
    TarballPackage.fetch(tmp_dir, url)
  end
end

def load_grammar(path)
  case File.extname(path.downcase)
  when '.plist', '.tmlanguage'
    Plist::parse_xml(path)
  when '.yaml-tmlanguage'
    content = File.read(path)
    # Attempt to parse YAML file even if it has a YAML 1.2 header
    if content.lines[0] =~ /^%YAML[ :]1\.2/
      content = content.lines[1..-1].join
    end
    begin
      YAML.load(content)
    rescue Psych::SyntaxError => e
      $stderr.puts "Failed to parse YAML grammar '#{path}'"
    end
  when '.cson'
    cson = `"#{CSONC}" "#{path}"`
    raise "Failed to convert CSON grammar '#{path}': #{$?.to_s}" unless $?.success?
    JSON.parse(cson)
  when '.json'
    JSON.parse(File.read(path))
  else
    raise "Invalid document type #{path}"
  end
end

def load_grammars(tmp_dir, source, all_scopes)
  is_url = source.start_with?("http:", "https:")
  return [] if is_url && !$options[:remote]
  return [] if !is_url && !File.exist?(source)

  p = if !is_url
    if File.directory?(source)
      DirectoryPackage.new(source)
    else
      SingleFile.new(source)
    end
  elsif source.end_with?('.tmLanguage', '.plist', '.YAML-tmLanguage')
    SingleGrammar.new(source)
  elsif source.start_with?('https://github.com')
    GitHubPackage.new(source)
  elsif source.start_with?('http://svn.textmate.org')
    SVNPackage.new(source)
  elsif source.end_with?('.tar.gz')
    TarballPackage.new(source)
  else
    nil
  end

  raise "Unsupported source: #{source}" unless p

  p.fetch(tmp_dir).map do |path|
    grammar = load_grammar(path)
    scope = grammar['scopeName'] || grammar['scope']

    if all_scopes.key?(scope)
      unless all_scopes[scope] == p.url
        $stderr.puts "WARN: Duplicated scope #{scope}\n" +
          "  Current package: #{p.url}\n" +
          "  Previous package: #{all_scopes[scope]}"
      end
      next
    end
    all_scopes[scope] = p.url
    grammar
  end.compact
end

def install_grammars(grammars, path)
  installed = []

  grammars.each do |grammar|
    scope = grammar['scopeName'] || grammar['scope']
    File.write(File.join(GRAMMARS_PATH, "#{scope}.json"), JSON.pretty_generate(grammar))
    installed << scope
  end

  $stderr.puts("OK #{path} (#{installed.join(', ')})")
end

def run_thread(queue, all_scopes)
  Dir.mktmpdir do |tmpdir|
    loop do
      source, index = begin
        queue.pop(true)
      rescue ThreadError
        # The queue is empty.
        break
      end

      dir = "#{tmpdir}/#{index}"
      Dir.mkdir(dir)

      grammars = load_grammars(dir, source, all_scopes)
      install_grammars(grammars, source) if $options[:install]
    end
  end
end

def generate_yaml(all_scopes, base)
  yaml = all_scopes.each_with_object(base) do |(key,value),out|
    out[value] ||= []
    out[value] << key
  end

  yaml = Hash[yaml.sort]
  yaml.each { |k, v| v.sort! }
  yaml
end

def main(sources)
  begin
    Dir.mkdir(GRAMMARS_PATH)
  rescue Errno::EEXIST
  end

  `npm install`

  all_scopes = {}

  if source = $options[:add]
    Dir.mktmpdir do |tmpdir|
      grammars = load_grammars(tmpdir, source, all_scopes)
      install_grammars(grammars, source) if $options[:install]
    end
    generate_yaml(all_scopes, sources)
  else
    queue = Queue.new

    sources.each do |url, scopes|
      queue.push([url, queue.length])
    end

    threads = 8.times.map do
      Thread.new { run_thread(queue, all_scopes) }
    end
    threads.each(&:join)
    generate_yaml(all_scopes, {})
  end
end

OptionParser.new do |opts|
  opts.banner = "Usage: #{$0} [options]"

  opts.on("--add GRAMMAR", "Add a new grammar. GRAMMAR may be a file path or URL.") do |a|
    $options[:add] = a
  end

  opts.on("--[no-]install", "Install grammars into grammars/ directory.") do |i|
    $options[:install] = i
  end

  opts.on("--output FILE", "Write output to FILE. Use - for stdout.") do |o|
    $options[:output] = o == "-" ? $stdout : o
  end

  opts.on("--[no-]remote", "Download remote grammars.") do |r|
    $options[:remote] = r
  end
end.parse!

sources = File.open(SOURCES_FILE) do |file|
  YAML.load(file)
end

yaml = main(sources)

if $options[:output].is_a?(IO)
  $options[:output].write(YAML.dump(yaml))
else
  File.write($options[:output], YAML.dump(yaml))
end
14
script/grammar-compiler
Executable file
@@ -0,0 +1,14 @@
#!/bin/sh

set -e
cd "$(dirname "$0")/.."

image="linguist/grammar-compiler:latest"
mkdir -p grammars

docker pull $image

exec docker run --rm \
  -u $(id -u $USER):$(id -g $USER) \
  -v $PWD:/src/linguist \
  -w /src/linguist $image "$@"
@@ -1,60 +0,0 @@
#!/usr/bin/env ruby

require "bundler/setup"
require "json"
require "linguist"
require "set"
require "yaml"

ROOT = File.expand_path("../../", __FILE__)

def find_includes(json)
  case json
  when Hash
    result = []
    if inc = json["include"]
      result << inc.split("#", 2).first unless inc.start_with?("#", "$")
    end
    result + json.values.flat_map { |v| find_includes(v) }
  when Array
    json.flat_map { |v| find_includes(v) }
  else
    []
  end
end

def transitive_includes(scope, includes)
  scopes = Set.new
  queue = includes[scope] || []
  while s = queue.shift
    next if scopes.include?(s)
    scopes << s
    queue += includes[s] || []
  end
  scopes
end

includes = {}
Dir[File.join(ROOT, "grammars/*.json")].each do |path|
  scope = File.basename(path).sub(/\.json/, '')
  json = JSON.load(File.read(path))
  incs = find_includes(json)
  next if incs.empty?
  includes[scope] ||= []
  includes[scope] += incs
end

yaml = YAML.load(File.read(File.join(ROOT, "grammars.yml")))
language_scopes = Linguist::Language.all.map(&:tm_scope).to_set

# The set of used scopes is the scopes for each language, plus all the scopes
# they include, transitively.
used_scopes = language_scopes + language_scopes.flat_map { |s| transitive_includes(s, includes).to_a }.to_set

unused = yaml.reject { |repo, scopes| scopes.any? { |scope| used_scopes.include?(scope) } }

puts "Unused grammar repos"
puts unused.map { |repo, scopes| sprintf("%-100s %s", repo, scopes.join(", ")) }.sort.join("\n")

yaml.delete_if { |k| unused.key?(k) }
File.write(File.join(ROOT, "grammars.yml"), YAML.dump(yaml))
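The removed script above computed the set of scopes reachable through grammar includes with a simple worklist (see transitive_includes). A sketch of that core pattern in isolation, with hypothetical data:

require "set"

# Worklist-based transitive closure over a hash of adjacency lists.
def transitive(start, edges)
  seen = Set.new
  queue = (edges[start] || []).dup
  while s = queue.shift
    next if seen.include?(s)
    seen << s
    queue += edges[s] || []
  end
  seen
end

edges = { "a" => ["b"], "b" => ["c"], "c" => [] }
p transitive("a", edges).to_a  # => ["b", "c"]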
@@ -307,5 +307,36 @@ class TestBlob < Minitest::Test

    included = sample_blob_memory("HTML/pages.html")
    assert_predicate included, :include_in_language_stats?

    # Test detectable override (i.e. by .gitattributes)

    def prose.detectable?; true end
    assert_predicate prose, :include_in_language_stats?

    included_not_detectable = included.clone()
    def included_not_detectable.detectable?; false end
    refute_predicate included_not_detectable, :include_in_language_stats?

    # Test not included if vendored, documentation or generated overridden,
    # even if detectable

    included_vendored = included.clone()
    def included_vendored.vendored?; true end
    refute_predicate included_vendored, :include_in_language_stats?
    def included_vendored.detectable?; true end
    refute_predicate included_vendored, :include_in_language_stats?

    included_documentation = included.clone()
    def included_documentation.documentation?; true end
    refute_predicate included_documentation, :include_in_language_stats?
    def included_documentation.detectable?; true end
    refute_predicate included_documentation, :include_in_language_stats?

    included_generated = included.clone()
    def included_generated.generated?; true end
    refute_predicate included_generated, :include_in_language_stats?
    def included_generated.detectable?; true end
    refute_predicate included_generated, :include_in_language_stats?

  end
end

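The new assertions above force individual blobs into a given state by defining singleton methods (for example `def included_vendored.vendored?; true end`), which override a predicate for that one object without touching its class. A self-contained sketch of the same Ruby technique, independent of Linguist's classes:

class Blob
  def vendored?
    false
  end
end

a = Blob.new
b = Blob.new

# Singleton method: only `b` is affected, not Blob itself.
def b.vendored?; true end

p a.vendored?  # => false
p b.vendored?  # => true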
@@ -188,6 +188,17 @@ class TestFileBlob < Minitest::Test
    assert fixture_blob("Binary/MainMenu.nib").generated?
    assert !sample_blob("XML/project.pbxproj").generated?

    # Cocoapods
    assert sample_blob('Pods/blah').generated?
    assert !sample_blob('My-Pods/blah').generated?

    # Carthage
    assert sample_blob('Carthage/Build/blah').generated?
    assert !sample_blob('Carthage/blah').generated?
    assert !sample_blob('Carthage/Checkout/blah').generated?
    assert !sample_blob('My-Carthage/Build/blah').generated?
    assert !sample_blob('My-Carthage/Build/blah').generated?

    # Gemfile.lock is NOT generated
    assert !sample_blob("Gemfile.lock").generated?

@@ -313,8 +324,6 @@ class TestFileBlob < Minitest::Test
    assert sample_blob("deps/http_parser/http_parser.c").vendored?
    assert sample_blob("deps/v8/src/v8.h").vendored?

    assert sample_blob("tools/something/else.c").vendored?

    # Chart.js
    assert sample_blob("some/vendored/path/Chart.js").vendored?
    assert !sample_blob("some/vendored/path/chart.js").vendored?
@@ -490,9 +499,9 @@ class TestFileBlob < Minitest::Test

    # Carthage
    assert sample_blob('Carthage/blah').vendored?

    # Cocoapods
    assert sample_blob('Pods/blah').vendored?
    assert sample_blob('iOS/Carthage/blah').vendored?
    assert !sample_blob('My-Carthage/blah').vendored?
    assert !sample_blob('iOS/My-Carthage/blah').vendored?

    # Html5shiv
    assert sample_blob("Scripts/html5shiv.js").vendored?

@@ -42,6 +42,24 @@ class TestGenerated < Minitest::Test
    generated_sample_without_loading_data("Dummy/foo.xcworkspacedata")
    generated_sample_without_loading_data("Dummy/foo.xcuserstate")

    # Cocoapods
    generated_sample_without_loading_data("Pods/Pods.xcodeproj")
    generated_sample_without_loading_data("Pods/SwiftDependency/foo.swift")
    generated_sample_without_loading_data("Pods/ObjCDependency/foo.h")
    generated_sample_without_loading_data("Pods/ObjCDependency/foo.m")
    generated_sample_without_loading_data("Dummy/Pods/Pods.xcodeproj")
    generated_sample_without_loading_data("Dummy/Pods/SwiftDependency/foo.swift")
    generated_sample_without_loading_data("Dummy/Pods/ObjCDependency/foo.h")
    generated_sample_without_loading_data("Dummy/Pods/ObjCDependency/foo.m")

    # Carthage
    generated_sample_without_loading_data("Carthage/Build/.Dependency.version")
    generated_sample_without_loading_data("Carthage/Build/iOS/Dependency.framework")
    generated_sample_without_loading_data("Carthage/Build/Mac/Dependency.framework")
    generated_sample_without_loading_data("src/Carthage/Build/.Dependency.version")
    generated_sample_without_loading_data("src/Carthage/Build/iOS/Dependency.framework")
    generated_sample_without_loading_data("src/Carthage/Build/Mac/Dependency.framework")

    # Go-specific vendored paths
    generated_sample_without_loading_data("go/vendor/github.com/foo.go")
    generated_sample_without_loading_data("go/vendor/golang.org/src/foo.c")

@@ -94,19 +94,6 @@ class TestGrammars < Minitest::Test
    assert nonexistent_submodules.empty? && unlisted_submodules.empty?, message.sub(/\.\Z/, "")
  end

  def test_local_scopes_are_in_sync
    actual = YAML.load(`"#{File.join(ROOT, "script", "convert-grammars")}" --output - --no-install --no-remote`)
    assert $?.success?, "script/convert-grammars failed"

    # We're not checking remote grammars. That can take a long time and make CI
    # flaky if network conditions are poor.
    @grammars.delete_if { |k, v| k.start_with?("http:", "https:") }

    @grammars.each do |k, v|
      assert_equal v, actual[k], "The scopes listed for #{k} in grammars.yml don't match the scopes found in that repository"
    end
  end

  def test_readme_file_is_in_sync
    current_data = File.read("#{ROOT}/vendor/README.md").to_s.sub(/\A.+?<!--.+?-->\n/ms, "")
    updated_data = `script/list-grammars --print`
@@ -13,8 +13,9 @@ class TestHeuristics < Minitest::Test
  end

  def all_fixtures(language_name, file="*")
    Dir.glob("#{samples_path}/#{language_name}/#{file}") -
      ["#{samples_path}/#{language_name}/filenames"]
    fixs = Dir.glob("#{samples_path}/#{language_name}/#{file}") -
      ["#{samples_path}/#{language_name}/filenames"]
    fixs.reject { |f| File.symlink?(f) }
  end

  def test_no_match
@@ -23,6 +24,10 @@ class TestHeuristics < Minitest::Test
    assert_equal [], results
  end

  def test_symlink_empty
    assert_equal [], Heuristics.call(file_blob("Markdown/symlink.md"), [Language["Markdown"]])
  end

  def assert_heuristics(hash)
    candidates = hash.keys.map { |l| Language[l] }
@@ -470,5 +470,13 @@ class TestLanguage < Minitest::Test

  def test_non_crash_on_comma
    assert_nil Language[',']
    assert_nil Language.find_by_name(',')
    assert_nil Language.find_by_alias(',')
  end

  def test_detect_prefers_markdown_for_md
    blob = Linguist::FileBlob.new(File.join(samples_path, "Markdown/symlink.md"))
    match = Linguist.detect(blob)
    assert_equal Language["Markdown"], match
  end
end
@@ -121,4 +121,16 @@ class TestRepository < Minitest::Test
    # overridden .gitattributes
    assert rakefile.generated?
  end

  def test_linguist_override_detectable?
    attr_commit = "8f86998866f6f2c8aa14e0dd430e61fd25cff720"
    linguist_repo(attr_commit).read_index

    # markdown is overridden by .gitattributes to be detectable, html to not be detectable
    markdown = Linguist::LazyBlob.new(rugged_repository, attr_commit, "samples/Markdown/tender.md")
    html = Linguist::LazyBlob.new(rugged_repository, attr_commit, "samples/HTML/pages.html")

    assert_predicate markdown, :detectable?
    refute_predicate html, :detectable?
  end
end
1
tools/grammars/.gitignore
vendored
Normal file
@@ -0,0 +1 @@
/vendor
30
tools/grammars/Dockerfile
Normal file
@@ -0,0 +1,30 @@
FROM golang:1.9.2

WORKDIR /go/src/github.com/github/linguist/tools/grammars

RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - && \
    apt-get update && \
    apt-get install -y nodejs cmake && \
    npm install -g season && \
    cd /tmp && git clone https://github.com/vmg/pcre && \
    mkdir -p /tmp/pcre/build && cd /tmp/pcre/build && \
    cmake .. \
        -DPCRE_SUPPORT_JIT=ON \
        -DPCRE_SUPPORT_UTF=ON \
        -DPCRE_SUPPORT_UNICODE_PROPERTIES=ON \
        -DBUILD_SHARED_LIBS=OFF \
        -DCMAKE_C_FLAGS="-fPIC $(EXTRA_PCRE_CFLAGS)" \
        -DCMAKE_BUILD_TYPE=RelWithDebInfo \
        -DPCRE_BUILD_PCRECPP=OFF \
        -DPCRE_BUILD_PCREGREP=OFF \
        -DPCRE_BUILD_TESTS=OFF \
        -G "Unix Makefiles" && \
    make && make install && \
    rm -rf /tmp/pcre && \
    cd /go && go get -u github.com/golang/dep/cmd/dep && \
    rm -rf /var/lib/apt/lists/*

COPY . .
RUN dep ensure && go install ./cmd/grammar-compiler

ENTRYPOINT ["grammar-compiler"]
51
tools/grammars/Gopkg.lock
generated
Normal file
@@ -0,0 +1,51 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.


[[projects]]
  branch = "master"
  name = "github.com/golang/protobuf"
  packages = ["proto"]
  revision = "1e59b77b52bf8e4b449a57e6f79f21226d571845"

[[projects]]
  branch = "master"
  name = "github.com/groob/plist"
  packages = ["."]
  revision = "7b367e0aa692e62a223e823f3288c0c00f519a36"

[[projects]]
  name = "github.com/mattn/go-runewidth"
  packages = ["."]
  revision = "9e777a8366cce605130a531d2cd6363d07ad7317"
  version = "v0.0.2"

[[projects]]
  branch = "master"
  name = "github.com/mitchellh/mapstructure"
  packages = ["."]
  revision = "06020f85339e21b2478f756a78e295255ffa4d6a"

[[projects]]
  name = "github.com/urfave/cli"
  packages = ["."]
  revision = "cfb38830724cc34fedffe9a2a29fb54fa9169cd1"
  version = "v1.20.0"

[[projects]]
  name = "gopkg.in/cheggaaa/pb.v1"
  packages = ["."]
  revision = "657164d0228d6bebe316fdf725c69f131a50fb10"
  version = "v1.0.18"

[[projects]]
  branch = "v2"
  name = "gopkg.in/yaml.v2"
  packages = ["."]
  revision = "287cf08546ab5e7e37d55a84f7ed3fd1db036de5"

[solve-meta]
  analyzer-name = "dep"
  analyzer-version = 1
  inputs-digest = "ba2e3150d728692b49e3e2d652b6ea23db82777c340e0c432cd4af6f0eef9f55"
  solver-name = "gps-cdcl"
  solver-version = 1
23
tools/grammars/Gopkg.toml
Normal file
@@ -0,0 +1,23 @@
[[constraint]]
  branch = "v2"
  name = "gopkg.in/yaml.v2"

[[constraint]]
  branch = "master"
  name = "github.com/groob/plist"

[[constraint]]
  branch = "master"
  name = "github.com/golang/protobuf"

[[constraint]]
  branch = "master"
  name = "github.com/mitchellh/mapstructure"

[[constraint]]
  name = "gopkg.in/cheggaaa/pb.v1"
  version = "1.0.18"

[[constraint]]
  name = "github.com/urfave/cli"
  version = "1.20.0"
120
tools/grammars/cmd/grammar-compiler/main.go
Normal file
@@ -0,0 +1,120 @@
package main

import (
    "os"

    "github.com/github/linguist/tools/grammars/compiler"
    "github.com/urfave/cli"
)

func cwd() string {
    cwd, _ := os.Getwd()
    return cwd
}

func wrap(err error) error {
    return cli.NewExitError(err, 255)
}

func main() {
    app := cli.NewApp()
    app.Name = "Linguist Grammars Compiler"
    app.Usage = "Compile user-submitted grammars and check them for errors"

    app.Flags = []cli.Flag{
        cli.StringFlag{
            Name:  "linguist-path",
            Value: cwd(),
            Usage: "path to Linguist root",
        },
    }

    app.Commands = []cli.Command{
        {
            Name:  "add",
            Usage: "add a new grammar source",
            Flags: []cli.Flag{
                cli.BoolFlag{
                    Name:  "force, f",
                    Usage: "ignore compilation errors",
                },
            },
            Action: func(c *cli.Context) error {
                conv, err := compiler.NewConverter(c.String("linguist-path"))
                if err != nil {
                    return wrap(err)
                }
                if err := conv.AddGrammar(c.Args().First()); err != nil {
                    if !c.Bool("force") {
                        return wrap(err)
                    }
                }
                if err := conv.WriteGrammarList(); err != nil {
                    return wrap(err)
                }
                return nil
            },
        },
        {
            Name:  "update",
            Usage: "update grammars.yml with the contents of the grammars library",
            Flags: []cli.Flag{
                cli.BoolFlag{
                    Name:  "force, f",
                    Usage: "write grammars.yml even if grammars fail to compile",
                },
            },
            Action: func(c *cli.Context) error {
                conv, err := compiler.NewConverter(c.String("linguist-path"))
                if err != nil {
                    return wrap(err)
                }
                if err := conv.ConvertGrammars(true); err != nil {
                    return wrap(err)
                }
                if err := conv.Report(); err != nil {
                    if !c.Bool("force") {
                        return wrap(err)
                    }
                }
                if err := conv.WriteGrammarList(); err != nil {
                    return wrap(err)
                }
                return nil
            },
        },
        {
            Name:  "compile",
            Usage: "convert the grammars from the library",
            Flags: []cli.Flag{
                cli.StringFlag{Name: "proto-out, P"},
                cli.StringFlag{Name: "out, o"},
            },
            Action: func(c *cli.Context) error {
                conv, err := compiler.NewConverter(c.String("linguist-path"))
                if err != nil {
                    return cli.NewExitError(err, 1)
                }
                if err := conv.ConvertGrammars(false); err != nil {
                    return cli.NewExitError(err, 1)
                }
                if out := c.String("proto-out"); out != "" {
                    if err := conv.WriteProto(out); err != nil {
                        return cli.NewExitError(err, 1)
                    }
                }
                if out := c.String("out"); out != "" {
                    if err := conv.WriteJSON(out); err != nil {
                        return cli.NewExitError(err, 1)
                    }
                }
                if err := conv.Report(); err != nil {
                    return wrap(err)
                }
                return nil
            },
        },
    }

    app.Run(os.Args)
}
264
tools/grammars/compiler/converter.go
Normal file
@@ -0,0 +1,264 @@
package compiler

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "os"
    "path"
    "runtime"
    "sort"
    "strings"
    "sync"

    grammar "github.com/github/linguist/tools/grammars/proto"
    "github.com/golang/protobuf/proto"
    pb "gopkg.in/cheggaaa/pb.v1"
    yaml "gopkg.in/yaml.v2"
)

type Converter struct {
    root string

    modified bool
    grammars map[string][]string
    Loaded   map[string]*Repository

    progress *pb.ProgressBar
    wg       sync.WaitGroup
    queue    chan string
    mu       sync.Mutex
}

func (conv *Converter) Load(src string) *Repository {
    if strings.HasPrefix(src, "http://") || strings.HasPrefix(src, "https://") {
        return LoadFromURL(src)
    }
    return LoadFromFilesystem(conv.root, src)
}

func (conv *Converter) work() {
    for source := range conv.queue {
        repo := conv.Load(source)

        conv.mu.Lock()
        conv.Loaded[source] = repo
        conv.mu.Unlock()

        conv.progress.Increment()
    }

    conv.wg.Done()
}

func (conv *Converter) tmpScopes() map[string]bool {
    scopes := make(map[string]bool)
    for _, ary := range conv.grammars {
        for _, s := range ary {
            scopes[s] = true
        }
    }
    return scopes
}

func (conv *Converter) AddGrammar(source string) error {
    repo := conv.Load(source)
    if len(repo.Files) == 0 {
        return fmt.Errorf("source '%s' contains no grammar files", source)
    }

    conv.grammars[source] = repo.Scopes()
    conv.modified = true

    knownScopes := conv.tmpScopes()
    repo.FixRules(knownScopes)

    if len(repo.Errors) > 0 {
        fmt.Fprintf(os.Stderr, "The new grammar %s contains %d errors:\n",
            repo, len(repo.Errors))
        for _, err := range repo.Errors {
            fmt.Fprintf(os.Stderr, " - %s\n", err)
        }
        fmt.Fprintf(os.Stderr, "\n")
        return fmt.Errorf("failed to compile the given grammar")
    }

    fmt.Printf("OK! added grammar source '%s'\n", source)
    for scope := range repo.Files {
        fmt.Printf("\tnew scope: %s\n", scope)
    }
    return nil
}

func (conv *Converter) AllScopes() map[string]bool {
    // Map from scope -> Repository first to error check
    // possible duplicates
    allScopes := make(map[string]*Repository)
    for _, repo := range conv.Loaded {
        for scope := range repo.Files {
            if original := allScopes[scope]; original != nil {
                repo.Fail(&DuplicateScopeError{original, scope})
            } else {
                allScopes[scope] = repo
            }
        }
    }

    // Convert to scope -> bool
    scopes := make(map[string]bool)
    for s := range allScopes {
        scopes[s] = true
    }
    return scopes
}

func (conv *Converter) ConvertGrammars(update bool) error {
    conv.Loaded = make(map[string]*Repository)
    conv.queue = make(chan string, 128)

    conv.progress = pb.New(len(conv.grammars))
    conv.progress.Start()

    for i := 0; i < runtime.NumCPU(); i++ {
        conv.wg.Add(1)
        go conv.work()
    }

    for src := range conv.grammars {
        conv.queue <- src
    }

    close(conv.queue)
    conv.wg.Wait()

    done := fmt.Sprintf("done! processed %d grammars\n", len(conv.Loaded))
    conv.progress.FinishPrint(done)

    if update {
        conv.grammars = make(map[string][]string)
        conv.modified = true
    }

    knownScopes := conv.AllScopes()

    for source, repo := range conv.Loaded {
        repo.FixRules(knownScopes)

        if update {
            scopes := repo.Scopes()
            if len(scopes) > 0 {
                conv.grammars[source] = scopes
            }
        } else {
            expected := conv.grammars[source]
            repo.CompareScopes(expected)
        }
    }

    return nil
}

func (conv *Converter) WriteProto(path string) error {
    library := grammar.Library{
        Grammars: make(map[string]*grammar.Rule),
    }

    for _, repo := range conv.Loaded {
        for scope, file := range repo.Files {
            library.Grammars[scope] = file.Rule
        }
    }

    pb, err := proto.Marshal(&library)
    if err != nil {
        return err
    }

    return ioutil.WriteFile(path, pb, 0666)
}

func (conv *Converter) writeJSONFile(path string, rule *grammar.Rule) error {
    j, err := os.Create(path)
    if err != nil {
        return err
    }
    defer j.Close()

    enc := json.NewEncoder(j)
    enc.SetIndent("", " ")
    return enc.Encode(rule)
}

func (conv *Converter) WriteJSON(rulePath string) error {
    if err := os.MkdirAll(rulePath, os.ModePerm); err != nil {
        return err
    }

    for _, repo := range conv.Loaded {
        for scope, file := range repo.Files {
            p := path.Join(rulePath, scope+".json")
            if err := conv.writeJSONFile(p, file.Rule); err != nil {
                return err
            }
        }
    }

    return nil
}

func (conv *Converter) WriteGrammarList() error {
    if !conv.modified {
        return nil
    }

    outyml, err := yaml.Marshal(conv.grammars)
    if err != nil {
        return err
    }

    ymlpath := path.Join(conv.root, "grammars.yml")
    return ioutil.WriteFile(ymlpath, outyml, 0666)
}

func (conv *Converter) Report() error {
    var failed []*Repository
    for _, repo := range conv.Loaded {
        if len(repo.Errors) > 0 {
            failed = append(failed, repo)
        }
    }

    sort.Slice(failed, func(i, j int) bool {
        return failed[i].Source < failed[j].Source
    })

    total := 0
    for _, repo := range failed {
        fmt.Fprintf(os.Stderr, "- [ ] %s (%d errors)\n", repo, len(repo.Errors))
        for _, err := range repo.Errors {
            fmt.Fprintf(os.Stderr, "  - [ ] %s\n", err)
        }
        fmt.Fprintf(os.Stderr, "\n")
        total += len(repo.Errors)
    }

    if total > 0 {
        return fmt.Errorf("the grammar library contains %d errors", total)
    }
    return nil
}

func NewConverter(root string) (*Converter, error) {
    yml, err := ioutil.ReadFile(path.Join(root, "grammars.yml"))
    if err != nil {
        return nil, err
    }

    conv := &Converter{root: root}

    if err := yaml.Unmarshal(yml, &conv.grammars); err != nil {
        return nil, err
    }

    return conv, nil
}
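A minimal usage sketch of the Converter API above (hypothetical main package; the Linguist checkout path and output directory are placeholders), chaining the same calls the `compile` subcommand in cmd/grammar-compiler/main.go makes:

package main

import (
    "log"

    "github.com/github/linguist/tools/grammars/compiler"
)

func main() {
    // NewConverter reads grammars.yml from the given Linguist checkout.
    conv, err := compiler.NewConverter("/path/to/linguist")
    if err != nil {
        log.Fatal(err)
    }
    // Compile every listed grammar source; false means "check against
    // grammars.yml" rather than rewriting it.
    if err := conv.ConvertGrammars(false); err != nil {
        log.Fatal(err)
    }
    // Dump one <scope>.json file per compiled grammar.
    if err := conv.WriteJSON("/tmp/compiled-grammars"); err != nil {
        log.Fatal(err)
    }
    // Report prints a checklist of failures to stderr and returns an
    // error if any repository produced errors.
    if err := conv.Report(); err != nil {
        log.Fatal(err)
    }
}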
21
tools/grammars/compiler/cson.go
Normal file
@@ -0,0 +1,21 @@
package compiler

import (
    "bytes"
    "os/exec"
)

func ConvertCSON(data []byte) ([]byte, error) {
    stdin := bytes.NewBuffer(data)
    stdout := &bytes.Buffer{}

    cmd := exec.Command("csonc")
    cmd.Stdin = stdin
    cmd.Stdout = stdout

    if err := cmd.Run(); err != nil {
        return nil, err
    }

    return stdout.Bytes(), nil
}
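ConvertCSON shells out to a csonc executable on $PATH (the Dockerfile earlier installs one via `npm install -g season`); the exact JSON layout in the comment below is an assumption about that tool's output. A minimal sketch:

package main

import (
    "fmt"
    "log"

    "github.com/github/linguist/tools/grammars/compiler"
)

func main() {
    // csonc reads CSON on stdin and writes JSON on stdout, per the
    // command wiring in ConvertCSON; this fails if csonc is not installed.
    out, err := compiler.ConvertCSON([]byte(`scopeName: "source.example"`))
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(out)) // roughly: {"scopeName":"source.example"}
}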
30
tools/grammars/compiler/data.go
Normal file
@@ -0,0 +1,30 @@
package compiler

var GrammarAliases = map[string]string{
    "source.erb":         "text.html.erb",
    "source.cpp":         "source.c++",
    "source.less":        "source.css.less",
    "text.html.markdown": "source.gfm",
    "text.md":            "source.gfm",
    "source.php":         "text.html.php",
    "text.plain":         "",
    "source.asciidoc":    "text.html.asciidoc",
    "source.perl6":       "source.perl6fe",
    "source.css.scss":    "source.scss",
}

var KnownFields = map[string]bool{
    "comment":            true,
    "uuid":               true,
    "author":             true,
    "comments":           true,
    "macros":             true,
    "fileTypes":          true,
    "firstLineMatch":     true,
    "keyEquivalent":      true,
    "foldingStopMarker":  true,
    "foldingStartMarker": true,
    "foldingEndMarker":   true,
    "limitLineLength":    true,
    "hideFromUser":       true,
}
85
tools/grammars/compiler/errors.go
Normal file
@@ -0,0 +1,85 @@
package compiler

import "fmt"
import "strings"

type ConversionError struct {
    Path string
    Err  error
}

func (err *ConversionError) Error() string {
    return fmt.Sprintf(
        "Grammar conversion failed. File `%s` failed to parse: %s",
        err.Path, err.Err)
}

type DuplicateScopeError struct {
    Original  *Repository
    Duplicate string
}

func (err *DuplicateScopeError) Error() string {
    return fmt.Sprintf(
        "Duplicate scope in repository: scope `%s` was already defined in %s",
        err.Duplicate, err.Original)
}

type MissingScopeError struct {
    Scope string
}

func (err *MissingScopeError) Error() string {
    return fmt.Sprintf(
        "Missing scope in repository: `%s` is listed in grammars.yml but cannot be found",
        err.Scope)
}

type UnexpectedScopeError struct {
    File  *LoadedFile
    Scope string
}

func (err *UnexpectedScopeError) Error() string {
    return fmt.Sprintf(
        "Unexpected scope in repository: `%s` declared in %s was not listed in grammars.yml",
        err.Scope, err.File)
}

type MissingIncludeError struct {
    File    *LoadedFile
    Include string
}

func (err *MissingIncludeError) Error() string {
    return fmt.Sprintf(
        "Missing include in grammar: %s attempts to include `%s` but the scope cannot be found",
        err.File, err.Include)
}

type UnknownKeysError struct {
    File *LoadedFile
    Keys []string
}

func (err *UnknownKeysError) Error() string {
    var keys []string
    for _, k := range err.Keys {
        keys = append(keys, fmt.Sprintf("`%s`", k))
    }

    return fmt.Sprintf(
        "Unknown keys in grammar: %s contains invalid keys (%s)",
        err.File, strings.Join(keys, ", "))
}

type InvalidRegexError struct {
    File *LoadedFile
    Err  error
}

func (err *InvalidRegexError) Error() string {
    return fmt.Sprintf(
        "Invalid regex in grammar: %s contains a malformed regex (%s)",
        err.File, err.Err)
}
129
tools/grammars/compiler/loader.go
Normal file
@@ -0,0 +1,129 @@
package compiler

import (
    "fmt"
    "os"
    "path/filepath"
    "sort"
    "strings"

    grammar "github.com/github/linguist/tools/grammars/proto"
)

type LoadedFile struct {
    Path string
    Rule *grammar.Rule
}

func (f *LoadedFile) String() string {
    return fmt.Sprintf("`%s` (in `%s`)", f.Rule.ScopeName, f.Path)
}

type Repository struct {
    Source   string
    Upstream string
    Files    map[string]*LoadedFile
    Errors   []error
}

func newRepository(src string) *Repository {
    return &Repository{
        Source: src,
        Files:  make(map[string]*LoadedFile),
    }
}

func (repo *Repository) String() string {
    str := fmt.Sprintf("repository `%s`", repo.Source)
    if repo.Upstream != "" {
        str = str + fmt.Sprintf(" (from %s)", repo.Upstream)
    }
    return str
}

func (repo *Repository) Fail(err error) {
    repo.Errors = append(repo.Errors, err)
}

func (repo *Repository) AddFile(path string, rule *grammar.Rule, uk []string) {
    file := &LoadedFile{
        Path: path,
        Rule: rule,
    }

    repo.Files[rule.ScopeName] = file
    if len(uk) > 0 {
        repo.Fail(&UnknownKeysError{file, uk})
    }
}

func toMap(slice []string) map[string]bool {
    m := make(map[string]bool)
    for _, s := range slice {
        m[s] = true
    }
    return m
}

func (repo *Repository) CompareScopes(scopes []string) {
    expected := toMap(scopes)

    for scope, file := range repo.Files {
        if !expected[scope] {
            repo.Fail(&UnexpectedScopeError{file, scope})
        }
    }

    for scope := range expected {
        if _, ok := repo.Files[scope]; !ok {
            repo.Fail(&MissingScopeError{scope})
        }
    }
}

func (repo *Repository) FixRules(knownScopes map[string]bool) {
    for _, file := range repo.Files {
        w := walker{
            File:    file,
            Known:   knownScopes,
            Missing: make(map[string]bool),
        }

        w.walk(file.Rule)
        repo.Errors = append(repo.Errors, w.Errors...)
    }
}

func (repo *Repository) Scopes() (scopes []string) {
    for s := range repo.Files {
        scopes = append(scopes, s)
    }
    sort.Strings(scopes)
    return
}

func isValidGrammar(path string, info os.FileInfo) bool {
    if info.IsDir() {
        return false
    }

    // Tree-Sitter grammars are not supported
    if strings.HasPrefix(filepath.Base(path), "tree-sitter-") {
        return false
    }

    dir := filepath.Dir(path)
    ext := filepath.Ext(path)

    switch strings.ToLower(ext) {
    case ".plist":
        return strings.HasSuffix(dir, "/Syntaxes")
    case ".tmlanguage", ".yaml-tmlanguage":
        return true
    case ".cson", ".json":
        return strings.HasSuffix(dir, "/grammars") || strings.HasSuffix(dir, "/syntaxes")
    default:
        return false
    }
}
110
tools/grammars/compiler/loader_fs.go
Normal file
@@ -0,0 +1,110 @@
package compiler

import (
    "io/ioutil"
    "os"
    "os/exec"
    "path"
    "path/filepath"
    "sort"
    "strings"
)

type fsLoader struct {
    *Repository
    abspath string
}

var preferredGrammars = map[string]int{
    ".tmlanguage":      0,
    ".cson":            1,
    ".json":            1,
    ".plist":           2,
    ".yaml-tmlanguage": 3,
}

func findPreferredExtension(ext []string) string {
    if len(ext) > 1 {
        sort.Slice(ext, func(i, j int) bool {
            a := strings.ToLower(ext[i])
            b := strings.ToLower(ext[j])
            return preferredGrammars[a] < preferredGrammars[b]
        })
    }
    return ext[0]
}

func (l *fsLoader) findGrammars() (files []string, err error) {
    grammars := make(map[string][]string)

    err = filepath.Walk(l.abspath,
        func(path string, info os.FileInfo, err error) error {
            if err == nil && isValidGrammar(path, info) {
                ext := filepath.Ext(path)
                base := path[0 : len(path)-len(ext)]
                grammars[base] = append(grammars[base], ext)
            }
            return nil
        })

    for base, ext := range grammars {
        pref := findPreferredExtension(ext)
        files = append(files, base+pref)
    }

    return
}

func (l *fsLoader) load() {
    grammars, err := l.findGrammars()
    if err != nil {
        l.Fail(err)
        return
    }

    for _, path := range grammars {
        data, err := ioutil.ReadFile(path)
        if err != nil {
            l.Fail(err)
            continue
        }

        if rel, err := filepath.Rel(l.abspath, path); err == nil {
            path = rel
        }

        rule, unknown, err := ConvertProto(filepath.Ext(path), data)
        if err != nil {
            l.Fail(&ConversionError{path, err})
            continue
        }

        if _, ok := l.Files[rule.ScopeName]; ok {
            continue
        }

        l.AddFile(path, rule, unknown)
    }
}

func gitRemoteName(path string) (string, error) {
    remote, err := exec.Command("git", "-C", path, "remote", "get-url", "origin").Output()
    if err != nil {
        return "", err
    }
    return strings.TrimSpace(string(remote)), nil
}

func LoadFromFilesystem(root, src string) *Repository {
    loader := fsLoader{
        Repository: newRepository(src),
        abspath:    path.Join(root, src),
    }
    loader.load()

    if ups, err := gitRemoteName(loader.abspath); err == nil {
        loader.Repository.Upstream = ups
    }

    return loader.Repository
}
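A minimal sketch (hypothetical test file in package compiler) of the preference order encoded in preferredGrammars when one grammar ships in several formats:

package compiler

import "testing"

func Test_findPreferredExtension_sketch(t *testing.T) {
    // Ranks are compared case-insensitively: .tmLanguage (rank 0) beats
    // .cson/.json (rank 1), which beat .plist (rank 2) and
    // .YAML-tmLanguage (rank 3).
    got := findPreferredExtension([]string{".plist", ".cson", ".tmLanguage"})
    if got != ".tmLanguage" {
        t.Errorf("want .tmLanguage, got %s", got)
    }
}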
93
tools/grammars/compiler/loader_url.go
Normal file
@@ -0,0 +1,93 @@
package compiler

import (
    "archive/tar"
    "compress/gzip"
    "io"
    "io/ioutil"
    "net/http"
    "path/filepath"
    "strings"
)

type urlLoader struct {
    *Repository
}

func (l *urlLoader) loadTarball(r io.Reader) {
    gzf, err := gzip.NewReader(r)
    if err != nil {
        l.Fail(err)
        return
    }
    defer gzf.Close()

    tarReader := tar.NewReader(gzf)
    for {
        header, err := tarReader.Next()

        if err != nil {
            if err != io.EOF {
                l.Fail(err)
            }
            return
        }

        if isValidGrammar(header.Name, header.FileInfo()) {
            data, err := ioutil.ReadAll(tarReader)
            if err != nil {
                l.Fail(err)
                return
            }

            ext := filepath.Ext(header.Name)
            rule, unknown, err := ConvertProto(ext, data)
            if err != nil {
                l.Fail(&ConversionError{header.Name, err})
                continue
            }

            if _, ok := l.Files[rule.ScopeName]; ok {
                continue
            }

            l.AddFile(header.Name, rule, unknown)
        }
    }
}

func (l *urlLoader) load() {
    res, err := http.Get(l.Source)
    if err != nil {
        l.Fail(err)
        return
    }
    defer res.Body.Close()

    if strings.HasSuffix(l.Source, ".tar.gz") {
        l.loadTarball(res.Body)
        return
    }

    data, err := ioutil.ReadAll(res.Body)
    if err != nil {
        l.Fail(err)
        return
    }

    ext := filepath.Ext(l.Source)
    filename := filepath.Base(l.Source)
    rule, unknown, err := ConvertProto(ext, data)
    if err != nil {
        l.Fail(&ConversionError{filename, err})
        return
    }

    l.AddFile(filename, rule, unknown)
}

func LoadFromURL(src string) *Repository {
    loader := urlLoader{newRepository(src)}
    loader.load()
    return loader.Repository
}
68
tools/grammars/compiler/pcre.go
Normal file
@@ -0,0 +1,68 @@
package compiler

import (
    "fmt"

    "github.com/github/linguist/tools/grammars/pcre"
)

type replacement struct {
    pos int
    len int
    val string
}

func fixRegex(re string) (string, bool) {
    var (
        replace     []replacement
        escape      = false
        hasBackRefs = false
    )

    for i, ch := range re {
        if escape {
            if ch == 'h' {
                replace = append(replace, replacement{i - 1, 2, "[[:xdigit:]]"})
            }
            if '0' <= ch && ch <= '9' {
                hasBackRefs = true
            }
        }
        escape = !escape && ch == '\\'
    }

    if len(replace) > 0 {
        reb := []byte(re)
        offset := 0
        for _, repl := range replace {
            reb = append(
                reb[:offset+repl.pos],
                append([]byte(repl.val), reb[offset+repl.pos+repl.len:]...)...)
            offset += len(repl.val) - repl.len
        }
        return string(reb), hasBackRefs
    }

    return re, hasBackRefs
}

func CheckPCRE(re string) (string, error) {
    hasBackRefs := false

    if re == "" {
        return "", nil
    }
    if len(re) > 32*1024 {
        return "", fmt.Errorf(
            "regex %s: definition too long (%d bytes)",
            pcre.RegexPP(re), len(re))
    }

    re, hasBackRefs = fixRegex(re)
    if !hasBackRefs {
        if err := pcre.CheckRegexp(re, pcre.DefaultFlags); err != nil {
            return "", err
        }
    }
    return re, nil
}
27
tools/grammars/compiler/pcre_test.go
Normal file
@@ -0,0 +1,27 @@
package compiler

import (
    "testing"
)

func Test_fixRegex(t *testing.T) {
    tests := []struct {
        re   string
        want string
    }{
        {"foobar", "foobar"},
        {`testing\h`, "testing[[:xdigit:]]"},
        {`\htest`, `[[:xdigit:]]test`},
        {`abc\hdef`, `abc[[:xdigit:]]def`},
        {`\\\htest`, `\\[[:xdigit:]]test`},
        {`\\htest`, `\\htest`},
        {`\h\h\h\h`, `[[:xdigit:]][[:xdigit:]][[:xdigit:]][[:xdigit:]]`},
        {`abc\hdef\hghi\h`, `abc[[:xdigit:]]def[[:xdigit:]]ghi[[:xdigit:]]`},
    }
    for _, tt := range tests {
        got, _ := fixRegex(tt.re)
        if got != tt.want {
            t.Errorf("fixRegex() got = %v, want %v", got, tt.want)
        }
    }
}
96
tools/grammars/compiler/proto.go
Normal file
@@ -0,0 +1,96 @@
package compiler

import (
    "encoding/json"
    "fmt"
    "reflect"
    "strings"

    grammar "github.com/github/linguist/tools/grammars/proto"
    "github.com/groob/plist"
    "github.com/mitchellh/mapstructure"
    yaml "gopkg.in/yaml.v2"
)

func looseDecoder(f reflect.Kind, t reflect.Kind, data interface{}) (interface{}, error) {
    dataVal := reflect.ValueOf(data)
    switch t {
    case reflect.Bool:
        switch f {
        case reflect.Bool:
            return dataVal.Bool(), nil
        case reflect.Float32, reflect.Float64:
            return (int(dataVal.Float()) != 0), nil
        case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
            return (dataVal.Int() != 0), nil
        case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
            return (dataVal.Uint() != 0), nil
        case reflect.String:
            switch dataVal.String() {
            case "1":
                return true, nil
            case "0":
                return false, nil
            }
        }
    }

    return data, nil
}

func filterUnusedKeys(keys []string) (out []string) {
    for _, k := range keys {
        parts := strings.Split(k, ".")
        field := parts[len(parts)-1]
        if !KnownFields[field] {
            out = append(out, k)
        }
    }
    return
}

func ConvertProto(ext string, data []byte) (*grammar.Rule, []string, error) {
    var (
        raw map[string]interface{}
        out grammar.Rule
        err error
        md  mapstructure.Metadata
    )

    switch strings.ToLower(ext) {
    case ".plist", ".tmlanguage":
        err = plist.Unmarshal(data, &raw)
    case ".yaml-tmlanguage":
        err = yaml.Unmarshal(data, &raw)
    case ".cson":
        data, err = ConvertCSON(data)
        if err == nil {
            err = json.Unmarshal(data, &raw)
        }
    case ".json":
        err = json.Unmarshal(data, &raw)
    default:
        err = fmt.Errorf("grammars: unsupported extension '%s'", ext)
    }

    if err != nil {
        return nil, nil, err
    }

    config := mapstructure.DecoderConfig{
        Result:     &out,
        Metadata:   &md,
        DecodeHook: looseDecoder,
    }

    decoder, err := mapstructure.NewDecoder(&config)
    if err != nil {
        return nil, nil, err
    }

    if err := decoder.Decode(raw); err != nil {
        return nil, nil, err
    }

    return &out, filterUnusedKeys(md.Unused), nil
}
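A minimal sketch of ConvertProto above on an inline JSON grammar (hypothetical main package). Keys that decode into no Rule field come back in the second return value, minus anything listed in KnownFields:

package main

import (
    "fmt"
    "log"

    "github.com/github/linguist/tools/grammars/compiler"
)

func main() {
    src := []byte(`{"scopeName": "source.example", "fileTypes": ["ex"], "mystery": 1}`)
    rule, unknown, err := compiler.ConvertProto(".json", src)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(rule.ScopeName) // source.example
    // "fileTypes" is filtered out as a known field, so only "mystery"
    // remains (slice order is not guaranteed).
    fmt.Println(unknown)
}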
79
tools/grammars/compiler/walker.go
Normal file
@@ -0,0 +1,79 @@
package compiler

import (
    "strings"

    grammar "github.com/github/linguist/tools/grammars/proto"
)

func (w *walker) checkInclude(rule *grammar.Rule) {
    include := rule.Include

    if include == "" || include[0] == '#' || include[0] == '$' {
        return
    }

    if alias, ok := GrammarAliases[include]; ok {
        rule.Include = alias
        return
    }

    include = strings.Split(include, "#")[0]
    ok := w.Known[include]
    if !ok {
        if !w.Missing[include] {
            w.Missing[include] = true
            w.Errors = append(w.Errors, &MissingIncludeError{w.File, include})
        }
        rule.Include = ""
    }
}

func (w *walker) checkRegexps(rule *grammar.Rule) {
    check := func(re string) string {
        re2, err := CheckPCRE(re)
        if err != nil {
            w.Errors = append(w.Errors, &InvalidRegexError{w.File, err})
        }
        return re2
    }

    rule.Match = check(rule.Match)
    rule.Begin = check(rule.Begin)
    rule.While = check(rule.While)
    rule.End = check(rule.End)
}

func (w *walker) walk(rule *grammar.Rule) {
    w.checkInclude(rule)
    w.checkRegexps(rule)

    for _, rule := range rule.Patterns {
        w.walk(rule)
    }
    for _, rule := range rule.Captures {
        w.walk(rule)
    }
    for _, rule := range rule.BeginCaptures {
        w.walk(rule)
    }
    for _, rule := range rule.WhileCaptures {
        w.walk(rule)
    }
    for _, rule := range rule.EndCaptures {
        w.walk(rule)
    }
    for _, rule := range rule.Repository {
        w.walk(rule)
    }
    for _, rule := range rule.Injections {
        w.walk(rule)
    }
}

type walker struct {
    File    *LoadedFile
    Known   map[string]bool
    Missing map[string]bool
    Errors  []error
}
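A minimal sketch (hypothetical test file in package compiler) of how FixRules in loader.go drives the walker above: an include that points at an unknown scope is reported once as a MissingIncludeError and then blanked, so the compiled grammar never dangles:

package compiler

import (
    "testing"

    grammar "github.com/github/linguist/tools/grammars/proto"
)

func Test_fixRules_missingInclude_sketch(t *testing.T) {
    repo := newRepository("example-source")
    rule := &grammar.Rule{ScopeName: "source.example", Include: "source.missing"}
    repo.AddFile("example.json", rule, nil)

    repo.FixRules(map[string]bool{"source.example": true})

    if rule.Include != "" {
        t.Errorf("want the dangling include cleared, got %q", rule.Include)
    }
    if len(repo.Errors) != 1 {
        t.Errorf("want exactly one MissingIncludeError, got %v", repo.Errors)
    }
}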
11
tools/grammars/docker/build
Executable file
@@ -0,0 +1,11 @@
#!/bin/sh

set -ex
cd "$(dirname "$0")/.."

image=linguist/grammar-compiler
docker build -t $image .

if [ "$1" = "--push" ]; then
    docker push $image
fi
53
tools/grammars/pcre/pcre.go
Normal file
@@ -0,0 +1,53 @@
package pcre

/*
#cgo LDFLAGS: -lpcre
#include <stdlib.h>
#include <pcre.h>
*/
import "C"

import (
    "fmt"
    "strings"
    "unsafe"
)

func RegexPP(re string) string {
    if len(re) > 32 {
        re = fmt.Sprintf("\"`%s`...\"", re[:32])
    } else {
        re = fmt.Sprintf("\"`%s`\"", re)
    }
    return strings.Replace(re, "\n", "", -1)
}

type CompileError struct {
    Pattern string
    Message string
    Offset  int
}

func (e *CompileError) Error() string {
    return fmt.Sprintf("regex %s: %s (at offset %d)",
        RegexPP(e.Pattern), e.Message, e.Offset)
}

const DefaultFlags = int(C.PCRE_DUPNAMES | C.PCRE_UTF8 | C.PCRE_NEWLINE_ANYCRLF)

func CheckRegexp(pattern string, flags int) error {
    pattern1 := C.CString(pattern)
    defer C.free(unsafe.Pointer(pattern1))

    var errptr *C.char
    var erroffset C.int
    ptr := C.pcre_compile(pattern1, C.int(flags), &errptr, &erroffset, nil)
    if ptr == nil {
        return &CompileError{
            Pattern: pattern,
            Message: C.GoString(errptr),
            Offset:  int(erroffset),
        }
    }
    C.free(unsafe.Pointer(ptr))
    return nil
}
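A minimal sketch of the error reporting from CheckRegexp above (hypothetical main package; cgo and libpcre are required, as provisioned in the Dockerfile earlier):

package main

import (
    "fmt"

    "github.com/github/linguist/tools/grammars/pcre"
)

func main() {
    // An unbalanced parenthesis fails pcre_compile and surfaces as a
    // *CompileError, printed roughly as: regex "`(foo`": missing ) (at offset 4)
    if err := pcre.CheckRegexp("(foo", pcre.DefaultFlags); err != nil {
        fmt.Println(err)
    }
}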
239
tools/grammars/proto/grammar.pb.go
Normal file
@@ -0,0 +1,239 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: proto/grammar.proto

/*
Package grammar is a generated protocol buffer package.

It is generated from these files:
    proto/grammar.proto

It has these top-level messages:
    Rule
    Library
*/
package grammar

import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"

// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf

// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package

type Rule struct {
    Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
    ScopeName string `protobuf:"bytes,2,opt,name=scopeName" json:"scopeName,omitempty"`
    ContentName string `protobuf:"bytes,3,opt,name=contentName" json:"contentName,omitempty"`
    Match string `protobuf:"bytes,4,opt,name=match" json:"match,omitempty"`
    Begin string `protobuf:"bytes,5,opt,name=begin" json:"begin,omitempty"`
    While string `protobuf:"bytes,6,opt,name=while" json:"while,omitempty"`
    End string `protobuf:"bytes,7,opt,name=end" json:"end,omitempty"`
    Include string `protobuf:"bytes,8,opt,name=include" json:"include,omitempty"`
    Patterns []*Rule `protobuf:"bytes,9,rep,name=patterns" json:"patterns,omitempty"`
    Captures map[string]*Rule `protobuf:"bytes,10,rep,name=captures" json:"captures,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
    BeginCaptures map[string]*Rule `protobuf:"bytes,11,rep,name=beginCaptures" json:"beginCaptures,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
    WhileCaptures map[string]*Rule `protobuf:"bytes,12,rep,name=whileCaptures" json:"whileCaptures,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
    EndCaptures map[string]*Rule `protobuf:"bytes,13,rep,name=endCaptures" json:"endCaptures,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
    Repository map[string]*Rule `protobuf:"bytes,14,rep,name=repository" json:"repository,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
    Injections map[string]*Rule `protobuf:"bytes,15,rep,name=injections" json:"injections,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
    Disabled bool `protobuf:"varint,16,opt,name=disabled" json:"disabled,omitempty"`
    ApplyEndPatternLast bool `protobuf:"varint,17,opt,name=applyEndPatternLast" json:"applyEndPatternLast,omitempty"`
    IncludeResetBase bool `protobuf:"varint,18,opt,name=includeResetBase" json:"includeResetBase,omitempty"`
}

func (m *Rule) Reset() { *m = Rule{} }
func (m *Rule) String() string { return proto.CompactTextString(m) }
func (*Rule) ProtoMessage() {}
func (*Rule) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }

func (m *Rule) GetName() string {
    if m != nil {
        return m.Name
    }
    return ""
}

func (m *Rule) GetScopeName() string {
    if m != nil {
        return m.ScopeName
    }
    return ""
}

func (m *Rule) GetContentName() string {
    if m != nil {
        return m.ContentName
    }
    return ""
}

func (m *Rule) GetMatch() string {
    if m != nil {
        return m.Match
    }
    return ""
}

func (m *Rule) GetBegin() string {
    if m != nil {
        return m.Begin
    }
    return ""
}

func (m *Rule) GetWhile() string {
    if m != nil {
        return m.While
    }
    return ""
}

func (m *Rule) GetEnd() string {
    if m != nil {
        return m.End
    }
    return ""
}

func (m *Rule) GetInclude() string {
    if m != nil {
        return m.Include
    }
    return ""
}

func (m *Rule) GetPatterns() []*Rule {
    if m != nil {
        return m.Patterns
    }
    return nil
}

func (m *Rule) GetCaptures() map[string]*Rule {
    if m != nil {
        return m.Captures
    }
    return nil
}

func (m *Rule) GetBeginCaptures() map[string]*Rule {
    if m != nil {
        return m.BeginCaptures
    }
    return nil
}

func (m *Rule) GetWhileCaptures() map[string]*Rule {
    if m != nil {
        return m.WhileCaptures
    }
    return nil
}

func (m *Rule) GetEndCaptures() map[string]*Rule {
    if m != nil {
        return m.EndCaptures
    }
    return nil
}

func (m *Rule) GetRepository() map[string]*Rule {
    if m != nil {
        return m.Repository
    }
    return nil
}

func (m *Rule) GetInjections() map[string]*Rule {
    if m != nil {
        return m.Injections
    }
    return nil
}

func (m *Rule) GetDisabled() bool {
    if m != nil {
        return m.Disabled
    }
    return false
}

func (m *Rule) GetApplyEndPatternLast() bool {
    if m != nil {
        return m.ApplyEndPatternLast
    }
    return false
}

func (m *Rule) GetIncludeResetBase() bool {
    if m != nil {
        return m.IncludeResetBase
    }
    return false
}

type Library struct {
    Grammars map[string]*Rule `protobuf:"bytes,1,rep,name=grammars" json:"grammars,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
}

func (m *Library) Reset() { *m = Library{} }
func (m *Library) String() string { return proto.CompactTextString(m) }
func (*Library) ProtoMessage() {}
func (*Library) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }

func (m *Library) GetGrammars() map[string]*Rule {
    if m != nil {
        return m.Grammars
    }
    return nil
}

func init() {
    proto.RegisterType((*Rule)(nil), "grammar.Rule")
    proto.RegisterType((*Library)(nil), "grammar.Library")
}

func init() { proto.RegisterFile("proto/grammar.proto", fileDescriptor0) }

var fileDescriptor0 = []byte{
    // 486 bytes of a gzipped FileDescriptorProto
    0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x54, 0xcb, 0x8e, 0xd3, 0x30,
    0x14, 0x55, 0x66, 0xda, 0x69, 0x7a, 0x4b, 0x99, 0x72, 0x87, 0x85, 0x55, 0x1e, 0x8a, 0x86, 0x4d,
    0x61, 0x51, 0x10, 0x2c, 0x40, 0x23, 0x21, 0xa1, 0x41, 0x05, 0x81, 0xca, 0x43, 0xd9, 0xb0, 0x76,
    0x13, 0x6b, 0x26, 0x90, 0x3a, 0x91, 0xed, 0x82, 0xf2, 0x19, 0x7c, 0x19, 0xbf, 0x84, 0x7c, 0xed,
    0xa6, 0x49, 0xdb, 0x5d, 0x76, 0xbe, 0xe7, 0x25, 0x3b, 0x3e, 0x0e, 0x5c, 0x94, 0xaa, 0x30, 0xc5,
    0xf3, 0x1b, 0xc5, 0xd7, 0x6b, 0xae, 0xe6, 0x34, 0xe1, 0xc0, 0x8f, 0x97, 0xff, 0x86, 0xd0, 0x8b,
    0x37, 0xb9, 0x40, 0x84, 0x9e, 0xe4, 0x6b, 0xc1, 0x82, 0x28, 0x98, 0x0d, 0x63, 0x5a, 0xe3, 0x43,
    0x18, 0xea, 0xa4, 0x28, 0xc5, 0x57, 0x4b, 0x9c, 0x10, 0xb1, 0x03, 0x30, 0x82, 0x51, 0x52, 0x48,
    0x23, 0xa4, 0x21, 0xfe, 0x94, 0xf8, 0x26, 0x84, 0xf7, 0xa1, 0xbf, 0xe6, 0x26, 0xb9, 0x65, 0x3d,
    0xe2, 0xdc, 0x60, 0xd1, 0x95, 0xb8, 0xc9, 0x24, 0xeb, 0x3b, 0x94, 0x06, 0x8b, 0xfe, 0xb9, 0xcd,
    0x72, 0xc1, 0xce, 0x1c, 0x4a, 0x03, 0x4e, 0xe0, 0x54, 0xc8, 0x94, 0x0d, 0x08, 0xb3, 0x4b, 0x64,
    0x30, 0xc8, 0x64, 0x92, 0x6f, 0x52, 0xc1, 0x42, 0x42, 0xb7, 0x23, 0x3e, 0x85, 0xb0, 0xe4, 0xc6,
    0x08, 0x25, 0x35, 0x1b, 0x46, 0xa7, 0xb3, 0xd1, 0xcb, 0xf1, 0x7c, 0x7b, 0x6a, 0x7b, 0xc4, 0xb8,
    0xa6, 0xf1, 0x35, 0x84, 0x09, 0x2f, 0xcd, 0x46, 0x09, 0xcd, 0x80, 0xa4, 0x0f, 0x5a, 0xd2, 0xf9,
    0x7b, 0xcf, 0x2e, 0xa4, 0x51, 0x55, 0x5c, 0x8b, 0xf1, 0x03, 0x8c, 0x69, 0xbb, 0x5b, 0x9e, 0x8d,
    0xc8, 0x1d, 0xb5, 0xdd, 0xd7, 0x4d, 0x89, 0x8b, 0x68, 0xdb, 0x6c, 0x0e, 0x1d, 0xb0, 0xce, 0xb9,
    0x73, 0x2c, 0xe7, 0x47, 0x53, 0xe2, 0x73, 0x5a, 0x36, 0x7c, 0x07, 0x23, 0x21, 0xd3, 0x3a, 0x65,
    0x4c, 0x29, 0x8f, 0xdb, 0x29, 0x8b, 0x9d, 0xc0, 0x65, 0x34, 0x2d, 0xf8, 0x16, 0x40, 0x89, 0xb2,
    0xd0, 0x99, 0x29, 0x54, 0xc5, 0xee, 0x52, 0xc0, 0xa3, 0x76, 0x40, 0x5c, 0xf3, 0xce, 0xdf, 0x30,
    0x58, 0x7b, 0x26, 0x7f, 0x8a, 0xc4, 0x64, 0x85, 0xd4, 0xec, 0xfc, 0x98, 0xfd, 0x53, 0xcd, 0x7b,
    0xfb, 0xce, 0x80, 0x53, 0x08, 0xd3, 0x4c, 0xf3, 0x55, 0x2e, 0x52, 0x36, 0x89, 0x82, 0x59, 0x18,
    0xd7, 0x33, 0xbe, 0x80, 0x0b, 0x5e, 0x96, 0x79, 0xb5, 0x90, 0xe9, 0x77, 0x77, 0x71, 0x4b, 0xae,
    0x0d, 0xbb, 0x47, 0xb2, 0x63, 0x14, 0x3e, 0x83, 0x89, 0x2f, 0x43, 0x2c, 0xb4, 0x30, 0xd7, 0x5c,
    0x0b, 0x86, 0x24, 0x3f, 0xc0, 0xa7, 0x9f, 0x61, 0xdc, 0xfa, 0x2a, 0xb6, 0x6a, 0xbf, 0x44, 0xe5,
    0xfb, 0x6f, 0x97, 0xf8, 0x04, 0xfa, 0xbf, 0x79, 0xbe, 0x71, 0xd5, 0x3f, 0x68, 0x93, 0xe3, 0xae,
    0x4e, 0xde, 0x04, 0xd3, 0x6f, 0x80, 0x87, 0x57, 0xde, 0x31, 0xf0, 0xf0, 0xee, 0xbb, 0x04, 0x7e,
    0x81, 0xc9, 0x7e, 0x0d, 0xba, 0xc4, 0x2d, 0xe1, 0x7c, 0xaf, 0x14, 0x1d, 0xd3, 0xf6, 0x3a, 0xd2,
    0x21, 0xed, 0xf2, 0x6f, 0x00, 0x83, 0x65, 0xb6, 0x52, 0x5c, 0x55, 0x78, 0x05, 0xa1, 0x97, 0x69,
    0x16, 0xec, 0xbd, 0x0d, 0xaf, 0x99, 0x7f, 0xf4, 0x02, 0xff, 0xd4, 0xb7, 0x7a, 0x5b, 0x90, 0x16,
    0xd5, 0x61, 0x4f, 0xab, 0x33, 0xfa, 0xeb, 0xbe, 0xfa, 0x1f, 0x00, 0x00, 0xff, 0xff, 0x2b, 0x2e,
    0xec, 0x55, 0x8c, 0x05, 0x00, 0x00,
}
2
vendor/CodeMirror
vendored
Submodule vendor/CodeMirror updated: 97290a687e...15d9d4e201
16
vendor/README.md
vendored
@@ -24,7 +24,6 @@ This is a list of grammars that Linguist selects to provide syntax highlighting
|
||||
- **APL:** [Alhadis/language-apl](https://github.com/Alhadis/language-apl)
|
||||
- **Apollo Guidance Computer:** [Alhadis/language-agc](https://github.com/Alhadis/language-agc)
|
||||
- **AppleScript:** [textmate/applescript.tmbundle](https://github.com/textmate/applescript.tmbundle)
|
||||
- **Arduino:** [textmate/c.tmbundle](https://github.com/textmate/c.tmbundle)
|
||||
- **AsciiDoc:** [zuckschwerdt/asciidoc.tmbundle](https://github.com/zuckschwerdt/asciidoc.tmbundle)
|
||||
- **ASN.1:** [ajLangley12/language-asn1](https://github.com/ajLangley12/language-asn1)
|
||||
- **ASP:** [textmate/asp.tmbundle](https://github.com/textmate/asp.tmbundle)
|
||||
@@ -70,6 +69,7 @@ This is a list of grammars that Linguist selects to provide syntax highlighting
|
||||
- **ColdFusion CFC:** [SublimeText/ColdFusion](https://github.com/SublimeText/ColdFusion)
|
||||
- **COLLADA:** [textmate/xml.tmbundle](https://github.com/textmate/xml.tmbundle)
|
||||
- **Common Lisp:** [textmate/lisp.tmbundle](https://github.com/textmate/lisp.tmbundle)
|
||||
- **Common Workflow Language:** [manabuishii/language-cwl](https://github.com/manabuishii/language-cwl)
|
||||
- **Component Pascal:** [textmate/pascal.tmbundle](https://github.com/textmate/pascal.tmbundle)
|
||||
- **Cool:** [anunayk/cool-tmbundle](https://github.com/anunayk/cool-tmbundle)
|
||||
- **Coq:** [mkolosick/Sublime-Coq](https://github.com/mkolosick/Sublime-Coq)
|
||||
@@ -159,7 +159,7 @@ This is a list of grammars that Linguist selects to provide syntax highlighting
|
||||
- **HTML+EEX:** [elixir-lang/elixir-tmbundle](https://github.com/elixir-lang/elixir-tmbundle)
|
||||
- **HTML+ERB:** [atom/language-ruby](https://github.com/atom/language-ruby)
|
||||
- **HTML+PHP:** [textmate/php.tmbundle](https://github.com/textmate/php.tmbundle)
|
||||
- **HTTP:** [httpspec/sublime-highlighting](https://github.com/httpspec/sublime-highlighting)
|
||||
- **HTTP:** [samsalisbury/Sublime-HTTP](https://github.com/samsalisbury/Sublime-HTTP)
|
||||
- **IDL:** [mgalloy/idl.tmbundle](https://github.com/mgalloy/idl.tmbundle)
|
||||
- **Idris:** [idris-hackers/idris-sublime](https://github.com/idris-hackers/idris-sublime)
|
||||
- **Inform 7:** [erkyrath/language-inform7](https://github.com/erkyrath/language-inform7)
|
||||
@@ -168,7 +168,7 @@ This is a list of grammars that Linguist selects to provide syntax highlighting
|
||||
- **Ioke:** [vic/ioke-outdated](https://github.com/vic/ioke-outdated)
|
||||
- **Isabelle:** [lsf37/Isabelle.tmbundle](https://github.com/lsf37/Isabelle.tmbundle)
|
||||
- **Isabelle ROOT:** [lsf37/Isabelle.tmbundle](https://github.com/lsf37/Isabelle.tmbundle)
|
||||
- **J:** [bcj/JSyntax](https://github.com/bcj/JSyntax)
|
||||
- **J:** [tikkanz/JSyntax](https://github.com/tikkanz/JSyntax)
|
||||
- **Jasmin:** [atmarksharp/jasmin-sublime](https://github.com/atmarksharp/jasmin-sublime)
|
||||
- **Java:** [textmate/java.tmbundle](https://github.com/textmate/java.tmbundle)
|
||||
- **Java Server Pages:** [textmate/java.tmbundle](https://github.com/textmate/java.tmbundle)
|
||||
@@ -181,7 +181,7 @@ This is a list of grammars that Linguist selects to provide syntax highlighting
|
||||
- **JSON5:** [atom/language-javascript](https://github.com/atom/language-javascript)
|
||||
- **JSONiq:** [wcandillon/language-jsoniq](https://github.com/wcandillon/language-jsoniq)
- **JSONLD:** [atom/language-javascript](https://github.com/atom/language-javascript)
- **JSX:** [github-linguist/language-babel](https://github.com/github-linguist/language-babel)
- **JSX:** [lildude/language-babel](https://github.com/lildude/language-babel)
- **Julia:** [JuliaEditorSupport/atom-language-julia](https://github.com/JuliaEditorSupport/atom-language-julia)
- **Jupyter Notebook:** [textmate/json.tmbundle](https://github.com/textmate/json.tmbundle)
- **KiCad Layout:** [Alhadis/language-pcb](https://github.com/Alhadis/language-pcb)

@@ -213,7 +213,7 @@ This is a list of grammars that Linguist selects to provide syntax highlighting

- **Marko:** [marko-js/marko-tmbundle](https://github.com/marko-js/marko-tmbundle)
- **Mask:** [tenbits/sublime-mask](https://github.com/tenbits/sublime-mask)
- **Mathematica:** [shadanan/mathematica-tmbundle](https://github.com/shadanan/mathematica-tmbundle)
- **Matlab:** [textmate/matlab.tmbundle](https://github.com/textmate/matlab.tmbundle)
- **Matlab:** [mathworks/MATLAB-Language-grammar](https://github.com/mathworks/MATLAB-Language-grammar)
- **Maven POM:** [textmate/maven.tmbundle](https://github.com/textmate/maven.tmbundle)
- **Max:** [textmate/json.tmbundle](https://github.com/textmate/json.tmbundle)
- **MAXScript:** [Alhadis/language-maxscript](https://github.com/Alhadis/language-maxscript)

@@ -238,6 +238,7 @@ This is a list of grammars that Linguist selects to provide syntax highlighting

- **NetLinx+ERB:** [amclain/sublime-netlinx](https://github.com/amclain/sublime-netlinx)
- **NetLogo:** [textmate/lisp.tmbundle](https://github.com/textmate/lisp.tmbundle)
- **NewLisp:** [textmate/lisp.tmbundle](https://github.com/textmate/lisp.tmbundle)
- **Nextflow:** [nextflow-io/atom-language-nextflow](https://github.com/nextflow-io/atom-language-nextflow)
- **Nginx:** [brandonwamboldt/sublime-nginx](https://github.com/brandonwamboldt/sublime-nginx)
- **Nim:** [Varriount/NimLime](https://github.com/Varriount/NimLime)
- **Ninja:** [khyo/language-ninja](https://github.com/khyo/language-ninja)

@@ -277,6 +278,7 @@ This is a list of grammars that Linguist selects to provide syntax highlighting

- **PLpgSQL:** [textmate/sql.tmbundle](https://github.com/textmate/sql.tmbundle)
- **PogoScript:** [featurist/PogoScript.tmbundle](https://github.com/featurist/PogoScript.tmbundle)
- **Pony:** [CausalityLtd/sublime-pony](https://github.com/CausalityLtd/sublime-pony)
- **PostCSS:** [hudochenkov/Syntax-highlighting-for-PostCSS](https://github.com/hudochenkov/Syntax-highlighting-for-PostCSS)
- **PostScript:** [textmate/postscript.tmbundle](https://github.com/textmate/postscript.tmbundle)
- **POV-Ray SDL:** [c-lipka/language-povray](https://github.com/c-lipka/language-povray)
- **PowerShell:** [SublimeText/PowerShell](https://github.com/SublimeText/PowerShell)

@@ -311,6 +313,7 @@ This is a list of grammars that Linguist selects to provide syntax highlighting

- **RobotFramework:** [shellderp/sublime-robot-plugin](https://github.com/shellderp/sublime-robot-plugin)
- **Roff:** [Alhadis/language-roff](https://github.com/Alhadis/language-roff)
- **Rouge:** [atom/language-clojure](https://github.com/atom/language-clojure)
- **RPC:** [textmate/c.tmbundle](https://github.com/textmate/c.tmbundle)
- **RPM Spec:** [waveclaw/language-rpm-spec](https://github.com/waveclaw/language-rpm-spec)
- **Ruby:** [atom/language-ruby](https://github.com/atom/language-ruby)
- **RUNOFF:** [Alhadis/language-roff](https://github.com/Alhadis/language-roff)

@@ -334,6 +337,7 @@ This is a list of grammars that Linguist selects to provide syntax highlighting

- **Smalltalk:** [tomas-stefano/smalltalk-tmbundle](https://github.com/tomas-stefano/smalltalk-tmbundle)
- **Smarty:** [textmate/php-smarty.tmbundle](https://github.com/textmate/php-smarty.tmbundle)
- **SMT:** [SRI-CSL/SMT.tmbundle](https://github.com/SRI-CSL/SMT.tmbundle)
- **Solidity:** [davidhq/SublimeEthereum](https://github.com/davidhq/SublimeEthereum)
- **SourcePawn:** [github-linguist/sublime-sourcepawn](https://github.com/github-linguist/sublime-sourcepawn)
- **SPARQL:** [peta/turtle.tmbundle](https://github.com/peta/turtle.tmbundle)
- **Spline Font Database:** [Alhadis/language-fontforge](https://github.com/Alhadis/language-fontforge)

@@ -349,6 +353,7 @@ This is a list of grammars that Linguist selects to provide syntax highlighting

- **Stylus:** [billymoon/Stylus](https://github.com/billymoon/Stylus)
- **Sublime Text Config:** [atom/language-javascript](https://github.com/atom/language-javascript)
- **SubRip Text:** [314eter/atom-language-srt](https://github.com/314eter/atom-language-srt)
- **SugarSS:** [hudochenkov/Syntax-highlighting-for-PostCSS](https://github.com/hudochenkov/Syntax-highlighting-for-PostCSS)
- **SuperCollider:** [supercollider/language-supercollider](https://github.com/supercollider/language-supercollider)
- **SVG:** [textmate/xml.tmbundle](https://github.com/textmate/xml.tmbundle)
- **Swift:** [textmate/swift.tmbundle](https://github.com/textmate/swift.tmbundle)

@@ -405,4 +410,5 @@ This is a list of grammars that Linguist selects to provide syntax highlighting

- **Yacc:** [textmate/bison.tmbundle](https://github.com/textmate/bison.tmbundle)
- **YAML:** [atom/language-yaml](https://github.com/atom/language-yaml)
- **YANG:** [DzonyKalafut/language-yang](https://github.com/DzonyKalafut/language-yang)
- **YARA:** [blacktop/language-yara](https://github.com/blacktop/language-yara)
- **Zephir:** [phalcon/zephir-sublime](https://github.com/phalcon/zephir-sublime)
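For context, each entry in the list above is backed by a Git submodule under `vendor/grammars/` and a mapping in `grammars.yml`. A minimal sketch of how a grammar gets added, assuming Linguist's documented contributor helpers (`script/add-grammar` and `script/list-grammars` are taken from the project's CONTRIBUTING docs and may differ between versions):

```bash
# Sketch, assuming Linguist's contributor workflow; the helper script
# names come from CONTRIBUTING.md and may vary by version.
script/add-grammar https://github.com/nextflow-io/atom-language-nextflow

# Roughly equivalent underlying plumbing:
git submodule add https://github.com/nextflow-io/atom-language-nextflow \
  vendor/grammars/atom-language-nextflow

# Regenerate the grammar list above (vendor/README.md):
script/list-grammars
```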
Submodule vendor/grammars/Docker.tmbundle (vendored) updated: 41b5d53ca4...6e521ead6c
Submodule vendor/grammars/Elm (vendored) updated: 581b9e6f5b...6bbbca9ccd
Submodule vendor/grammars/JSyntax (vendored) updated: 74971149b5...1a918545c8
Submodule vendor/grammars/MATLAB-Language-grammar (vendored) added at ef1281a78f
Submodule vendor/grammars/NimLime (vendored) updated: bf48175e71...443f9d48df
Submodule vendor/grammars/Stylus (vendored) updated: 61bab33f37...30908e3b57
Submodule vendor/grammars/SublimeEthereum (vendored) updated: 396ba0fbef...ab901fdf94
Submodule vendor/grammars/Syntax-highlighting-for-PostCSS (vendored) added at 575b918985
Submodule vendor/grammars/TypeScript-TmLanguage (vendored) updated: 4b614e2efd...0247d1444a
Submodule vendor/grammars/atom-language-julia (vendored) updated: 4e8896ed0b...7803a437f8
Submodule vendor/grammars/atom-language-nextflow (vendored) added at 557669e2ae
Submodule vendor/grammars/atom-language-perl6 (vendored) updated: 611c924d0f...382720261a
Submodule vendor/grammars/atom-language-rust (vendored) updated: 59893b659a...179f449a69
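Each `updated: <old>...<new>` line above records the parent repository's submodule pointer moving from one upstream commit to another. As a minimal sketch (plain Git, reusing the Elm SHAs from the listing above), a bump like these is produced with:

```bash
# Sketch: bumping a vendored grammar submodule, which yields an entry like
# "Submodule vendor/grammars/Elm updated: 581b9e6f5b...6bbbca9ccd".
cd vendor/grammars/Elm
git fetch origin
git checkout 6bbbca9ccd      # new upstream commit (truncated SHA from the listing)
cd ../../..
git add vendor/grammars/Elm  # stage the moved submodule pointer
git commit -m "Update Elm grammar submodule"
```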
Some files were not shown because too many files have changed in this diff.