Merge remote-tracking branch 'origin/master' into rewrite-readme

* origin/master: (104 commits)
  Added shebang sample for Pike.
  Added interpreter "pike" for Pike.
  Add support for FXML files.
  Add support for Turtle and SPARQL
  Fixed issues for web ontology to pass tests
  Added Web Ontology Language Support
  Simplify blob tests
  Use the original FileBlob path for filesystem access
  Sample sagews file, as requested
  Update languages.yml with *.sagews
  New grammar for Racket
  Remove grammar for Racket
  Modifying BlobHelper and FileBlob to use path
  Sample file for .cmake.in
  Restore the .cmake.in extension.
  More CMake samples.
  Updating file regex to support unlicense.txt
  Updating ref to include license
  Remove pry
  Start using path with LazyBlob
  ...

Conflicts:
	CONTRIBUTING.md
	README.md
This commit is contained in: Brandon Keepers, 2015-01-16 09:35:33 -05:00
98 changed files with 11733 additions and 160 deletions

.gitmodules

@@ -82,9 +82,6 @@
[submodule "vendor/grammars/language-python"]
path = vendor/grammars/language-python
url = https://github.com/atom/language-python
[submodule "vendor/grammars/language-sass"]
path = vendor/grammars/language-sass
url = https://github.com/atom/language-sass
[submodule "vendor/grammars/language-shellscript"]
path = vendor/grammars/language-shellscript
url = https://github.com/atom/language-shellscript
@@ -115,9 +112,6 @@
[submodule "vendor/grammars/nesC.tmbundle"]
path = vendor/grammars/nesC.tmbundle
url = https://github.com/cdwilson/nesC.tmbundle
[submodule "vendor/grammars/racket-tmbundle"]
path = vendor/grammars/racket-tmbundle
url = https://github.com/christophevg/racket-tmbundle
[submodule "vendor/grammars/haxe-sublime-bundle"]
path = vendor/grammars/haxe-sublime-bundle
url = https://github.com/clemos/haxe-sublime-bundle
@@ -244,9 +238,6 @@
[submodule "vendor/grammars/abap.tmbundle"]
path = vendor/grammars/abap.tmbundle
url = https://github.com/pvl/abap.tmbundle
[submodule "vendor/grammars/Scalate.tmbundle"]
path = vendor/grammars/Scalate.tmbundle
url = https://github.com/scalate/Scalate.tmbundle
[submodule "vendor/grammars/mercury-tmlanguage"]
path = vendor/grammars/mercury-tmlanguage
url = https://github.com/sebgod/mercury-tmlanguage
@@ -486,7 +477,7 @@
url = https://github.com/vkostyukov/kotlin-sublime-package
[submodule "vendor/grammars/c.tmbundle"]
path = vendor/grammars/c.tmbundle
url = https://github.com/vmg/c.tmbundle
url = https://github.com/textmate/c.tmbundle
[submodule "vendor/grammars/zephir-sublime"]
path = vendor/grammars/zephir-sublime
url = https://github.com/vmg/zephir-sublime
@@ -509,7 +500,6 @@
[submodule "vendor/grammars/sublime-mask"]
path = vendor/grammars/sublime-mask
url = https://github.com/tenbits/sublime-mask
branch = release
[submodule "vendor/grammars/sublime_cobol"]
path = vendor/grammars/sublime_cobol
url = https://bitbucket.org/bitlang/sublime_cobol
@@ -520,3 +510,42 @@
[submodule "vendor/grammars/IDL-Syntax"]
path = vendor/grammars/IDL-Syntax
url = https://github.com/andik/IDL-Syntax
[submodule "vendor/grammars/sas.tmbundle"]
path = vendor/grammars/sas.tmbundle
url = https://github.com/rpardee/sas.tmbundle
[submodule "vendor/grammars/atom-salt"]
path = vendor/grammars/atom-salt
url = https://github.com/saltstack/atom-salt
[submodule "vendor/grammars/Scalate.tmbundle"]
path = vendor/grammars/Scalate.tmbundle
url = https://github.com/scalate/Scalate.tmbundle
[submodule "vendor/grammars/Elm.tmLanguage"]
path = vendor/grammars/Elm.tmLanguage
url = https://github.com/deadfoxygrandpa/Elm.tmLanguage
[submodule "vendor/grammars/sublime-bsv"]
path = vendor/grammars/sublime-bsv
url = https://github.com/thotypous/sublime-bsv
[submodule "vendor/grammars/AutoHotkey"]
path = vendor/grammars/AutoHotkey
url = https://github.com/robertcollier4/AutoHotkey
[submodule "vendor/grammars/Sublime-HTTP"]
path = vendor/grammars/Sublime-HTTP
url = https://github.com/httpspec/sublime-highlighting
[submodule "vendor/grammars/sass-textmate-bundle"]
path = vendor/grammars/sass-textmate-bundle
url = https://github.com/nathos/sass-textmate-bundle
[submodule "vendor/grammars/carto-atom"]
path = vendor/grammars/carto-atom
url = https://github.com/yohanboniface/carto-atom
[submodule "vendor/grammars/Sublime-Nit"]
path = vendor/grammars/Sublime-Nit
url = https://github.com/R4PaSs/Sublime-Nit
[submodule "vendor/grammars/language-hy"]
path = vendor/grammars/language-hy
url = https://github.com/rwtolbert/language-hy
[submodule "vendor/grammars/Racket"]
path = vendor/grammars/Racket
url = https://github.com/soegaard/racket-highlight-for-github
[submodule "vendor/grammars/turtle.tmbundle"]
path = vendor/grammars/turtle.tmbundle
url = https://github.com/peta/turtle.tmbundle


@@ -1,12 +1,5 @@
before_install:
- git fetch origin master:master
- git fetch origin v2.0.0:v2.0.0
- git fetch origin test/attributes:test/attributes
- git fetch origin test/master:test/master
- sudo apt-get install libicu-dev -y
- git submodule init
- git submodule sync --quiet
- script/fast-submodule-update
sudo: false
before_install: script/travis/before_install
rvm:
- 1.9.3
- 2.0.0
@@ -16,3 +9,4 @@ notifications:
disabled: true
git:
submodules: false
cache: bundler


@@ -13,10 +13,18 @@ To add support for a new language:
0. Add an entry for your language to [`languages.yml`][languages].
0. Add a grammar for your language. Please only add grammars that have a license that permits redistribution.
0. Add your grammar as a submodule: `git submodule add https://github.com/JaneSmith/MyGrammar vendor/grammars/MyGrammar`.
0. Add your grammar to [`grammars.yml`][grammars] by running `script/download-grammars --add vendor/grammars/MyGrammar`.
0. Add your grammar to [`grammars.yml`][grammars] by running `script/convert-grammars --add vendor/grammars/MyGrammar`.
0. Add samples for your language to the [samples directory][samples] in the correct subdirectory.
0. Open a pull request, linking to a [GitHub search result](https://github.com/search?utf8=%E2%9C%93&q=extension%3Aboot+NOT+nothack&type=Code&ref=searchresults) showing in-the-wild usage.
In addition, if your new language defines an extension that's already listed in [`languages.yml`][languages] (such as `.foo`), a few more steps may be needed:
0. Make sure that example `.foo` files are present in the [samples directory][samples] for each language that uses `.foo`.
0. Test the performance of the Bayesian classifier with a relatively large number (1000s) of sample `.foo` files (ping @arfon or @bkeepers for help with this) to ensure we're not misclassifying files.
0. If the Bayesian classifier does a bad job with the sample `.foo` files then a [heuristic](https://github.com/github/linguist/blob/master/lib/linguist/heuristics.rb) may need to be written to help.
Remember, the goal here is to avoid false positives!
## Fixing syntax highlighting
Syntax highlighting in GitHub is performed using TextMate-compatible grammars. These are the same grammars that TextMate, Sublime Text, and Atom use. Every language in `languages.yml` is mapped to its corresponding TM `scope`; this scope determines which grammar is used for highlighting.
@@ -49,6 +57,7 @@ If you are the current maintainer of this gem:
0. Create a branch for the release: `git checkout -b cut-release-vxx.xx.xx`
0. Make sure your local dependencies are up to date: `script/bootstrap`
0. If grammar submodules have not been updated recently, update them: `git submodule update --remote && git commit -a`
0. Ensure that samples are updated: `bundle exec rake samples`
0. Ensure that tests are green: `bundle exec rake test`
0. Bump gem version in `lib/linguist/version.rb`, [like this](https://github.com/github/linguist/commit/8d2ea90a5ba3b2fe6e1508b7155aa4632eea2985).


@@ -1,5 +1,4 @@
source 'https://rubygems.org'
gemspec :name => "github-linguist"
gemspec :name => "github-linguist-grammars"
gem 'test-unit', require: false if RUBY_VERSION >= '2.2'
gem 'byebug' if RUBY_VERSION >= '2.0'


@@ -48,7 +48,7 @@ end
task :build_grammars_gem do
rm_rf "grammars"
sh "script/download-grammars"
sh "script/convert-grammars"
sh "gem", "build", "github-linguist-grammars.gemspec"
end


@@ -18,6 +18,7 @@ Gem::Specification.new do |s|
s.add_dependency 'mime-types', '>= 1.19'
s.add_dependency 'rugged', '~> 0.22.0b4'
s.add_development_dependency 'minitest', '>= 5.0'
s.add_development_dependency 'mocha'
s.add_development_dependency 'pry'
s.add_development_dependency 'rake'


@@ -24,6 +24,8 @@ vendor/grammars/Agda.tmbundle:
- source.agda
vendor/grammars/Alloy.tmbundle:
- source.alloy
vendor/grammars/AutoHotkey:
- source.ahk
vendor/grammars/ColdFusion:
- source.cfscript
- source.cfscript.cfc
@@ -31,6 +33,8 @@ vendor/grammars/ColdFusion:
- text.html.cfm
vendor/grammars/Docker.tmbundle:
- source.dockerfile
vendor/grammars/Elm.tmLanguage:
- source.elm
vendor/grammars/Handlebars:
- text.html.handlebars
vendor/grammars/IDL-Syntax:
@@ -45,13 +49,15 @@ vendor/grammars/LiveScript.tmbundle:
vendor/grammars/NSIS:
- source.nsis
vendor/grammars/NimLime:
- source.nimrod
- source.nimrod_filter
- source.nimrodcfg
- source.nim
- source.nim_filter
- source.nimcfg
vendor/grammars/PHP-Twig.tmbundle:
- text.html.twig
vendor/grammars/RDoc.tmbundle:
- text.rdoc
vendor/grammars/Racket:
- source.racket
vendor/grammars/SCSS.tmbundle:
- source.scss
vendor/grammars/Scalate.tmbundle:
@@ -64,6 +70,8 @@ vendor/grammars/Stata.tmbundle:
- source.stata
vendor/grammars/Sublime-Coq:
- source.coq
vendor/grammars/Sublime-HTTP:
- source.httpspec
vendor/grammars/Sublime-Inform:
- source.Inform7
vendor/grammars/Sublime-Lasso:
@@ -72,6 +80,8 @@ vendor/grammars/Sublime-Logos:
- source.logos
vendor/grammars/Sublime-Loom:
- source.loomscript
vendor/grammars/Sublime-Nit:
- source.nit
vendor/grammars/Sublime-QML:
- source.qml
vendor/grammars/Sublime-REBOL:
@@ -115,6 +125,9 @@ vendor/grammars/asp.tmbundle:
vendor/grammars/assembly.tmbundle:
- objdump.x86asm
- source.x86asm
vendor/grammars/atom-salt:
- source.python.salt
- source.yaml.salt
vendor/grammars/autoitv3-tmbundle:
- source.autoit.3
vendor/grammars/awk-sublime:
@@ -131,6 +144,8 @@ vendor/grammars/c.tmbundle:
- source.c.platform
vendor/grammars/capnproto.tmbundle:
- source.capnp
vendor/grammars/carto-atom:
- source.css.mss
vendor/grammars/ceylon-sublimetext:
- module.ceylon
- source.ceylon
@@ -248,16 +263,16 @@ vendor/grammars/language-csharp:
- source.nant-build
vendor/grammars/language-gfm:
- source.gfm
vendor/grammars/language-hy:
- source.hy
vendor/grammars/language-javascript:
- source.js
- source.js.regexp
vendor/grammars/language-python:
- source.python
- source.regexp.python
- text.python.console
- text.python.traceback
vendor/grammars/language-sass:
- source.css.scss
- source.sass
vendor/grammars/language-shellscript:
- source.shell
- text.shell-session
@@ -349,8 +364,6 @@ vendor/grammars/python-django.tmbundle:
vendor/grammars/r.tmbundle:
- source.r
- text.tex.latex.rd
vendor/grammars/racket-tmbundle:
- source.racket
vendor/grammars/restructuredtext.tmbundle:
- text.restructuredtext
vendor/grammars/ruby-haml.tmbundle:
@@ -366,6 +379,11 @@ vendor/grammars/ruby-slim.tmbundle:
vendor/grammars/ruby.tmbundle:
- source.ruby
- text.html.erb
vendor/grammars/sas.tmbundle:
- source.SASLog
- source.sas
vendor/grammars/sass-textmate-bundle:
- source.sass
vendor/grammars/scala.tmbundle:
- source.sbt
- source.scala
@@ -386,6 +404,8 @@ vendor/grammars/sublime-befunge:
- source.befunge
vendor/grammars/sublime-better-typescript:
- source.ts
vendor/grammars/sublime-bsv:
- source.bsv
vendor/grammars/sublime-cirru:
- source.cirru
vendor/grammars/sublime-glsl:
@@ -432,6 +452,9 @@ vendor/grammars/thrift.tmbundle:
- source.thrift
vendor/grammars/toml.tmbundle:
- source.toml
vendor/grammars/turtle.tmbundle:
- source.sparql
- source.turtle
vendor/grammars/verilog.tmbundle:
- source.verilog
vendor/grammars/x86-assembly-textmate-bundle:


@@ -99,7 +99,7 @@ module Linguist
elsif name.nil?
"attachment"
else
"attachment; filename=#{EscapeUtils.escape_url(File.basename(name))}"
"attachment; filename=#{EscapeUtils.escape_url(name)}"
end
end
@@ -233,7 +233,7 @@ module Linguist
#
# Return true or false
def vendored?
name =~ VendoredRegexp ? true : false
path =~ VendoredRegexp ? true : false
end
# Public: Get each line of data
@@ -301,7 +301,7 @@ module Linguist
#
# Return true or false
def generated?
@_generated ||= Generated.generated?(name, lambda { data })
@_generated ||= Generated.generated?(path, lambda { data })
end
# Public: Detects the Language of the blob.


@@ -3,7 +3,7 @@ require 'linguist/blob_helper'
module Linguist
# A FileBlob is a wrapper around a File object to make it quack
# like a Grit::Blob. It provides the basic interface: `name`,
# `data`, and `size`.
# `data`, `path` and `size`.
class FileBlob
include BlobHelper
@@ -14,43 +14,50 @@ module Linguist
#
# Returns a FileBlob.
def initialize(path, base_path = nil)
@path = path
@name = base_path ? path.sub("#{base_path}/", '') : path
@fullpath = path
@path = base_path ? path.sub("#{base_path}/", '') : path
end
# Public: Filename
#
# Examples
#
# FileBlob.new("/path/to/linguist/lib/linguist.rb").name
# FileBlob.new("/path/to/linguist/lib/linguist.rb").path
# # => "/path/to/linguist/lib/linguist.rb"
#
# FileBlob.new("/path/to/linguist/lib/linguist.rb",
# "/path/to/linguist").name
# "/path/to/linguist").path
# # => "lib/linguist.rb"
#
# Returns a String
attr_reader :name
attr_reader :path
# Public: Read file permissions
#
# Returns a String like '100644'
def mode
File.stat(@path).mode.to_s(8)
File.stat(@fullpath).mode.to_s(8)
end
# Public: File name
#
# Returns a String
def name
File.basename(@fullpath)
end
# Public: Read file contents.
#
# Returns a String.
def data
File.read(@path)
File.read(@fullpath)
end
# Public: Get byte size
#
# Returns an Integer.
def size
File.size(@path)
File.size(@fullpath)
end
# Public: Get file extension.
@@ -67,7 +74,7 @@ module Linguist
#
# Returns an Array
def extensions
basename, *segments = File.basename(name).split(".")
basename, *segments = name.split(".")
segments.map.with_index do |segment, index|
"." + segments[index..-1].join(".")


@@ -51,20 +51,20 @@ module Linguist
#
# Return true or false
def generated?
minified_files? ||
compiled_coffeescript? ||
xcode_file? ||
generated_parser? ||
generated_net_docfile? ||
generated_net_designer_file? ||
generated_postscript? ||
generated_protocol_buffer? ||
generated_jni_header? ||
composer_lock? ||
node_modules? ||
godeps? ||
vcr_cassette? ||
generated_by_zephir?
generated_by_zephir? ||
minified_files? ||
compiled_coffeescript? ||
generated_parser? ||
generated_net_docfile? ||
generated_postscript? ||
generated_protocol_buffer? ||
generated_jni_header? ||
vcr_cassette?
end
# Internal: Is the blob an Xcode file?


@@ -70,10 +70,10 @@ module Linguist
end
disambiguate "Objective-C", "C++", "C" do |data|
if (/@(interface|class|protocol|property|end|synchronised|selector|implementation)\b/.match(data))
if (/^[ \t]*@(interface|class|protocol|property|end|synchronised|selector|implementation)\b/.match(data))
Language["Objective-C"]
elsif (/^\s*#\s*include <(cstdint|string|vector|map|list|array|bitset|queue|stack|forward_list|unordered_map|unordered_set|(i|o|io)stream)>/.match(data) ||
/^\s*template\s*</.match(data) || /^[^@]class\s+\w+/.match(data) || /^[^@](private|public|protected):$/.match(data) || /std::.+$/.match(data))
/^\s*template\s*</.match(data) || /^[ \t]*try/.match(data) || /^[ \t]*catch\s*\(/.match(data) || /^[ \t]*(class|(using[ \t]+)?namespace)\s+\w+/.match(data) || /^[ \t]*(private|public|protected):$/.match(data) || /std::\w+/.match(data))
Language["C++"]
end
end
@@ -175,7 +175,7 @@ module Linguist
disambiguate "Frege", "Forth", "Text" do |data|
if /^(: |also |new-device|previous )/.match(data)
Language["Forth"]
elsif /\s*(import|module|package|data|type) /.match(data)
elsif /^\s*(import|module|package|data|type) /.match(data)
Language["Frege"]
else
Language["Text"]
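The tightened Frege/Forth anchor can be sketched as a plain function. This is a simplified stand-in for illustration only: string labels replace the `Language` objects, and the method name is hypothetical.

```ruby
# Mirrors the Frege/Forth/Text disambiguation from the hunk above.
# The Frege regex is now anchored with ^ so keywords must start a line,
# rather than matching "import" etc. anywhere in prose.
def disambiguate_frege_forth(data)
  if /^(: |also |new-device|previous )/.match(data)
    "Forth"
  elsif /^\s*(import|module|package|data|type) /.match(data)
    "Frege"
  else
    "Text"
  end
end

disambiguate_frege_forth("module Hello where") # => "Frege"
```

In Ruby, `^` matches at the start of every line, so the anchored pattern still works on whole-file input while no longer firing on an English sentence that merely contains "data " or "type " mid-line.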


@@ -548,7 +548,7 @@ module Linguist
if extnames = extensions[name]
extnames.each do |extname|
if !options['extensions'].include?(extname)
if !options['extensions'].index { |x| x.end_with? extname }
warn "#{name} has a sample with extension (#{extname}) that isn't explicitly defined in languages.yml" unless extname == '.script!'
options['extensions'] << extname
end


@@ -224,7 +224,7 @@ AutoHotkey:
extensions:
- .ahk
- .ahkl
tm_scope: none
tm_scope: source.ahk
ace_mode: autohotkey
AutoIt:
@@ -314,7 +314,7 @@ Bluespec:
type: programming
extensions:
- .bsv
tm_scope: source.verilog
tm_scope: source.bsv
ace_mode: verilog
Boo:
@@ -419,7 +419,7 @@ CLIPS:
CMake:
extensions:
- .cmake
- .in
- .cmake.in
filenames:
- CMakeLists.txt
ace_mode: text
@@ -449,6 +449,15 @@ Cap'n Proto:
- .capnp
ace_mode: text
CartoCSS:
type: programming
aliases:
- Carto
extensions:
- .mss
ace_mode: text
tm_scope: source.css.mss
Ceylon:
type: programming
extensions:
@@ -787,9 +796,10 @@ Elixir:
Elm:
type: programming
color: "#60B5CC"
extensions:
- .elm
tm_scope: source.haskell
tm_scope: source.elm
ace_mode: elm
Emacs Lisp:
@@ -1202,7 +1212,7 @@ HTTP:
type: data
extensions:
- .http
tm_scope: none
tm_scope: source.httpspec
ace_mode: text
Hack:
@@ -1211,7 +1221,7 @@ Hack:
extensions:
- .hh
- .php
tm_scope: none
tm_scope: text.html.php
Haml:
group: HTML
@@ -1258,13 +1268,13 @@ Haxe:
Hy:
type: programming
ace_mode: clojure
ace_mode: text
color: "#7891b1"
extensions:
- .hy
aliases:
- hylang
tm_scope: none
tm_scope: source.hy
IDL:
type: programming
@@ -1372,13 +1382,6 @@ JSON:
extensions:
- .json
- .lock
- .sublime-keymap
- .sublime-mousemap
- .sublime-project
- .sublime-settings
- .sublime-workspace
- .sublime_metrics
- .sublime_session
filenames:
- .jshintrc
- composer.lock
@@ -1462,6 +1465,19 @@ JavaScript:
- .pac
- .sjs
- .ssjs
- .sublime-build
- .sublime-commands
- .sublime-completions
- .sublime-keymap
- .sublime-macro
- .sublime-menu
- .sublime-mousemap
- .sublime-project
- .sublime-settings
- .sublime-theme
- .sublime-workspace
- .sublime_metrics
- .sublime_session
- .xsjs
- .xsjslib
filenames:
@@ -1882,6 +1898,7 @@ Nimrod:
- .nim
- .nimrod
ace_mode: text
tm_scope: source.nim
Ninja:
type: data
@@ -1895,7 +1912,7 @@ Nit:
color: "#0d8921"
extensions:
- .nit
tm_scope: none
tm_scope: source.nit
ace_mode: text
Nix:
@@ -2213,6 +2230,8 @@ Pike:
extensions:
- .pike
- .pmod
interpreters:
- pike
ace_mode: text
Pod:
@@ -2585,7 +2604,7 @@ SAS:
color: "#1E90FF"
extensions:
- .sas
tm_scope: none
tm_scope: source.sas
ace_mode: text
SCSS:
@@ -2596,6 +2615,14 @@ SCSS:
extensions:
- .scss
SPARQL:
type: data
tm_scope: source.sparql
ace_mode: text
extensions:
- .sparql
- .rq
SQF:
type: programming
color: "#FFCB1F"
@@ -2611,6 +2638,8 @@ SQL:
ace_mode: sql
extensions:
- .sql
- .cql
- .ddl
- .prc
- .tab
- .udf
@@ -2629,9 +2658,21 @@ Sage:
group: Python
extensions:
- .sage
- .sagews
tm_scope: source.python
ace_mode: python
SaltStack:
type: data
group: YAML
aliases:
- saltstate
- salt
extensions:
- .sls
tm_scope: source.yaml.salt
ace_mode: yaml
Sass:
type: markup
tm_scope: source.sass
@@ -2939,6 +2980,13 @@ Turing:
tm_scope: none
ace_mode: text
Turtle:
type: data
extensions:
- .ttl
tm_scope: source.turtle
ace_mode: text
Twig:
type: markup
group: PHP
@@ -3053,6 +3101,14 @@ Volt:
tm_scope: source.d
ace_mode: d
Web Ontology Language:
type: markup
color: "#3994bc"
extensions:
- .owl
tm_scope: text.xml
ace_mode: xml
WebIDL:
type: programming
extensions:
@@ -3086,8 +3142,10 @@ XML:
- .dita
- .ditamap
- .ditaval
- .dll.config
- .filters
- .fsproj
- .fxml
- .glade
- .grxml
- .ivy
@@ -3108,6 +3166,8 @@ XML:
- .rss
- .scxml
- .srdf
- .stTheme
- .sublime-snippet
- .svg
- .targets
- .tmCommand
@@ -3133,6 +3193,7 @@ XML:
- .xlf
- .xliff
- .xmi
- .xml.dist
- .xsd
- .xul
- .zcml
@@ -3143,9 +3204,7 @@ XML:
- Web.Debug.config
- Web.Release.config
- Web.config
- build.xml.dist
- packages.config
- phpunit.xml.dist
XProc:
type: programming


@@ -14,13 +14,15 @@ module Linguist
attr_reader :repository
attr_reader :oid
attr_reader :name
attr_reader :path
attr_reader :mode
def initialize(repo, oid, name, mode = nil)
alias :name :path
def initialize(repo, oid, path, mode = nil)
@repository = repo
@oid = oid
@name = name
@path = path
@mode = mode
end


@@ -40,24 +40,27 @@
# Minified JavaScript and CSS
- (\.|-)min\.(js|css)$
#Stylesheets imported from packages
- ([^\s]*)import\.(css|less|scss|styl)$
# Bootstrap css and js
- (^|/)bootstrap([^.]*)\.(js|css)$
- (^|/)bootstrap([^.]*)\.(js|css|less|scss|styl)$
- (^|/)custom\.bootstrap([^\s]*)(js|css|less|scss|styl)$
# Font Awesome
- font-awesome.css
- (^|/)font-awesome\.(css|less|scss|styl)$
# Foundation css
- foundation.css
- (^|/)foundation\.(css|less|scss|styl)$
# Normalize.css
- normalize.css
- (^|/)normalize\.(css|less|scss|styl)$
# Bourbon SCSS
- (^|/)[Bb]ourbon/.*\.css$
- (^|/)[Bb]ourbon/.*\.scss$
# Bourbon css
- (^|/)[Bb]ourbon/.*\.(css|less|scss|styl)$
# Animate.css
- animate.css
- (^|/)animate\.(css|less|scss|styl)$
# Vendored dependencies
- third[-_]?party/
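Patterns like these are what the `vendored?` change earlier in this commit now matches against the full `path` instead of just `name`. A minimal sketch of how such a list compiles into a single regexp (a cut-down, hypothetical pattern list, not the full `vendor.yml`):

```ruby
# A few patterns in the style of the hunk above, unioned into one
# regexp and matched against a repo-relative path.
VENDORED_PATTERNS = [
  /(\.|-)min\.(js|css)$/,                        # minified assets
  /(^|\/)bootstrap([^.]*)\.(js|css|less|scss|styl)$/,
  /(^|\/)normalize\.(css|less|scss|styl)$/,
  /third[-_]?party\//                            # vendored dependencies
]
VENDORED_REGEXP = Regexp.union(VENDORED_PATTERNS)

def vendored?(path)
  path =~ VENDORED_REGEXP ? true : false
end

vendored?("assets/js/jquery.min.js") # => true
```

Matching on the path rather than the basename is what makes directory-based rules such as `third[-_]?party/` possible at all.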


@@ -1,3 +1,3 @@
module Linguist
VERSION = "4.2.5"
VERSION = "4.2.6"
end

samples/C++/Entity.h

@@ -0,0 +1,98 @@
/**
* @file Entity.h
* @page EntityPage Entity
* @brief represent an entity in the game
* @author vinz243
* @version 0.1.0
* This file represents an Entity in the game system
* This parent type is a static entity which is shown and loaded into the Physics engine but never updated
*/
#ifndef ENTITY_H
#define ENTITY_H
#include "base.h"
/// @namespace Whitedrop
namespace Whitedrop {
/** @class Entity
* This parent type is a static entity which is shown and loaded into the Physics engine but never updated
*/
class Entity {
public:
/**
* @brief Create static entity
* @details creates a static entity instance according to the mesh and the id, the position
* This needs to be attached to a World after!
* The material name is not the file name but the material name!
* @ref WorldPage
* @param mesh the name of the mesh for the object, file must be in media/meshes
* @param id an unique identifier for the object, shortest as possible
* @param dimensions an Ogre::Vector3 which contains the dimensions in meter
* @param position the Vector3 which contains it position
* @param material the material name
*/
Entity(std::string mesh, std::string id, Ogre::Vector3 dimensions, Ogre::Vector3 position, std::string material);
/**
* @brief The copy constructor
* @details A copy constr
*
* @param ref the Entity to be copied from
*/
Entity(const Entity &ref);
/**
* @brief The assignement operator
* @details
*
* @param ent the entity to be copied
*/
Entity& operator=(const Entity ent);
/**
* @brief destrctor
* @details
*/
virtual ~Entity(void);
/**
* @brief a constance type of the entity
* @details depends of the class.
* May contain STATIC, DYNAMIC or ETHERAL
*/
const std::string type = "STATIC";
/**
* @brief Attach the entity to specified sceneManager
* @details This creates the OgreEntity using sceneMgr,
* set material, create a Node with name as `<id>_n`,
* scale it to match dimensions and translate the node to pos
* @param sceneMgr the scene manager to use
*/
virtual void setup(Ogre::SceneManager* sceneMgr);
/**
* @brief the update method
* @details this method should be called on each world update.
* Even though the method is necessary declared, the main impl of
* a static entity should be empty since it is not updated by physics
* However, a Dynamic entity should implement this function in order to:
* 1) Get from the physics engine the actor position in the physic world
* 2) Update the OgreEntity position and rotation from the previous actor
* @return whether it was successful or not, if falsey engine should stop
*/
virtual bool update(void);
protected:
std::string mMesh = "cube.mesh";
std::string mId;
std::string mMaterial;
Ogre::Vector3 mDimensions;
Ogre::Vector3 mPosition;
Ogre::Entity* mEntity;
Ogre::SceneNode* mNode;
};
}
#endif


@@ -0,0 +1,15 @@
cmake_minimum_required(VERSION 2.6)
enable_testing()
set(CMAKE_BUILD_TYPE debug)
include_directories("/usr/local/include")
find_library(ssl_LIBRARY NAMES ssl PATHS "/usr/local/lib")
add_custom_command(OUTPUT "ver.c" "ver.h" COMMAND ./ver.sh)
add_executable(foo foo.c bar.c baz.c ver.c)
target_link_libraries(foo ${ssl_LIBRARY})


@@ -0,0 +1,25 @@
cmake_minimum_required(VERSION 2.8 FATAL_ERROR)
project(PCLVisualizer)
target_link_libraries (PCLVisualizer ${PCL_LIBRARIES})
#it seems it's needed only on OS X 10.9
find_package(GLEW REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -I/usr/include -v")
find_package(PCL 1.7 REQUIRED)
include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})
set(PCL_BUILD_TYPE Release)
file(GLOB PCL_openni_viewer_SRC
"src/*.h"
"src/*.cpp"
)
add_executable(PCLVisualizer ${PCL_openni_viewer_SRC})
#add this line to solve probem in mac os x 10.9
target_link_libraries(PCLVisualizer ${PCL_COMMON_LIBRARIES} ${PCL_IO_LIBRARIES} ${PCL_VISUALIZATION_LIBRARIES} ${PCL_FEATURES_LIBRARIES})


@@ -0,0 +1,33 @@
# Specifications for building user and development documentation.
#
# ====================================================================
# Copyright (c) 2009 Ian Blumel. All rights reserved.
#
# This software is licensed as described in the file LICENSE, which
# you should have received as part of this distribution.
# ====================================================================
CMAKE_MINIMUM_REQUIRED(VERSION 2.6)
FIND_FILE( SPHINX sphinx-build.exe)
# If we are windows call to the make.bat file, otherwise rely on the Makefile
# to handle the processing.
IF(WIN32)
SET(SPHINX_MAKE make.bat)
ELSE(WIN32)
SET(SPHINX_MAKE make)
ENDIF(WIN32)
ADD_CUSTOM_TARGET(
doc_usr
COMMAND ${SPHINX_MAKE} html
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/usr
)
ADD_CUSTOM_TARGET(
doc_dev
COMMAND ${SPHINX_MAKE} html
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/dev
)


@@ -0,0 +1,33 @@
cmake_minimum_required (VERSION 2.6)
set (CMAKE_RUNTIME_OUTPUT_DIRECTORY "bin")
list(APPEND CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/cmake/vala)
find_package(Vala REQUIRED)
include(ValaPrecompile)
include(ValaVersion)
ensure_vala_version("0.11.0" MINIMUM)
project (template C)
find_package(PkgConfig)
pkg_check_modules(GOBJECT REQUIRED gobject-2.0)
add_definitions(${GOBJECT_CFLAGS} ${GOBJECT_CFLAGS_OTHER})
link_libraries(${GOBJECT_LIBRARIES})
link_directories(${GOBJECT_LIBRARY_DIRS})
vala_precompile(VALA_C
src/template.vala
PACKAGES
OPTIONS
--thread
CUSTOM_VAPIS
GENERATE_VAPI
GENERATE_HEADER
DIRECTORY
gen
)
add_executable("template" ${VALA_C})


@@ -0,0 +1,89 @@
# - Check if the STDCALL function exists.
# This works for non-cdecl functions (kernel32 functions, for example)
# CHECK_STDCALL_FUNCTION_EXISTS(FUNCTION FUNCTION_DUMMY_ARGS VARIABLE)
# - macro which checks if the stdcall function exists
# FUNCTION_DECLARATION - the definition of the function ( e.g.: Sleep(500) )
# VARIABLE - variable to store the result
#
# The following variables may be set before calling this macro to
# modify the way the check is run:
#
# CMAKE_REQUIRED_FLAGS = string of compile command line flags
# CMAKE_REQUIRED_DEFINITIONS = list of macros to define (-DFOO=bar)
# CMAKE_REQUIRED_INCLUDES = list of include directories
# CMAKE_REQUIRED_LIBRARIES = list of libraries to link
# CMAKE_EXTRA_INCLUDE_FILES = list of extra includes to check in
MACRO(CHECK_STDCALL_FUNCTION_EXISTS FUNCTION_DECLARATION VARIABLE)
IF("${VARIABLE}" MATCHES "^${VARIABLE}$")
#get includes
SET(CHECK_STDCALL_FUNCTION_PREMAIN)
FOREACH(def ${CMAKE_EXTRA_INCLUDE_FILES})
SET(CHECK_STDCALL_FUNCTION_PREMAIN "${CHECK_STDCALL_FUNCTION_PREMAIN}#include \"${def}\"\n")
ENDFOREACH(def)
#add some default includes
IF ( HAVE_WINDOWS_H )
SET(CHECK_STDCALL_FUNCTION_PREMAIN "${CHECK_STDCALL_FUNCTION_PREMAIN}#include \"windows.h\"\n")
ENDIF ( HAVE_WINDOWS_H )
IF ( HAVE_UNISTD_H )
SET(CHECK_STDCALL_FUNCTION_PREMAIN "${CHECK_STDCALL_FUNCTION_PREMAIN}#include \"unistd.h\"\n")
ENDIF ( HAVE_UNISTD_H )
IF ( HAVE_DIRECT_H )
SET(CHECK_STDCALL_FUNCTION_PREMAIN "${CHECK_STDCALL_FUNCTION_PREMAIN}#include \"direct.h\"\n")
ENDIF ( HAVE_DIRECT_H )
IF ( HAVE_IO_H )
SET(CHECK_STDCALL_FUNCTION_PREMAIN "${CHECK_STDCALL_FUNCTION_PREMAIN}#include \"io.h\"\n")
ENDIF ( HAVE_IO_H )
IF ( HAVE_SYS_TIMEB_H )
SET(CHECK_STDCALL_FUNCTION_PREMAIN "${CHECK_STDCALL_FUNCTION_PREMAIN}#include \"sys/timeb.h\"\n")
ENDIF ( HAVE_SYS_TIMEB_H )
STRING(REGEX REPLACE "(\\(.*\\))" "" CHECK_STDCALL_FUNCTION_EXISTS_FUNCTION ${FUNCTION_DECLARATION} )
SET(MACRO_CHECK_STDCALL_FUNCTION_DEFINITIONS "${CMAKE_REQUIRED_FLAGS}")
MESSAGE(STATUS "Looking for ${CHECK_STDCALL_FUNCTION_EXISTS_FUNCTION}")
IF(CMAKE_REQUIRED_LIBRARIES)
SET(CHECK_STDCALL_FUNCTION_EXISTS_ADD_LIBRARIES
"-DLINK_LIBRARIES:STRING=${CMAKE_REQUIRED_LIBRARIES}")
ELSE(CMAKE_REQUIRED_LIBRARIES)
SET(CHECK_STDCALL_FUNCTION_EXISTS_ADD_LIBRARIES)
ENDIF(CMAKE_REQUIRED_LIBRARIES)
IF(CMAKE_REQUIRED_INCLUDES)
SET(CHECK_STDCALL_FUNCTION_EXISTS_ADD_INCLUDES
"-DINCLUDE_DIRECTORIES:STRING=${CMAKE_REQUIRED_INCLUDES}")
ELSE(CMAKE_REQUIRED_INCLUDES)
SET(CHECK_STDCALL_FUNCTION_EXISTS_ADD_INCLUDES)
ENDIF(CMAKE_REQUIRED_INCLUDES)
SET(CHECK_STDCALL_FUNCTION_DECLARATION ${FUNCTION_DECLARATION})
CONFIGURE_FILE("${clucene-shared_SOURCE_DIR}/cmake/CheckStdCallFunctionExists.cpp.in"
"${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeTmp/CheckStdCallFunctionExists.cpp" IMMEDIATE @ONLY)
FILE(READ "${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeTmp/CheckStdCallFunctionExists.cpp"
CHECK_STDCALL_FUNCTION_CONTENT)
TRY_COMPILE(${VARIABLE}
${CMAKE_BINARY_DIR}
"${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeTmp/CheckStdCallFunctionExists.cpp"
COMPILE_DEFINITIONS ${CMAKE_REQUIRED_DEFINITIONS}
CMAKE_FLAGS -DCOMPILE_DEFINITIONS:STRING=${MACRO_CHECK_STDCALL_FUNCTION_DEFINITIONS}
"${CHECK_STDCALL_FUNCTION_EXISTS_ADD_LIBRARIES}"
"${CHECK_STDCALL_FUNCTION_EXISTS_ADD_INCLUDES}"
OUTPUT_VARIABLE OUTPUT)
IF(${VARIABLE})
SET(${VARIABLE} 1 CACHE INTERNAL "Have function ${FUNCTION_DECLARATION}")
MESSAGE(STATUS "Looking for ${FUNCTION_DECLARATION} - found")
FILE(APPEND ${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeOutput.log
"Determining if the stdcall function ${FUNCTION_DECLARATION} exists passed with the following output:\n"
"${OUTPUT}\nCheckStdCallFunctionExists.cpp:\n${CHECK_STDCALL_FUNCTION_CONTENT}\n\n")
ELSE(${VARIABLE})
MESSAGE(STATUS "Looking for ${FUNCTION_DECLARATION} - not found")
SET(${VARIABLE} "" CACHE INTERNAL "Have function ${FUNCTION_DECLARATION}")
FILE(APPEND ${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeError.log
"Determining if the stdcall function ${FUNCTION_DECLARATION} exists failed with the following output:\n"
"${OUTPUT}\nCheckStdCallFunctionExists.cpp:\n${CHECK_STDCALL_FUNCTION_CONTENT}\n\n")
ENDIF(${VARIABLE})
ENDIF("${VARIABLE}" MATCHES "^${VARIABLE}$")
ENDMACRO(CHECK_STDCALL_FUNCTION_EXISTS)


@@ -0,0 +1,22 @@
IF (NOT EXISTS "@PROJECT_BINARY_DIR@/install_manifest.txt")
MESSAGE (FATAL_ERROR "Cannot find install manifest: \"@PROJECT_BINARY_DIR@/install_manifest.txt\"")
ENDIF (NOT EXISTS "@PROJECT_BINARY_DIR@/install_manifest.txt")
FILE (READ "@PROJECT_BINARY_DIR@/install_manifest.txt" files)
STRING (REGEX REPLACE "\n" ";" files "${files}")
FOREACH (file ${files})
MESSAGE (STATUS "Uninstalling \"$ENV{DESTDIR}${file}\"")
IF (EXISTS "$ENV{DESTDIR}${file}")
EXEC_PROGRAM (
"@CMAKE_COMMAND@" ARGS "-E remove \"$ENV{DESTDIR}${file}\""
OUTPUT_VARIABLE rm_out
RETURN_VALUE rm_retval
)
IF (NOT "${rm_retval}" STREQUAL 0)
MESSAGE (FATAL_ERROR "Problem when removing \"$ENV{DESTDIR}${file}\"")
ENDIF (NOT "${rm_retval}" STREQUAL 0)
ELSE (EXISTS "$ENV{DESTDIR}${file}")
MESSAGE (STATUS "File \"$ENV{DESTDIR}${file}\" does not exist.")
ENDIF (EXISTS "$ENV{DESTDIR}${file}")
ENDFOREACH (file)

File diff suppressed because it is too large.

samples/Nit/file.nit

@@ -0,0 +1,798 @@
# This file is part of NIT ( http://www.nitlanguage.org ).
#
# Copyright 2004-2008 Jean Privat <jean@pryen.org>
# Copyright 2008 Floréal Morandat <morandat@lirmm.fr>
# Copyright 2008 Jean-Sébastien Gélinas <calestar@gmail.com>
#
# This file is free software, which comes along with NIT. This software is
# distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
# without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE. You can modify it as you want, provided this header
# is kept unaltered, and a notification of the changes is added.
# You are allowed to redistribute it and sell it, alone or as a part of
# another product.
# File manipulations (create, read, write, etc.)
module file
intrude import stream
intrude import ropes
import string_search
import time
in "C Header" `{
#include <dirent.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
#include <poll.h>
#include <errno.h>
`}
# File Abstract Stream
abstract class FStream
super IOS
# The path of the file.
var path: nullable String = null
# The FILE *.
private var file: nullable NativeFile = null
fun file_stat: FileStat do return _file.file_stat
# File descriptor of this file
fun fd: Int do return _file.fileno
end
# File input stream
class IFStream
super FStream
super BufferedIStream
super PollableIStream
# Misc
# Open the same file again.
# The original path is reused, therefore the reopened file can be a different file.
fun reopen
do
if not eof and not _file.address_is_null then close
_file = new NativeFile.io_open_read(path.to_cstring)
if _file.address_is_null then
last_error = new IOError("Error: Opening file at '{path.as(not null)}' failed with '{sys.errno.strerror}'")
end_reached = true
return
end
end_reached = false
_buffer_pos = 0
_buffer.clear
end
redef fun close
do
if _file.address_is_null then return
var i = _file.io_close
_buffer.clear
end_reached = true
end
redef fun fill_buffer
do
var nb = _file.io_read(_buffer.items, _buffer.capacity)
if nb <= 0 then
end_reached = true
nb = 0
end
_buffer.length = nb
_buffer_pos = 0
end
# End of file?
redef var end_reached: Bool = false
# Open the file at `path` for reading.
init open(path: String)
do
self.path = path
prepare_buffer(10)
_file = new NativeFile.io_open_read(path.to_cstring)
if _file.address_is_null then
last_error = new IOError("Error: Opening file at '{path}' failed with '{sys.errno.strerror}'")
end_reached = true
end
end
init from_fd(fd: Int) do
self.path = ""
prepare_buffer(10)
_file = fd_to_stream(fd, read_only)
if _file.address_is_null then
last_error = new IOError("Error: Converting fd {fd} to stream failed with '{sys.errno.strerror}'")
end_reached = true
end
end
end
# File output stream
class OFStream
super FStream
super OStream
redef fun write(s)
do
if last_error != null then return
if not _is_writable then
last_error = new IOError("Cannot write to non-writable stream")
return
end
if s isa FlatText then
write_native(s.to_cstring, s.length)
else
for i in s.substrings do write_native(i.to_cstring, i.length)
end
end
redef fun close
do
if _file.address_is_null then
last_error = new IOError("Cannot close non-existing write stream")
_is_writable = false
return
end
var i = _file.io_close
_is_writable = false
end
redef var is_writable = false
# Write `len` bytes from `native`.
private fun write_native(native: NativeString, len: Int)
do
if last_error != null then return
if not _is_writable then
last_error = new IOError("Cannot write to non-writable stream")
return
end
if _file.address_is_null then
last_error = new IOError("Writing on a null stream")
_is_writable = false
return
end
var err = _file.io_write(native, len)
if err != len then
# Big problem
last_error = new IOError("Problem in writing : {err} {len} \n")
end
end
# Open the file at `path` for writing.
init open(path: String)
do
_file = new NativeFile.io_open_write(path.to_cstring)
if _file.address_is_null then
last_error = new IOError("Error: Opening file at '{path}' failed with '{sys.errno.strerror}'")
self.path = path
is_writable = false
return
end
self.path = path
_is_writable = true
end
# Creates a new File stream from a file descriptor
init from_fd(fd: Int) do
self.path = ""
_file = fd_to_stream(fd, wipe_write)
_is_writable = true
if _file.address_is_null then
last_error = new IOError("Error: Opening stream from file descriptor {fd} failed with '{sys.errno.strerror}'")
_is_writable = false
end
end
end
redef interface Object
private fun read_only: NativeString do return "r".to_cstring
private fun wipe_write: NativeString do return "w".to_cstring
private fun fd_to_stream(fd: Int, mode: NativeString): NativeFile `{
return fdopen(fd, mode);
`}
# Returns the first available stream to read from or write to
# Returns null on interruption (possibly by a signal)
protected fun poll( streams : Sequence[FStream] ) : nullable FStream
do
var in_fds = new Array[Int]
var out_fds = new Array[Int]
var fd_to_stream = new HashMap[Int,FStream]
for s in streams do
var fd = s.fd
if s isa IFStream then in_fds.add( fd )
if s isa OFStream then out_fds.add( fd )
fd_to_stream[fd] = s
end
var polled_fd = intern_poll( in_fds, out_fds )
if polled_fd == null then
return null
else
return fd_to_stream[polled_fd]
end
end
private fun intern_poll(in_fds: Array[Int], out_fds: Array[Int]) : nullable Int is extern import Array[Int].length, Array[Int].[], Int.as(nullable Int) `{
int in_len, out_len, total_len;
struct pollfd *c_fds;
sigset_t sigmask;
int i;
int first_polled_fd = -1;
int result;
in_len = Array_of_Int_length( in_fds );
out_len = Array_of_Int_length( out_fds );
total_len = in_len + out_len;
c_fds = malloc( sizeof(struct pollfd) * total_len );
/* input streams */
for ( i=0; i<in_len; i ++ ) {
int fd;
fd = Array_of_Int__index( in_fds, i );
c_fds[i].fd = fd;
c_fds[i].events = POLLIN;
}
/* output streams */
for ( i=0; i<out_len; i ++ ) {
int fd;
fd = Array_of_Int__index( out_fds, i );
c_fds[i].fd = fd;
c_fds[i].events = POLLOUT;
}
/* poll all fds, unlimited timeout */
result = poll( c_fds, total_len, -1 );
if ( result > 0 ) {
/* analyse results */
for ( i=0; i<total_len; i++ )
if ( c_fds[i].revents & c_fds[i].events || /* awaited event */
c_fds[i].revents & POLLHUP ) /* closed */
{
first_polled_fd = c_fds[i].fd;
break;
}
return Int_as_nullable( first_polled_fd );
}
else if ( result < 0 )
fprintf( stderr, "Error in Stream:poll: %s\n", strerror( errno ) );
return null_Int();
`}
end
###############################################################################
class Stdin
super IFStream
init do
_file = new NativeFile.native_stdin
path = "/dev/stdin"
prepare_buffer(1)
end
redef fun poll_in: Bool is extern "file_stdin_poll_in"
end
class Stdout
super OFStream
init do
_file = new NativeFile.native_stdout
path = "/dev/stdout"
_is_writable = true
end
end
class Stderr
super OFStream
init do
_file = new NativeFile.native_stderr
path = "/dev/stderr"
_is_writable = true
end
end
###############################################################################
redef class Streamable
# Like `write_to` but takes care of creating the file
fun write_to_file(filepath: String)
do
var stream = new OFStream.open(filepath)
write_to(stream)
stream.close
end
end
redef class String
# Return true if a file with this name exists
fun file_exists: Bool do return to_cstring.file_exists
# The status of a file. see POSIX stat(2).
fun file_stat: FileStat do return to_cstring.file_stat
# The status of a file or of a symlink. see POSIX lstat(2).
fun file_lstat: FileStat do return to_cstring.file_lstat
# Remove a file, return true if success
fun file_delete: Bool do return to_cstring.file_delete
# Copy content of file at `self` to `dest`
fun file_copy_to(dest: String)
do
var input = new IFStream.open(self)
var output = new OFStream.open(dest)
while not input.eof do
var buffer = input.read(1024)
output.write buffer
end
input.close
output.close
end
# Remove the trailing extension `ext`.
#
# `ext` usually starts with a dot but could be anything.
#
# assert "file.txt".strip_extension(".txt") == "file"
# assert "file.txt".strip_extension("le.txt") == "fi"
# assert "file.txt".strip_extension("xt") == "file.t"
#
# If `ext` is not present, `self` is returned unmodified.
#
# assert "file.txt".strip_extension(".tar.gz") == "file.txt"
fun strip_extension(ext: String): String
do
if has_suffix(ext) then
return substring(0, length - ext.length)
end
return self
end
# Extract the basename of a path and remove the extension
#
# assert "/path/to/a_file.ext".basename(".ext") == "a_file"
# assert "path/to/a_file.ext".basename(".ext") == "a_file"
# assert "path/to".basename(".ext") == "to"
# assert "path/to/".basename(".ext") == "to"
# assert "path".basename("") == "path"
# assert "/path".basename("") == "path"
# assert "/".basename("") == "/"
# assert "".basename("") == ""
fun basename(ext: String): String
do
var l = length - 1 # Index of the last char
while l > 0 and self.chars[l] == '/' do l -= 1 # remove all trailing `/`
if l == 0 then return "/"
var pos = chars.last_index_of_from('/', l)
var n = self
if pos >= 0 then
n = substring(pos+1, l-pos)
end
return n.strip_extension(ext)
end
# Extract the dirname of a path
#
# assert "/path/to/a_file.ext".dirname == "/path/to"
# assert "path/to/a_file.ext".dirname == "path/to"
# assert "path/to".dirname == "path"
# assert "path/to/".dirname == "path"
# assert "path".dirname == "."
# assert "/path".dirname == "/"
# assert "/".dirname == "/"
# assert "".dirname == "."
fun dirname: String
do
var l = length - 1 # Index of the last char
while l > 0 and self.chars[l] == '/' do l -= 1 # remove all trailing `/`
var pos = chars.last_index_of_from('/', l)
if pos > 0 then
return substring(0, pos)
else if pos == 0 then
return "/"
else
return "."
end
end
# Return the canonicalized absolute pathname (see POSIX function `realpath`)
fun realpath: String do
var cs = to_cstring.file_realpath
var res = cs.to_s_with_copy
# cs.free_malloc # FIXME memory leak
return res
end
# Simplify a file path by removing useless ".", removing "//", and resolving ".."
# ".." are not resolved if they start the path
# starting "/" is not removed
# trailing "/" is removed
#
# Note that the method only works on the string:
# * no I/O access is performed
# * the validity of the path is not checked
#
# assert "some/./complex/../../path/from/../to/a////file//".simplify_path == "path/to/a/file"
# assert "../dir/file".simplify_path == "../dir/file"
# assert "dir/../../".simplify_path == ".."
# assert "dir/..".simplify_path == "."
# assert "//absolute//path/".simplify_path == "/absolute/path"
# assert "//absolute//../".simplify_path == "/"
fun simplify_path: String
do
var a = self.split_with("/")
var a2 = new Array[String]
for x in a do
if x == "." then continue
if x == "" and not a2.is_empty then continue
if x == ".." and not a2.is_empty and a2.last != ".." then
a2.pop
continue
end
a2.push(x)
end
if a2.is_empty then return "."
if a2.length == 1 and a2.first == "" then return "/"
return a2.join("/")
end
# Correctly join two paths using the directory separator.
#
# Using a standard "{self}/{path}" does not work in the following cases:
#
# * `self` is empty.
# * `path` ends with `'/'`.
# * `path` starts with `'/'`.
#
# This method ensures that the join is valid.
#
# assert "hello".join_path("world") == "hello/world"
# assert "hel/lo".join_path("wor/ld") == "hel/lo/wor/ld"
# assert "".join_path("world") == "world"
# assert "hello".join_path("/world") == "/world"
# assert "hello/".join_path("world") == "hello/world"
# assert "hello/".join_path("/world") == "/world"
#
# Note: You may want to use `simplify_path` on the result.
#
# Note: This method works only with POSIX paths.
fun join_path(path: String): String
do
if path.is_empty then return self
if self.is_empty then return path
if path.chars[0] == '/' then return path
if self.last == '/' then return "{self}{path}"
return "{self}/{path}"
end
# Convert the path (`self`) to a program name.
#
# Ensure the path (`self`) will be treated as-is by POSIX shells when it is
# used as a program name. In order to do that, prepend `./` if needed.
#
# assert "foo".to_program_name == "./foo"
# assert "/foo".to_program_name == "/foo"
# assert "".to_program_name == "./" # At least, your shell will detect the error.
fun to_program_name: String do
if self.has_prefix("/") then
return self
else
return "./{self}"
end
end
# Alias for `join_path`
#
# assert "hello" / "world" == "hello/world"
# assert "hel/lo" / "wor/ld" == "hel/lo/wor/ld"
# assert "" / "world" == "world"
# assert "/hello" / "/world" == "/world"
#
# This operator is quite useful for chaining changes of path.
# The next one being relative to the previous one.
#
# var a = "foo"
# var b = "/bar"
# var c = "baz/foobar"
# assert a/b/c == "/bar/baz/foobar"
fun /(path: String): String do return join_path(path)
# Returns the relative path needed to go from `self` to `dest`.
#
# assert "/foo/bar".relpath("/foo/baz") == "../baz"
# assert "/foo/bar".relpath("/baz/bar") == "../../baz/bar"
#
# If `self` or `dest` is relative, they are considered relatively to `getcwd`.
#
# In some cases, the result is still independent of the current directory:
#
# assert "foo/bar".relpath("..") == "../../.."
#
# In other cases, parts of the current directory may be exhibited:
#
# var p = "../foo/bar".relpath("baz")
# var c = getcwd.basename("")
# assert p == "../../{c}/baz"
#
# For path resolution independent of the current directory (e.g. for paths in a URL),
# or to use another starting directory than the current directory,
# just force absolute paths:
#
# var start = "/a/b/c/d"
# var p2 = (start/"../foo/bar").relpath(start/"baz")
# assert p2 == "../../d/baz"
#
#
# Neither `self` nor `dest` has to be a real path or to exist in directories since
# the resolution is only done with string manipulations and without any access to
# the underlying file system.
#
# If `self` and `dest` are the same directory, the empty string is returned:
#
# assert "foo".relpath("foo") == ""
# assert "foo/../bar".relpath("bar") == ""
#
# The empty string and "." designate both the current directory:
#
# assert "".relpath("foo/bar") == "foo/bar"
# assert ".".relpath("foo/bar") == "foo/bar"
# assert "foo/bar".relpath("") == "../.."
# assert "/" + "/".relpath(".") == getcwd
fun relpath(dest: String): String
do
var cwd = getcwd
var from = (cwd/self).simplify_path.split("/")
if from.last.is_empty then from.pop # case for the root directory
var to = (cwd/dest).simplify_path.split("/")
if to.last.is_empty then to.pop # case for the root directory
# Remove common prefixes
while not from.is_empty and not to.is_empty and from.first == to.first do
from.shift
to.shift
end
# Result is going up in `from` with ".." then going down following `to`
var from_len = from.length
if from_len == 0 then return to.join("/")
var up = "../"*(from_len-1) + ".."
if to.is_empty then return up
var res = up + "/" + to.join("/")
return res
end
# Create a directory (and all intermediate directories if needed)
fun mkdir
do
var dirs = self.split_with("/")
var path = new FlatBuffer
if dirs.is_empty then return
if dirs[0].is_empty then
# it was a starting /
path.add('/')
end
for d in dirs do
if d.is_empty then continue
path.append(d)
path.add('/')
path.to_s.to_cstring.file_mkdir
end
end
# Delete a directory and all of its content, return `true` on success
#
# Does not go through symbolic links and may get stuck in a cycle if there
# is a cycle in the filesystem.
fun rmdir: Bool
do
var ok = true
for file in self.files do
var file_path = self.join_path(file)
var stat = file_path.file_lstat
if stat.is_dir then
ok = file_path.rmdir and ok
else
ok = file_path.file_delete and ok
end
stat.free
end
# Delete the directory itself
if ok then to_cstring.rmdir
return ok
end
# Change the current working directory
#
# "/etc".chdir
# assert getcwd == "/etc"
# "..".chdir
# assert getcwd == "/"
#
# TODO: errno
fun chdir do to_cstring.file_chdir
# Return right-most extension (without the dot)
#
# Only the last extension is returned.
# There is no special case for combined extensions.
#
# assert "file.txt".file_extension == "txt"
# assert "file.tar.gz".file_extension == "gz"
#
# For files without an extension, `null` is returned.
# However, for a trailing dot, `""` is returned.
#
# assert "file".file_extension == null
# assert "file.".file_extension == ""
#
# The starting dot of hidden files is never considered.
#
# assert ".file.txt".file_extension == "txt"
# assert ".file".file_extension == null
fun file_extension: nullable String
do
var last_dot = chars.last_index_of('.')
if last_dot > 0 then
return substring( last_dot+1, length )
else
return null
end
end
# Returns the files contained within the directory represented by `self`
fun files : Set[ String ] is extern import HashSet[String], HashSet[String].add, NativeString.to_s, String.to_cstring, HashSet[String].as(Set[String]) `{
char *dir_path;
DIR *dir;
dir_path = String_to_cstring( recv );
if ((dir = opendir(dir_path)) == NULL)
{
perror( dir_path );
exit( 1 );
}
else
{
HashSet_of_String results;
String file_name;
struct dirent *de;
results = new_HashSet_of_String();
while ( ( de = readdir( dir ) ) != NULL )
if ( strcmp( de->d_name, ".." ) != 0 &&
strcmp( de->d_name, "." ) != 0 )
{
file_name = NativeString_to_s( strdup( de->d_name ) );
HashSet_of_String_add( results, file_name );
}
closedir( dir );
return HashSet_of_String_as_Set_of_String( results );
}
`}
end
redef class NativeString
private fun file_exists: Bool is extern "string_NativeString_NativeString_file_exists_0"
private fun file_stat: FileStat is extern "string_NativeString_NativeString_file_stat_0"
private fun file_lstat: FileStat `{
struct stat* stat_element;
int res;
stat_element = malloc(sizeof(struct stat));
res = lstat(recv, stat_element);
if (res == -1) return NULL;
return stat_element;
`}
private fun file_mkdir: Bool is extern "string_NativeString_NativeString_file_mkdir_0"
private fun rmdir: Bool `{ return rmdir(recv); `}
private fun file_delete: Bool is extern "string_NativeString_NativeString_file_delete_0"
private fun file_chdir is extern "string_NativeString_NativeString_file_chdir_0"
private fun file_realpath: NativeString is extern "file_NativeString_realpath"
end
# This class is system dependent ... must reify the vfs
extern class FileStat `{ struct stat * `}
# Returns the permission bits of file
fun mode: Int is extern "file_FileStat_FileStat_mode_0"
# Returns the last access time
fun atime: Int is extern "file_FileStat_FileStat_atime_0"
# Returns the last status change time
fun ctime: Int is extern "file_FileStat_FileStat_ctime_0"
# Returns the last modification time
fun mtime: Int is extern "file_FileStat_FileStat_mtime_0"
# Returns the size
fun size: Int is extern "file_FileStat_FileStat_size_0"
# Returns true if it is a regular file (not a device file, pipe, socket, ...)
fun is_reg: Bool `{ return S_ISREG(recv->st_mode); `}
# Returns true if it is a directory
fun is_dir: Bool `{ return S_ISDIR(recv->st_mode); `}
# Returns true if it is a character device
fun is_chr: Bool `{ return S_ISCHR(recv->st_mode); `}
# Returns true if it is a block device
fun is_blk: Bool `{ return S_ISBLK(recv->st_mode); `}
# Returns true if the type is fifo
fun is_fifo: Bool `{ return S_ISFIFO(recv->st_mode); `}
# Returns true if the type is a link
fun is_lnk: Bool `{ return S_ISLNK(recv->st_mode); `}
# Returns true if the type is a socket
fun is_sock: Bool `{ return S_ISSOCK(recv->st_mode); `}
end
# Instances of this class are standard FILE* pointers
private extern class NativeFile `{ FILE* `}
fun io_read(buf: NativeString, len: Int): Int is extern "file_NativeFile_NativeFile_io_read_2"
fun io_write(buf: NativeString, len: Int): Int is extern "file_NativeFile_NativeFile_io_write_2"
fun io_close: Int is extern "file_NativeFile_NativeFile_io_close_0"
fun file_stat: FileStat is extern "file_NativeFile_NativeFile_file_stat_0"
fun fileno: Int `{ return fileno(recv); `}
new io_open_read(path: NativeString) is extern "file_NativeFileCapable_NativeFileCapable_io_open_read_1"
new io_open_write(path: NativeString) is extern "file_NativeFileCapable_NativeFileCapable_io_open_write_1"
new native_stdin is extern "file_NativeFileCapable_NativeFileCapable_native_stdin_0"
new native_stdout is extern "file_NativeFileCapable_NativeFileCapable_native_stdout_0"
new native_stderr is extern "file_NativeFileCapable_NativeFileCapable_native_stderr_0"
end
redef class Sys
# Standard input
var stdin: PollableIStream = new Stdin is protected writable
# Standard output
var stdout: OStream = new Stdout is protected writable
# Standard output for errors
var stderr: OStream = new Stderr is protected writable
end
# Print `objects` on the standard output (`stdout`).
protected fun printn(objects: Object...)
do
sys.stdout.write(objects.to_s)
end
# Print an `object` on the standard output (`stdout`) and add a newline.
protected fun print(object: Object)
do
sys.stdout.write(object.to_s)
sys.stdout.write("\n")
end
# Read a character from the standard input (`stdin`).
protected fun getc: Char
do
return sys.stdin.read_char.ascii
end
# Read a line from the standard input (`stdin`).
protected fun gets: String
do
return sys.stdin.read_line
end
# Return the working (current) directory
protected fun getcwd: String do return file_getcwd.to_s
private fun file_getcwd: NativeString is extern "string_NativeString_NativeString_file_getcwd_0"

376
samples/Nit/meetup.nit Normal file

@@ -0,0 +1,376 @@
# This file is part of NIT ( http://www.nitlanguage.org ).
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License
# Shows a meetup and allows modifying its participants
module meetup
import opportunity_model
import boilerplate
import welcome
import template
# Shows a meetup and allows modifying its participants
class OpportunityMeetupPage
super OpportunityPage
# Meetup the page is supposed to show
var meetup: nullable Meetup = null
# Answer mode for the meetup
var mode = 0
init from_id(id: String) do
var db = new OpportunityDB.open("opportunity")
meetup = db.find_meetup_by_id(id)
db.close
if meetup != null then mode = meetup.answer_mode
init
end
init do
header.page_js = "mode = {mode};\n"
header.page_js += """
function update_scores(){
var anss = $('.answer');
var count = {};
var scores = {};
var answers = [];
var maxscore = 0;
for(i=0; i < anss.length; i++){
var incscore = 0;
var inccount = 0;
var idparts = anss[i].id.split("_");
var ansid = idparts[1];
var html = anss[i].innerHTML;
if(html === "<center>✔</center>"){
inccount = 1;
incscore = 2;
}else if(html === "<center>❓</center>"){
incscore = 1;
}
var intansid = parseInt(ansid)
if(answers.indexOf(intansid) == -1){
answers.push(intansid);
}
if(ansid in count){
count[ansid] += inccount;
}else{
count[ansid] = inccount;
}
if(ansid in scores){
scores[ansid] += incscore;
}else{
scores[ansid] = incscore;
}
if(scores[ansid] > maxscore){
maxscore = scores[ansid];
}
}
for(i=0; i < answers.length; i++){
var ansid = answers[i].toString();
var el = $('#total'+ansid)[0];
var ins = "<center>"+count[ansid];
if(scores[ansid] >= maxscore){
ins += "<br/><span style=\\"color:blue\\">★</span>";
}
ins += "</center>";
el.innerHTML = ins;
}
}
function change_answer(ele, id){
// modify only the currently selected entry
if (in_modification_id != id) return;
var e = document.getElementById(ele.id);
var i = e.innerHTML;
var ans = true;"""
if mode == 0 then
header.page_js += """
if(i === "<center>✔</center>"){
ans = 0;
e.innerHTML = "<center>✘</center>"
e.style.color = "red";
}else{
ans = 1;
e.innerHTML = "<center>✔</center>";
e.style.color = "green";
}"""
else
header.page_js += """
if(i === "<center>✔</center>"){
ans = 1;
e.innerHTML = "<center>❓</center>"
e.style.color = "#B8860B";
}else if(i === "<center>❓</center>"){
ans = 0;
e.innerHTML = "<center>✘</center>"
e.style.color = "red";
}else{
ans = 2;
e.innerHTML = "<center>✔</center>";
e.style.color = "green";
}"""
end
header.page_js += """
var a = ele.id.split('_')
var pid = a[1]
var aid = a[2]
update_scores();
$.ajax({
type: "POST",
url: "./rest/answer",
data: {
answer_id: aid,
pers_id: pid,
answer: ans
}
});
}
function change_temp_answer(ele){
var e = document.getElementById(ele.id);
var i = e.innerHTML;"""
if mode == 0 then
header.page_js += """
if(i === "<center>✔</center>"){
e.innerHTML = "<center>✘</center>"
e.style.color = "red";
}else{
e.innerHTML = "<center>✔</center>";
e.style.color = "green";
}
"""
else
header.page_js += """
if(i === "<center>✔</center>"){
e.innerHTML = "<center>❓</center>";
e.style.color = "#B8860B";
}else if(i === "<center>❓</center>"){
e.innerHTML = "<center>✘</center>"
e.style.color = "red";
}else{
e.innerHTML = "<center>✔</center>";
e.style.color = "green";
}
"""
end
header.page_js += """
update_scores();
}
function add_part(ele){
var e = document.getElementById(ele.id);
var pname = document.getElementById("new_name").value;
var arr = e.id.split("_");
var mid = arr[1];
var ans = $('#' + ele.id).parent().parent().parent().children(".answer");
ansmap = {};
for(i=0;i<ans.length;i++){
var curr = ans.eq(i)
"""
if mode == 0 then
header.page_js += """
if(curr[0].innerHTML === "<center>✔</center>"){
ansmap[curr.attr('id')] = 1
}else{
ansmap[curr.attr('id')] = 0
}"""
else
header.page_js += """
if(curr[0].innerHTML === "<center>✔</center>"){
ansmap[curr.attr('id')] = 2
}else if(curr[0].innerHTML === "<center>❓</center>"){
ansmap[curr.attr('id')] = 1
}else{
ansmap[curr.attr('id')] = 0
}"""
end
header.page_js += """
}
$.ajax({
type: "POST",
url: "./rest/meetup/new_pers",
data: {
meetup_id: mid,
persname: pname,
answers: $.param(ansmap)
}
})
.done(function(data){
location.reload();
})
.fail(function(data){
//TODO: Notify of failure
});
}
function remove_people(ele){
var arr = ele.id.split("_")
var pid = arr[1]
$('#' + ele.id).parent().parent().parent().remove();
update_scores();
$.ajax({
type: "POST",
url: "./rest/people",
data: {
method: "DELETE",
p_id: pid
}
});
}
// ID of line currently open for modification
var in_modification_id = null;
function modify_people(ele, id){
if (in_modification_id != null) {
// reset to normal values
$('#modify_'+in_modification_id).text("Modify or delete");
$('#modify_'+in_modification_id).attr("class", "btn btn-xs btn-warning");
$('#line_'+in_modification_id).css("background-color", "");
$('#delete_'+in_modification_id).css("display", "none");
}
if (in_modification_id != id) {
// activate modifiable mode
$('#modify_'+id).text("Done");
$('#modify_'+id).attr("class", "btn btn-xs btn-success");
$('#line_'+id).css("background-color", "LightYellow");
$('#delete_'+id).show();
in_modification_id = id;
} else {
in_modification_id = null;
}
}
"""
end
redef fun rendering do
if meetup == null then
add((new OpportunityHomePage).write_to_string)
return
end
add header
var db = new OpportunityDB.open("opportunity")
add meetup.to_html(db)
db.close
add footer
end
end
redef class Meetup
# Build the HTML for `self`
fun to_html(db: OpportunityDB): Streamable do
var t = new Template
t.add """
<div class="container">
<div class="page-header">
<center><h1>{{{name}}}</h1></center>
"""
if not date.is_empty then t.add """
<center><h4>When: {{{date}}}</h4></center>"""
if not place.is_empty then t.add """
<center><h4>Where: {{{place}}}</h4></center>"""
t.add """
</div>
<table class="table">
"""
t.add "<th>Participant name</th>"
for i in answers(db) do
t.add "<th class=\"text-center\">"
t.add i.to_s
t.add "</th>"
end
t.add "<th></th>"
t.add "</tr>"
for i in participants(db) do
i.load_answers(db, self)
t.add "<tr id=\"line_{i.id}\">"
t.add "<td>"
t.add i.to_s
t.add "</td>"
for j, k in i.answers do
var color
if answer_mode == 0 then
if k == 1 then
color = "green"
else
color = "red"
end
else
if k == 2 then
color = "green"
else if k == 1 then
color = "#B8860B"
else
color = "red"
end
end
t.add """<td class="answer" onclick="change_answer(this, {{{i.id}}})" id="answer_{{{j.id}}}_{{{i.id}}}" style="color:{{{color}}}">"""
t.add "<center>"
if answer_mode == 0 then
if k == 1 then
t.add "✔"
else
t.add "✘"
end
else
if k == 2 then
t.add "✔"
else if k == 1 then
t.add "❓"
else
t.add "✘"
end
end
t.add "</center></td>"
end
t.add """<td class="opportunity-action"><center><button class="btn btn-xs btn-warning" type="button" onclick="modify_people(this, {{{i.id}}})" id="modify_{{{i.id}}}">Modify or delete</button>&nbsp;"""
t.add """<button class="btn btn-xs btn-danger" type="button" onclick="remove_people(this)" id="delete_{{{i.id}}}" style="display: none;">Delete</button></center></td>"""
t.add "</tr>"
end
t.add """
<tr id="newrow" style="background-color: LightYellow">
<td><input id="new_name" type="text" placeholder="Your name" class="input-large"></td>
"""
for i in answers(db) do
t.add "<td class=\"answer\" id=\"newans_{i.id}\" onclick=\"change_temp_answer(this)\" style=\"color:red;\"><center>✘</center></td>"
end
t.add """
<td><center><span id="add_{{{id}}}" onclick="add_part(this)" style="color:green;" class="action"><button class="btn btn-xs btn-success" type="button">Done</button></span></center></td>"""
t.add "</tr>"
# Compute score for each answer
var scores = new HashMap[Int, Int]
var maxsc = 0
for i in answers(db) do
scores[i.id] = i.score(db)
if scores[i.id] > maxsc then maxsc = scores[i.id]
end
t.add """
<tr id="total">
<th>Total</th>
"""
for i in answers(db) do
t.add """<th id="total{{{i.id}}}"><center>{{{i.count(db)}}}"""
if scores.has_key(i.id) and scores[i.id] >= maxsc then
t.add """<br/><span style="color:blue">★</span>"""
end
t.add "</center></th>"
end
t.add "</th>"
t.add """
<th></th>
</tr>"""
t.add "</table>"
t.add "</div>"
return t
end
end

View File

@@ -0,0 +1,6 @@
#!/usr/bin/env pike
int main(int argc, array argv) {
return 0;
}

275
samples/SAS/detect_phi.sas Normal file

@@ -0,0 +1,275 @@
%macro check_dataset(dset =, obs_lim = max, eldest_age = 89) ;
%local i ;
%local inset_name ;
%let inset_name = &dset ;
%if %lowcase(&obs_lim) = max %then %do ;
%** Nothing ;
%end ;
%else %do ;
proc surveyselect
data = &inset_name
out = __sub_dset
method = srs
sampsize = &obs_lim SELECTALL
seed = 1234567
noprint
;
run;
%let dset = __sub_dset ;
%end ;
%macro check_varname(regx, msg) ;
create table possible_bad_vars as
select name, label
from these_vars
where prxmatch(compress("/(&regx)/i"), name)
;
%if &sqlobs > 0 %then %do ;
insert into phi_warnings(dset, variable, label, warning)
select "&inset_name" as dset, name, label, "&msg"
from possible_bad_vars
;
%end ;
%mend check_varname ;
%macro check_vars_for_mrn(length_limit = 6, obs_lim = max) ;
%local char ;
%let char = 2 ;
proc sql noprint ;
select name
into :mrn_array separated by ' '
from these_vars
where type = &char and length ge &length_limit
;
quit ;
%if &sqlobs > 0 %then %do ;
%put Checking these vars for possible MRN contents: &mrn_array ;
data __gnu ;
retain
mrn_regex_handle
badcount
;
set &inset_name (obs = &obs_lim keep = &mrn_array) ;
if _n_ = 1 then do ;
mrn_regex_handle = prxparse("/&mrn_regex/") ;
badcount = 0 ;
end ;
array p &mrn_array ;
do i = 1 to dim(p) ;
if prxmatch(mrn_regex_handle, p{i}) then do ;
badvar = vname(p{i}) ;
badvalue = p{i} ;
badcount = _n_ ;
output ;
end ;
keep badvar badvalue badcount ;
end ;
run ;
proc sql noprint ;
select compress(put(max(badcount), best.))
into :badcount
from __gnu
;
insert into phi_warnings(dset, variable, warning)
select distinct "&inset_name", badvar, "Could this var hold MRN values? Contents of %trim(&badcount) records match the pattern given for MRN values. MRNs should never move across sites."
from __gnu ;
drop table __gnu ;
quit ;
%end ;
%mend check_vars_for_mrn ;
%macro check_vars_for_oldsters(eldest_age = 89, obs_lim = max) ;
%local dtfmts ;
%let dtfmts = 'B8601DA','B8601DN','B8601DT','B8601DZ','B8601LZ','B8601TM','B8601TZ','DATE','DATEAMPM','DATETIME','DAY','DDMMYY',
'DDMMYYB','DDMMYYC','DDMMYYD','DDMMYYN','DDMMYYP','DDMMYYS','DOWNAME','DTDATE','DTMONYY','DTWKDATX','DTYEAR',
'DTYYQC','E8601DA','E8601DN','E8601DT','E8601DZ','E8601LZ','E8601TM','E8601TZ','HHMM','HOUR','JULDAY','JULIAN',
'MMDDYY','MMDDYYB','MMDDYYC','MMDDYYD','MMDDYYN','MMDDYYP','MMDDYYS','MMSS','MMYY','MMYY','MONNAME','MONTH','MONYY',
'PDJULG','PDJULI','QTR','QTRR','WEEKDATE','WEEKDATX','WEEKDAY','WEEKU','WEEKV','WEEKW','WORDDATE','WORDDATX',
'YEAR','YYMM','YYMMC','YYMMD','YYMMN','YYMMP','YYMMS','YYMMDD','YYMMDDB','YYMMDDC','YYMMDDD','YYMMDDN','YYMMDDP',
'YYMMDDS','YYMON','YYQ','YYQC','YYQD','YYQN','YYQP','YYQS','YYQR','YYQRC','YYQRD','YYQRN','YYQRP','YYQRS' ;
%local num ;
%let num = 1 ;
proc sql noprint ;
select name
into :dat_array separated by ' '
from these_vars
where type = &num and (format in (&dtfmts) or lowcase(name) like '%date%')
;
/* added by cb to shorten the process of looking at all dates */
%if &sqlobs > 0 %then %do ;
%put Checking these vars for possible DOB contents: &dat_array ;
select 'min(' || trim(name) || ') as ' || name into :var_list separated by ','
from these_vars
where type = &num and (format in (&dtfmts) or lowcase(name) like '%date%')
;
create table __gnu as
select &var_list from &inset_name
;
/* end cb additions */
quit ;
data __gnu ;
set __gnu (obs = &obs_lim keep = &dat_array) ;
array d &dat_array ;
do i = 1 to dim(d) ;
if n(d{i}) then maybe_age = %calcage(bdtvar = d{i}, refdate = "&sysdate9."d) ;
if maybe_age ge &eldest_age then do ;
badvar = vname(d{i}) ;
badvalue = d{i} ;
output ;
end ;
keep badvar badvalue maybe_age ;
end ;
run ;
proc sql outobs = 30 nowarn ;
insert into phi_warnings(dset, variable, warning)
select distinct "&inset_name", badvar, "If this is a date, at least one value is " || compress(put(maybe_age, best.)) || " years ago, which is older than &eldest_age.. " ||
"If this date applies to a person, the record is probably PHI."
from __gnu ;
drop table __gnu ;
quit ;
%end ;
%else %do ;
%put No obvious date variables found in &inset_name.--skipping age checks. ;
%end ;
%mend check_vars_for_oldsters ;
proc contents noprint data = &inset_name out = these_vars ;
run ;
proc sql noprint ;
create table phi_warnings (dset char(50), variable char(256), label char(256), warning char(200)) ;
%check_varname(regx = mrn|hrn , msg = %str(Name suggests this var may be an MRN, which should never move across sites.)) ;
%check_varname(regx = birth_date|BirthDate|DOB|BDate , msg = %str(Name suggests this var may be a date of birth.)) ;
%check_varname(regx = SSN|SocialSecurityNumber|social_security_number|socsec, msg = %str(Name suggests this var may be a social security number.)) ;
%if %symexist(locally_forbidden_varnames) %then %do ;
%check_varname(regx = &locally_forbidden_varnames, msg = %str(May be on the locally defined list of variables not allowed to be sent to other sites.)) ;
%end ;
quit ;
%check_vars_for_mrn(obs_lim = &obs_lim) ;
%check_vars_for_oldsters(obs_lim = &obs_lim, eldest_age = &eldest_age) ;
title3 "WARNINGS for dataset &inset_name:" ;
proc sql noprint ;
select count(*) as num_warns into :num_warns from phi_warnings ;
%if &num_warns = 0 %then %do ;
reset print outobs = 5 NOWARN ;
select "No obvious PHI-like data elements in &inset_name--BUT PLEASE INSPECT THE CONTENTS AND PRINTs TO FOLLOW" as x label = "No warnings for &inset_name"
from &inset_name
;
%do i = 1 %to 5 ;
%put No obvious phi-like data elements in &inset_name. BUT PLEASE INSPECT THE CONTENTS AND PRINTs CAREFULLY TO MAKE SURE OF THIS! ;
%end ;
%end ;
%else %do ;
reset print ;
select variable, warning from phi_warnings
order by variable, warning
;
quit ;
%end ;
title3 "Dataset &inset_name" ;
proc contents data = &inset_name varnum ;
run ;
/*
proc print data = &inset_name (obs = 20) ;
run ;
*/
** TODO: make the print print out recs that trip the value warnings. ;
proc sql number ;
select *
from &inset_name (obs = 20)
;
quit ;
quit ;
%RemoveDset(dset = __sub_dset) ;
%RemoveDset(dset = possible_bad_vars) ;
%RemoveDset(dset = phi_warnings) ;
%RemoveDset(dset = these_vars) ;
%mend check_dataset ;
%macro detect_phi(transfer_lib, obs_lim = max, eldest_age = 89) ;
%put ;
%put ;
%put ============================================================== ;
%put ;
%put Macro detect_phi: ;
%put ;
%put Checking all datasets found in %sysfunc(pathname(&transfer_lib)) for the following signs of PHI: ;
%put - Variable names signifying sensitive items like 'MRN', 'birth_date', 'SSN' and so forth. ;
%if %symexist(locally_forbidden_varnames) %then %do ;
%put - Variable names on the list defined in the standard macro variable locally_forbidden_varnames (here those names are: &locally_forbidden_varnames). ;
%end ;
%put - Contents of CHARACTER variables that match the pattern given in the standard macro variable mrn_regex (here that var is &mrn_regex) ;
%put Please note that numeric variables ARE NOT CHECKED FOR MRN-LIKE CONTENT. ;
%put - The contents of date variables (as divined by their formats) for values that, if they were DOBs, would indicate a person older than &eldest_age years. ;
%put ;
%put THIS IS BETA SOFTWARE-PLEASE SCRUTINIZE THE RESULTS AND REPORT PROBLEMS TO pardee.r@ghc.org. ;
%put ;
%put THIS MACRO IS NOT A SUBSTITUTE FOR HUMAN INSPECTION AND THOUGHT--PLEASE CAREFULLY INSPECT ALL VARIABLES--WHETHER ;
%put OR NOT THEY TRIP A WARNING--TO MAKE SURE THE DATA COMPORTS WITH YOUR DATA SHARING AGREEMENT!!! ;
%put THIS MACRO IS NOT A SUBSTITUTE FOR HUMAN INSPECTION AND THOUGHT--PLEASE CAREFULLY INSPECT ALL VARIABLES--WHETHER ;
%put OR NOT THEY TRIP A WARNING--TO MAKE SURE THE DATA COMPORTS WITH YOUR DATA SHARING AGREEMENT!!! ;
%put ;
%put THIS MACRO IS NOT A SUBSTITUTE FOR HUMAN INSPECTION AND THOUGHT--PLEASE CAREFULLY INSPECT ALL VARIABLES--WHETHER ;
%put OR NOT THEY TRIP A WARNING--TO MAKE SURE THE DATA COMPORTS WITH YOUR DATA SHARING AGREEMENT!!! ;
%put THIS MACRO IS NOT A SUBSTITUTE FOR HUMAN INSPECTION AND THOUGHT--PLEASE CAREFULLY INSPECT ALL VARIABLES--WHETHER ;
%put OR NOT THEY TRIP A WARNING--TO MAKE SURE THE DATA COMPORTS WITH YOUR DATA SHARING AGREEMENT!!! ;
%put ;
%put THIS MACRO IS NOT A SUBSTITUTE FOR HUMAN INSPECTION AND THOUGHT--PLEASE CAREFULLY INSPECT ALL VARIABLES--WHETHER ;
%put OR NOT THEY TRIP A WARNING--TO MAKE SURE THE DATA COMPORTS WITH YOUR DATA SHARING AGREEMENT!!! ;
%put THIS MACRO IS NOT A SUBSTITUTE FOR HUMAN INSPECTION AND THOUGHT--PLEASE CAREFULLY INSPECT ALL VARIABLES--WHETHER ;
%put OR NOT THEY TRIP A WARNING--TO MAKE SURE THE DATA COMPORTS WITH YOUR DATA SHARING AGREEMENT!!! ;
%put ;
%put THIS MACRO IS NOT A SUBSTITUTE FOR HUMAN INSPECTION AND THOUGHT--PLEASE CAREFULLY INSPECT ALL VARIABLES--WHETHER ;
%put OR NOT THEY TRIP A WARNING--TO MAKE SURE THE DATA COMPORTS WITH YOUR DATA SHARING AGREEMENT!!! ;
%put THIS MACRO IS NOT A SUBSTITUTE FOR HUMAN INSPECTION AND THOUGHT--PLEASE CAREFULLY INSPECT ALL VARIABLES--WHETHER ;
%put OR NOT THEY TRIP A WARNING--TO MAKE SURE THE DATA COMPORTS WITH YOUR DATA SHARING AGREEMENT!!! ;
%put ;
%put ;
%put ============================================================== ;
%put ;
%put ;
title1 "PHI-Detection Report for the datasets in %sysfunc(pathname(&transfer_lib))." ;
title2 "please inspect all output carefully to make sure it comports with your data sharing agreement!!!" ;
proc sql noprint ;
** describe table dictionary.tables ;
select trim(libname) || '.' || memname as dset
into :d1-:d999
from dictionary.tables
where libname = "%upcase(&transfer_lib)" AND
memtype = 'DATA'
;
%local num_dsets ;
%let num_dsets = &sqlobs ;
quit ;
%local i ;
%if &num_dsets = 0 %then %do i = 1 %to 10 ;
%put ERROR: NO DATASETS FOUND IN &transfer_lib!!!! ;
%end ;
%do i = 1 %to &num_dsets ;
%put about to check &&d&i ;
%check_dataset(dset = &&d&i, obs_lim = &obs_lim, eldest_age = &eldest_age) ;
%end ;
%mend detect_phi ;

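The `%check_varname` step in the SAS macro above is, at bottom, a case-insensitive regex match over variable names. A rough Python sketch of the same idea (the function name, dataset, and variable names here are illustrative, not part of the macro):

```python
import re

# Patterns mirror the ones passed to %check_varname in the SAS macro above.
CHECKS = [
    (r"mrn|hrn", "Name suggests this var may be an MRN."),
    (r"birth_date|BirthDate|DOB|BDate",
     "Name suggests this var may be a date of birth."),
    (r"SSN|SocialSecurityNumber|social_security_number|socsec",
     "Name suggests this var may be a social security number."),
]

def check_varnames(varnames):
    """Return (name, warning) pairs for names matching any PHI pattern."""
    warnings = []
    for name in varnames:
        for regx, msg in CHECKS:
            if re.search(regx, name, re.IGNORECASE):
                warnings.append((name, msg))
    return warnings

print(check_varnames(["pat_mrn_id", "enc_date", "dob", "height_cm"]))
```

As in the macro, this only flags suspicious *names*; the contents checks (`%check_vars_for_mrn`, `%check_vars_for_oldsters`) are a separate pass.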

@@ -0,0 +1,7 @@
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?email
WHERE {
?person a foaf:Person.
?person foaf:name ?name.
?person foaf:mbox ?email.
}


@@ -0,0 +1,40 @@
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT DISTINCT ?s ?label
WHERE {
SERVICE <http://api.finto.fi/sparql>
{
SELECT DISTINCT ?s ?label ?plabel ?alabel ?hlabel (GROUP_CONCAT(DISTINCT STR(?type)) as ?types)
WHERE {
GRAPH <http://www.yso.fi/onto/kauno/>
{
?s rdf:type <http://www.w3.org/2004/02/skos/core#Concept>
{
?s rdf:type ?type .
?s ?prop ?match .
FILTER (
strstarts(lcase(str(?match)), "test") && !(?match != ?label && strstarts(lcase(str(?label)), "test"))
)
OPTIONAL {
?s skos:prefLabel ?label .
FILTER (langMatches(lang(?label), "en"))
}
OPTIONAL { # in case previous OPTIONAL block gives no labels
?s ?prop ?match .
?s skos:prefLabel ?label .
FILTER (langMatches(lang(?label), lang(?match))) }
}
FILTER NOT EXISTS { ?s owl:deprecated true }
}
BIND(IF(?prop = skos:prefLabel && ?match != ?label, ?match, "") as ?plabel)
BIND(IF(?prop = skos:altLabel, ?match, "") as ?alabel)
BIND(IF(?prop = skos:hiddenLabel, ?match, "") as ?hlabel)
VALUES (?prop) { (skos:prefLabel) (skos:altLabel) (skos:hiddenLabel) }
}
GROUP BY ?match ?s ?label ?plabel ?alabel ?hlabel ?prop
ORDER BY lcase(str(?match)) lang(?match)
LIMIT 10
}
}

samples/SQL/videodb.cql Normal file

@@ -0,0 +1,85 @@
CREATE KEYSPACE videodb WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
use videodb;
// Basic entity table
// Object mapping ?
CREATE TABLE users (
username varchar,
firstname varchar,
lastname varchar,
email varchar,
password varchar,
created_date timestamp,
total_credits int,
credit_change_date timeuuid,
PRIMARY KEY (username)
);
// One-to-many entity table
CREATE TABLE videos (
videoid uuid,
videoname varchar,
username varchar,
description varchar,
tags list<varchar>,
upload_date timestamp,
PRIMARY KEY (videoid)
);
// One-to-many from the user point of view
// Also known as a lookup table
CREATE TABLE username_video_index (
username varchar,
videoid uuid,
upload_date timestamp,
videoname varchar,
PRIMARY KEY (username, videoid)
);
// Counter table
CREATE TABLE video_rating (
videoid uuid,
rating_counter counter,
rating_total counter,
PRIMARY KEY (videoid)
);
// Creating index tables for tag keywords
CREATE TABLE tag_index (
tag varchar,
videoid uuid,
timestamp timestamp,
PRIMARY KEY (tag, videoid)
);
// Comments as a many-to-many
// Looking from the video side to many users
CREATE TABLE comments_by_video (
videoid uuid,
username varchar,
comment_ts timestamp,
comment varchar,
PRIMARY KEY (videoid,comment_ts,username)
) WITH CLUSTERING ORDER BY (comment_ts DESC, username ASC);
// looking from the user side to many videos
CREATE TABLE comments_by_user (
username varchar,
videoid uuid,
comment_ts timestamp,
comment varchar,
PRIMARY KEY (username,comment_ts,videoid)
) WITH CLUSTERING ORDER BY (comment_ts DESC, videoid ASC);
// Time series wide row with reverse comparator
CREATE TABLE video_event (
videoid uuid,
username varchar,
event varchar,
event_timestamp timeuuid,
video_timestamp bigint,
PRIMARY KEY ((videoid,username), event_timestamp,event)
) WITH CLUSTERING ORDER BY (event_timestamp DESC,event ASC);

samples/SQL/videodb.ddl Normal file

@@ -0,0 +1,85 @@
CREATE KEYSPACE videodb WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
use videodb;
// Basic entity table
// Object mapping ?
CREATE TABLE users (
username varchar,
firstname varchar,
lastname varchar,
email varchar,
password varchar,
created_date timestamp,
total_credits int,
credit_change_date timeuuid,
PRIMARY KEY (username)
);
// One-to-many entity table
CREATE TABLE videos (
videoid uuid,
videoname varchar,
username varchar,
description varchar,
tags list<varchar>,
upload_date timestamp,
PRIMARY KEY (videoid)
);
// One-to-many from the user point of view
// Also known as a lookup table
CREATE TABLE username_video_index (
username varchar,
videoid uuid,
upload_date timestamp,
videoname varchar,
PRIMARY KEY (username, videoid)
);
// Counter table
CREATE TABLE video_rating (
videoid uuid,
rating_counter counter,
rating_total counter,
PRIMARY KEY (videoid)
);
// Creating index tables for tag keywords
CREATE TABLE tag_index (
tag varchar,
videoid uuid,
timestamp timestamp,
PRIMARY KEY (tag, videoid)
);
// Comments as a many-to-many
// Looking from the video side to many users
CREATE TABLE comments_by_video (
videoid uuid,
username varchar,
comment_ts timestamp,
comment varchar,
PRIMARY KEY (videoid,comment_ts,username)
) WITH CLUSTERING ORDER BY (comment_ts DESC, username ASC);
// looking from the user side to many videos
CREATE TABLE comments_by_user (
username varchar,
videoid uuid,
comment_ts timestamp,
comment varchar,
PRIMARY KEY (username,comment_ts,videoid)
) WITH CLUSTERING ORDER BY (comment_ts DESC, videoid ASC);
// Time series wide row with reverse comparator
CREATE TABLE video_event (
videoid uuid,
username varchar,
event varchar,
event_timestamp timeuuid,
video_timestamp bigint,
PRIMARY KEY ((videoid,username), event_timestamp,event)
) WITH CLUSTERING ORDER BY (event_timestamp DESC,event ASC);


@@ -0,0 +1,136 @@
# -*- coding: utf-8 -*-
#
# Python/Sage functions for working with polynomials in a single
# unknown (x).
#
# Copyright (C) 2014-2015, David Abián <davidabian [at] davidabian.com>
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the Free
# Software Foundation, either version 3 of the License, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
# You should have received a copy of the GNU General Public License along with
# this program. If not, see <http://www.gnu.org/licenses/>.
def pols (grado=-1, K=GF(2), mostrar=False):
"""Returns the list of constant and non-constant polynomials with
monic coefficients and degree less than or equal to that specified.
If the given degree is not valid, returns an empty list.
"""
lpols = []
if not grado.is_integer():
grado = grado.round()
if grado >= 0:
var('x')
xs = vector([(x^i) for i in range(grado+1)])
V = VectorSpace(K,grado+1)
lpols = [cs*xs for cs in V]
if mostrar:
for pol in lpols:
print pol
return lpols
def polsNoCtes (grado=-1, K=GF(2), mostrar=False):
"""Returns the list of non-constant polynomials with monic coefficients
and degree less than or equal to that specified.
If the given degree is not valid, returns an empty list.
"""
lpols = []
if not grado.is_integer():
grado = grado.round()
if grado >= 0:
var('x')
xs = vector([(x^i) for i in range(grado+1)])
for cs in K^(grado+1):
if cs[:grado] != vector(grado*[0]): # non-constant
lpols += [cs*xs]
if mostrar:
for pol in lpols:
print pol
return lpols
def polsMismoGrado (grado=-1, K=GF(2), mostrar=False):
"""Returns the list of polynomials with monic coefficients of the
specified degree.
If the given degree is not valid, returns an empty list.
"""
lpols = []
if not grado.is_integer():
grado = grado.round()
if grado >= 0:
var('x')
xs = vector([(x^(grado-i)) for i in [0..grado]])
for cs in K^(grado+1):
if cs[0] != 0: # polynomials of the same degree
lpols += [cs*xs]
if mostrar:
for pol in lpols:
print pol
return lpols
def excluirReducibles (lpols=[], mostrar=False):
"""Filters a given list of polynomials with monic coefficients and
returns the irreducible ones.
"""
var('x')
irreds = []
for p in lpols:
fp = (p.factor_list())
if len(fp) == 1 and fp[0][1] == 1:
irreds += [p]
if mostrar:
for pol in irreds:
print pol
return irreds
def vecPol (vec=random_vector(GF(2),0)):
"""Transforms the given coefficients, in vector form, into the
polynomial they represent.
For example, vecPol(vector([1,0,3,1])) yields x³ + 3*x + 1.
For the opposite function, see polVec().
"""
var('x')
xs = vector([x^(len(vec)-1-i) for i in range(len(vec))])
return vec*xs
def polVec (p=None):
"""Returns the vector of coefficients of the given polynomial that
accompany the unknown x, from highest to lowest degree.
For example, polVec(x^3 + 3*x + 1) yields the vector (1, 0, 3, 1).
For the opposite function, see vecPol().
"""
cs = []
if p != None:
var('x')
p(x) = p
for i in [0..p(x).degree(x)]:
cs.append(p(x).coefficient(x,i))
cs = list(reversed(cs))
return vector(cs)
def completar2 (p=0):
"""Applies the completing-the-square method for parabolas to the given
degree-2 polynomial and returns it in its new form.
If the given polynomial is not valid, returns 0.
For example, completar2(3*x^2 + 12*x + 5) yields 3*(x + 2)^2 - 7.
"""
var('x')
p(x) = p.expand()
if p(x).degree(x) != 2:
p(x) = 0
else:
cs = polVec(p(x))
p(x) = cs[0]*(x+(cs[1]/(2*cs[0])))^2+(4*cs[0]*cs[2]-cs[1]^2)/(4*cs[0])
return p(x)

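The vertex form computed by `completar2` above can be sanity-checked outside Sage: for a*x^2 + b*x + c the completed square is a*(x + b/(2a))^2 + (4ac - b^2)/(4a). A plain-Python sketch (the helper name `complete_square` is illustrative):

```python
# Plain-Python check of the completing-the-square identity used in
# completar2 above:
#   a*x^2 + b*x + c == a*(x + h)**2 + k
# with h = b/(2*a) and k = (4*a*c - b**2)/(4*a).
def complete_square(a, b, c):
    """Return (a, h, k) such that a*x^2 + b*x + c == a*(x + h)^2 + k."""
    h = b / (2 * a)
    k = (4 * a * c - b * b) / (4 * a)
    return a, h, k

# The docstring's example: 3*x^2 + 12*x + 5 -> 3*(x + 2)^2 - 7
a, h, k = complete_square(3, 12, 5)
print(a, h, k)  # 3 2.0 -7.0
```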

@@ -0,0 +1,48 @@
ceph:
pkg.installed:
- refresh: True
service:
- dead
- enable: False
- require:
- file: /etc/eval.conf
{% if grains['os'] == 'Ubuntu'%}
- file: /etc/apt/sources.list.d/ceph.list
{% endif %}
ceph-mds:
pkg.installed:
- require:
- pkg: ceph
include:
- ceph.extras
{% if grains['os'] == 'Ubuntu'%}
/etc/apt/sources.list.d/ceph.list:
file.managed:
- source: salt://ceph/apt.list
- template: jinja
- require:
- cmd: repo-key
repo-key:
cmd.run:
- name: 'wget -q -O - https://raw.github.com/release.asc | sudo apt-key add -'
- unless: 'apt-key list | grep -q -i ceph'
{% endif %}
/etc/ceph/ceph.conf:
file.managed:
- source: salt://ceph/eval.conf
- template: jinja
- makedirs: true
/var/lib/ceph:
file.directory:
- names:
{% for dir in 'mon.a','osd.0','osd.1','mds.a' %}
- /var/lib/ceph/{{ dir.split('.')[0] }}/ceph-{{ dir.split('.')[1] }}
{% endfor %}
- require:
- pkg: ceph


@@ -0,0 +1,4 @@
base:
'*':
- packages
- coffeestats


@@ -0,0 +1,46 @@
(library (lambdastar)
(export (rename (lambda* lambda)))
(import (rnrs))
(define-syntax lambda*
(syntax-rules ()
((_ a* e* ...)
(lambda*-h a* (let () e* ...)))))
(define-syntax lambda*-h
(syntax-rules ()
((_ () e)
(lambda a* (if (null? a*) e (apply (e) a*))))
((_ (a a* ...) e) (posary-h (a a* ...) e))
((_ (a a* ... . rest) e)
(polyvariadic-h (a a* ... . rest) e))
((_ a* e) (lambda a* e))))
(define-syntax posary-h
(syntax-rules ()
((_ (a a* ...) e)
(letrec
((rec
(case-lambda
(() rec)
((a a* ...) e)
((a a* ... . rest)
(apply (rec a a* ...) rest))
(some (get-more rec some)))))
rec))))
(define-syntax polyvariadic-h
(syntax-rules ()
((_ (a a* ... . rest) e)
(letrec
((rec
(case-lambda
(() rec)
((a a* ... . rest) e)
(some (get-more rec some)))))
rec))))
(define get-more
(lambda (rec some)
(lambda more
(apply rec (append some more))))))

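The `lambda*` form above builds functions that accept their positional arguments in any grouping (auto-currying, with extras forwarded to the result). A rough Python sketch of the same behaviour, under the assumption of a fixed arity n (the names `lambda_star` and `add3` are hypothetical):

```python
from functools import wraps

def lambda_star(n):
    """Decorator sketch of the lambda* idea: a function of n positional
    arguments may receive them spread across any number of calls."""
    def deco(f):
        @wraps(f)
        def step(*got):
            if len(got) >= n:
                # Enough arguments: apply f; forward any extras to the result.
                result = f(*got[:n])
                return result(*got[n:]) if got[n:] else result
            # Not enough yet: return a collector awaiting more arguments.
            return lambda *more: step(*(got + more))
        return step
    return deco

@lambda_star(3)
def add3(a, b, c):
    return a + b + c

print(add3(1)(2)(3), add3(1, 2)(3), add3(1, 2, 3))  # 6 6 6
```

Unlike the Scheme version, this sketch does not handle the zero-argument and rest-argument cases that `posary-h` and `polyvariadic-h` cover.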

@@ -1 +1,2 @@
the green potato=la pomme de terre verte
le nouveau type de musique=the new type of music


@@ -0,0 +1,183 @@
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix gndo: <http://d-nb.info/standards/elementset/gnd#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
<http://d-nb.info/gnd/118514768>
a <http://d-nb.info/standards/elementset/gnd#Pseudonym> ;
foaf:page <http://de.wikipedia.org/wiki/Bertolt_Brecht> ;
owl:sameAs <http://dbpedia.org/resource/Bertolt_Brecht>, <http://viaf.org/viaf/2467372>, <http://www.filmportal.de/person/261E2D3A93D54134BF8AB5F21F0B2399> ;
gndo:gndIdentifier "118514768" ;
gndo:oldAuthorityNumber "(DE-588)1022091077", "(DE-588a)118514768", "(DE-588a)141399074", "(DE-588a)139089691", "(DE-588a)141300248", "(DE-588a)136949541", "(DE-588a)134336232", "(DE-588a)12794544X", "(DE-588a)12736630X", "(DE-588a)12722811X", "(DE-588a)127228098", "(DE-588a)127228101" ;
gndo:variantNameForThePerson "Brêcht, Becton", "Brecht, Bert", "Brecht, Bertolʹ", "Brecht, Berthold", "Brecht, Bertholt", "Brecht, Bertold", "Brecht, B.", "Brecht, Eugen Berthold Friedrich", "Brecht, ...", "Brecht-Eisler, ...", "Becht, Bertolt", "Beituo'erte-Bulaixite", "Berchito, B.", "Brechtas, B.", "Brechts, Bertolts", "Brehd, Berd", "Breht, Bertolt", "Brehts, Bertolts", "Breḳhṭ, Bārṭolṭ", "Brekt, Berṭolṭ", "Brekṭ, Berṭōlṭ", "Breḳṭ, Berṭôlṭ", "Breśṭ, Berṭalṭa", "Breṣṭa, Barṭolṭa", "Brišt, Bartūlt", "Brišt, Birtūld", "Brišt, Birtult", "Buchito, Berutorutu", "Bulaixite, Beituo'erte", "Bulaixite, ...", "Burehito, Berutoruto", "Burehito, ...", "B. B.", "Larsen, Berthold", "Mprecht, Mpertolt", "Mprecht, ...", "Pulaihsit'ê, Peit'oĉrht'ê", "Pulaihsit'ê, ...", "Pŭrehit'ŭ, Peŏt'olt'ŭ", "Bŭrehit'ŭ, Beŏt'olt'ŭ", "برشت، برتولد", "브레히트, 베르톨트", "ברכט, ברטולט", "贝·布莱希特", "布莱希特, 贝", "ブレヒト, ベルトルト" ;
gndo:variantNameEntityForThePerson [
gndo:forename "Becton" ;
gndo:surname "Brêcht"
], [
gndo:forename "Bert" ;
gndo:surname "Brecht"
], [
gndo:forename "Bertolʹ" ;
gndo:surname "Brecht"
], [
gndo:forename "Berthold" ;
gndo:surname "Brecht"
], [
gndo:forename "Bertholt" ;
gndo:surname "Brecht"
], [
gndo:forename "Bertold" ;
gndo:surname "Brecht"
], [
gndo:forename "B." ;
gndo:surname "Brecht"
], [
gndo:forename "Eugen Berthold Friedrich" ;
gndo:surname "Brecht"
], [
gndo:forename "..." ;
gndo:surname "Brecht"
], [
gndo:forename "..." ;
gndo:surname "Brecht-Eisler"
], [
gndo:forename "Bertolt" ;
gndo:surname "Becht"
], [ gndo:personalName "Beituo'erte-Bulaixite" ], [
gndo:forename "B." ;
gndo:surname "Berchito"
], [
gndo:forename "B." ;
gndo:surname "Brechtas"
], [
gndo:forename "Bertolts" ;
gndo:surname "Brechts"
], [
gndo:forename "Berd" ;
gndo:surname "Brehd"
], [
gndo:forename "Bertolt" ;
gndo:surname "Breht"
], [
gndo:forename "Bertolts" ;
gndo:surname "Brehts"
], [
gndo:forename "Bārṭolṭ" ;
gndo:surname "Breḳhṭ"
], [
gndo:forename "Berṭolṭ" ;
gndo:surname "Brekt"
], [
gndo:forename "Berṭōlṭ" ;
gndo:surname "Brekṭ"
], [
gndo:forename "Berṭôlṭ" ;
gndo:surname "Breḳṭ"
], [
gndo:forename "Berṭalṭa" ;
gndo:surname "Breśṭ"
], [
gndo:forename "Barṭolṭa" ;
gndo:surname "Breṣṭa"
], [
gndo:forename "Bartūlt" ;
gndo:surname "Brišt"
], [
gndo:forename "Birtūld" ;
gndo:surname "Brišt"
], [
gndo:forename "Birtult" ;
gndo:surname "Brišt"
], [
gndo:forename "Berutorutu" ;
gndo:surname "Buchito"
], [
gndo:forename "Beituo'erte" ;
gndo:surname "Bulaixite"
], [
gndo:forename "..." ;
gndo:surname "Bulaixite"
], [
gndo:forename "Berutoruto" ;
gndo:surname "Burehito"
], [
gndo:forename "..." ;
gndo:surname "Burehito"
], [ gndo:personalName "B. B." ], [
gndo:forename "Berthold" ;
gndo:surname "Larsen"
], [
gndo:forename "Mpertolt" ;
gndo:surname "Mprecht"
], [
gndo:forename "..." ;
gndo:surname "Mprecht"
], [
gndo:forename "Peit'oĉrht'ê" ;
gndo:surname "Pulaihsit'ê"
], [
gndo:forename "..." ;
gndo:surname "Pulaihsit'ê"
], [
gndo:forename "Peŏt'olt'ŭ" ;
gndo:surname "Pŭrehit'ŭ"
], [
gndo:forename "Beŏt'olt'ŭ" ;
gndo:surname "Bŭrehit'ŭ"
], [ gndo:personalName "برشت، برتولد" ], [
gndo:forename "베르톨트" ;
gndo:surname "브레히트"
], [
gndo:forename "ברטולט" ;
gndo:surname "ברכט"
], [ gndo:personalName "贝·布莱希特" ], [
gndo:forename "" ;
gndo:surname "布莱希特"
], [
gndo:forename "ベルトルト" ;
gndo:surname "ブレヒト"
] ;
gndo:preferredNameForThePerson "Brecht, Bertolt" ;
gndo:preferredNameEntityForThePerson [
gndo:forename "Bertolt" ;
gndo:surname "Brecht"
] ;
gndo:familialRelationship <http://d-nb.info/gnd/121608557>, <http://d-nb.info/gnd/119056011>, <http://d-nb.info/gnd/118738348>, <http://d-nb.info/gnd/137070411>, <http://d-nb.info/gnd/118809849>, <http://d-nb.info/gnd/119027615>, <http://d-nb.info/gnd/118940163>, <http://d-nb.info/gnd/118630091>, <http://d-nb.info/gnd/123783283>, <http://d-nb.info/gnd/118940155>, <http://d-nb.info/gnd/110005449>, <http://d-nb.info/gnd/13612495X>, <http://d-nb.info/gnd/123757398>, <http://d-nb.info/gnd/1030496250>, <http://d-nb.info/gnd/1030496366> ;
gndo:professionOrOccupation <http://d-nb.info/gnd/4185053-1>, <http://d-nb.info/gnd/4140241-8>, <http://d-nb.info/gnd/4052154-0>, <http://d-nb.info/gnd/4168391-2>, <http://d-nb.info/gnd/4053309-8>, <http://d-nb.info/gnd/4049050-6>, <http://d-nb.info/gnd/4294338-3> ;
gndo:playedInstrument <http://d-nb.info/gnd/4057587-1> ;
gndo:gndSubjectCategory <http://d-nb.info/standards/vocab/gnd/gnd-sc#12.2p>, <http://d-nb.info/standards/vocab/gnd/gnd-sc#15.1p>, <http://d-nb.info/standards/vocab/gnd/gnd-sc#15.3p> ;
gndo:geographicAreaCode <http://d-nb.info/standards/vocab/gnd/geographic-area-code#XA-DE> ;
gndo:languageCode <http://id.loc.gov/vocabulary/iso639-2/ger> ;
gndo:placeOfBirth <http://d-nb.info/gnd/4003614-5> ;
gndo:placeOfDeath <http://d-nb.info/gnd/4005728-8> ;
gndo:placeOfExile <http://d-nb.info/gnd/4010877-6>, <http://d-nb.info/gnd/4077258-5> ;
gndo:gender <http://d-nb.info/standards/vocab/gnd/Gender#male> ;
gndo:dateOfBirth "1898-02-10"^^xsd:date ;
gndo:dateOfDeath "1956-08-14"^^xsd:date .
<http://d-nb.info/gnd/121608557> gndo:preferredNameForThePerson "Brecht, Berthold Friedrich" .
<http://d-nb.info/gnd/119056011> gndo:preferredNameForThePerson "Banholzer, Paula" .
<http://d-nb.info/gnd/118738348> gndo:preferredNameForThePerson "Neher, Carola" .
<http://d-nb.info/gnd/137070411> gndo:preferredNameForThePerson "Banholzer, Frank" .
<http://d-nb.info/gnd/118809849> gndo:preferredNameForThePerson "Berlau, Ruth" .
<http://d-nb.info/gnd/119027615> gndo:preferredNameForThePerson "Steffin, Margarete" .
<http://d-nb.info/gnd/118940163> gndo:preferredNameForThePerson "Zoff, Marianne" .
<http://d-nb.info/gnd/118630091> gndo:preferredNameForThePerson "Weigel, Helene" .
<http://d-nb.info/gnd/123783283> gndo:preferredNameForThePerson "Reichel, Käthe" .
<http://d-nb.info/gnd/118940155> gndo:preferredNameForThePerson "Hiob, Hanne" .
<http://d-nb.info/gnd/110005449> gndo:preferredNameForThePerson "Brecht, Stefan" .
<http://d-nb.info/gnd/13612495X> gndo:preferredNameForThePerson "Brecht-Schall, Barbara" .
<http://d-nb.info/gnd/123757398> gndo:preferredNameForThePerson "Schall, Ekkehard" .
<http://d-nb.info/gnd/1030496250> gndo:preferredNameForThePerson "Brezing, Joseph Friedrich" .
<http://d-nb.info/gnd/1030496366> gndo:preferredNameForThePerson "Brezing, Friederike" .
<http://d-nb.info/gnd/4185053-1> gndo:preferredNameForTheSubjectHeading "Theaterregisseur" .
<http://d-nb.info/gnd/4140241-8> gndo:preferredNameForTheSubjectHeading "Dramatiker" .
<http://d-nb.info/gnd/4052154-0> gndo:preferredNameForTheSubjectHeading "Schauspieler" .
<http://d-nb.info/gnd/4168391-2> gndo:preferredNameForTheSubjectHeading "Lyriker" .
<http://d-nb.info/gnd/4053309-8> gndo:preferredNameForTheSubjectHeading "Schriftsteller" .
<http://d-nb.info/gnd/4049050-6> gndo:preferredNameForTheSubjectHeading "Regisseur" .
<http://d-nb.info/gnd/4294338-3> gndo:preferredNameForTheSubjectHeading "Drehbuchautor" .
<http://d-nb.info/gnd/4003614-5> gndo:preferredNameForThePlaceOrGeographicName "Augsburg" .
<http://d-nb.info/gnd/4005728-8> gndo:preferredNameForThePlaceOrGeographicName "Berlin" .
<http://d-nb.info/gnd/4010877-6> gndo:preferredNameForThePlaceOrGeographicName "Dänemark" .
<http://d-nb.info/gnd/4077258-5> gndo:preferredNameForThePlaceOrGeographicName "Schweden" .


@@ -0,0 +1,10 @@
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix ex: <http://example.org/stuff/1.0/> .
<http://www.w3.org/TR/rdf-syntax-grammar>
dc:title "RDF/XML Syntax Specification (Revised)" ;
ex:editor [
ex:fullname "Dave Beckett";
ex:homePage <http://purl.org/net/dajobe/>
] .

File diff suppressed because it is too large


@@ -0,0 +1,30 @@
<?xml version="1.0" encoding="UTF-8"?>
<?import javafx.geometry.*?>
<?import javafx.scene.control.*?>
<?import java.lang.*?>
<?import javafx.scene.layout.*?>
<BorderPane maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" prefHeight="400.0" prefWidth="600.0" xmlns="http://javafx.com/javafx/8" xmlns:fx="http://javafx.com/fxml/1">
<center>
<TableView prefHeight="200.0" prefWidth="200.0" BorderPane.alignment="CENTER">
<columns>
<TableColumn prefWidth="114.0" text="Column 1" />
<TableColumn minWidth="0.0" prefWidth="243.0" text="Column 2" />
<TableColumn prefWidth="214.0" text="Column 3" />
</columns>
</TableView>
</center>
<bottom>
<HBox alignment="CENTER_RIGHT" prefWidth="200.0" spacing="10.0" BorderPane.alignment="CENTER">
<children>
<Button mnemonicParsing="false" text="Button">
<HBox.margin>
<Insets bottom="10.0" left="10.0" right="10.0" top="10.0" />
</HBox.margin>
</Button>
</children>
</HBox>
</bottom>
</BorderPane>


@@ -0,0 +1,6 @@
<configuration>
<dllmap dll="libsomething">
<dllentry dll="libdifferent.so" name="somefunction" target="differentfunction" />
<dllentry os="solaris,freebsd" dll="libanother.so" name="somefunction" target="differentfunction" />
</dllmap>
</configuration>


@@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<phpunit bootstrap="./tests/bootstrap.php"
colors="true">
<testsuites>
<testsuite>
<directory>tests</directory>
</testsuite>
</testsuites>
<filter>
<whitelist>
<directory suffix=".php">src</directory>
</whitelist>
</filter>
</phpunit>


@@ -2,8 +2,10 @@
require 'json'
require 'net/http'
require 'optparse'
require 'plist'
require 'set'
require 'thread'
require 'tmpdir'
require 'uri'
require 'yaml'
@@ -13,6 +15,13 @@ GRAMMARS_PATH = File.join(ROOT, "grammars")
SOURCES_FILE = File.join(ROOT, "grammars.yml")
CSONC = File.join(ROOT, "node_modules", ".bin", "csonc")
$options = {
:add => false,
:install => true,
:output => SOURCES_FILE,
:remote => true,
}
class SingleFile
def initialize(path)
@path = path
@@ -35,7 +44,7 @@ class DirectoryPackage
path.split('/')[-2] == 'Syntaxes'
when '.tmlanguage'
true
when '.cson'
when '.cson', '.json'
path.split('/')[-2] == 'grammars'
else
false
@@ -143,22 +152,24 @@ def load_grammar(path)
cson = `"#{CSONC}" "#{path}"`
raise "Failed to convert CSON grammar '#{path}': #{$?.to_s}" unless $?.success?
JSON.parse(cson)
when '.json'
JSON.parse(File.read(path))
else
raise "Invalid document type #{path}"
end
end
def install_grammar(tmp_dir, source, all_scopes)
def load_grammars(tmp_dir, source, all_scopes)
is_url = source.start_with?("http:", "https:")
is_single_file = source.end_with?('.tmLanguage', '.plist')
return [] if is_url && !$options[:remote]
p = if !is_url
if is_single_file
SingleFile.new(source)
else
if File.directory?(source)
DirectoryPackage.new(source)
else
SingleFile.new(source)
end
elsif is_single_file
elsif source.end_with?('.tmLanguage', '.plist')
SingleGrammar.new(source)
elsif source.start_with?('https://github.com')
GitHubPackage.new(source)
@@ -172,9 +183,7 @@ def install_grammar(tmp_dir, source, all_scopes)
raise "Unsupported source: #{source}" unless p
installed = []
p.fetch(tmp_dir).each do |path|
p.fetch(tmp_dir).map do |path|
grammar = load_grammar(path)
scope = grammar['scopeName']
@@ -184,13 +193,21 @@ def install_grammar(tmp_dir, source, all_scopes)
" Previous package: #{all_scopes[scope]}"
next
end
File.write(File.join(GRAMMARS_PATH, "#{scope}.json"), JSON.pretty_generate(grammar))
all_scopes[scope] = p.url
grammar
end
end
def install_grammars(grammars, path)
installed = []
grammars.each do |grammar|
scope = grammar['scopeName']
File.write(File.join(GRAMMARS_PATH, "#{scope}.json"), JSON.pretty_generate(grammar))
installed << scope
end
$stderr.puts("OK #{p.url} (#{installed.join(', ')})")
$stderr.puts("OK #{path} (#{installed.join(', ')})")
end
def run_thread(queue, all_scopes)
@@ -206,7 +223,8 @@ def run_thread(queue, all_scopes)
dir = "#{tmpdir}/#{index}"
Dir.mkdir(dir)
install_grammar(dir, source, all_scopes)
grammars = load_grammars(dir, source, all_scopes)
install_grammars(grammars, source) if $options[:install]
end
end
end
@@ -217,7 +235,7 @@ def generate_yaml(all_scopes, base)
out[value] << key
end
yaml = yaml.sort.to_h
yaml = Hash[yaml.sort]
yaml.each { |k, v| v.sort! }
yaml
end
@@ -232,9 +250,10 @@ def main(sources)
all_scopes = {}
if ARGV[0] == '--add'
if source = $options[:add]
Dir.mktmpdir do |tmpdir|
install_grammar(tmpdir, ARGV[1], all_scopes)
grammars = load_grammars(tmpdir, source, all_scopes)
install_grammars(grammars, source) if $options[:install]
end
generate_yaml(all_scopes, sources)
else
@@ -252,12 +271,34 @@ def main(sources)
end
end
OptionParser.new do |opts|
opts.banner = "Usage: #{$0} [options]"
opts.on("--add GRAMMAR", "Add a new grammar. GRAMMAR may be a file path or URL.") do |a|
$options[:add] = a
end
opts.on("--[no-]install", "Install grammars into grammars/ directory.") do |i|
$options[:install] = i
end
opts.on("--output FILE", "Write output to FILE. Use - for stdout.") do |o|
$options[:output] = o == "-" ? $stdout : o
end
opts.on("--[no-]remote", "Download remote grammars.") do |r|
$options[:remote] = r
end
end.parse!
sources = File.open(SOURCES_FILE) do |file|
YAML.load(file)
end
yaml = main(sources)
File.write(SOURCES_FILE, YAML.dump(yaml))
$stderr.puts("Done")
if $options[:output].is_a?(IO)
$options[:output].write(YAML.dump(yaml))
else
File.write($options[:output], YAML.dump(yaml))
end
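The `--output -` handling hinges on the `is_a?(IO)` check: `$stdout` is an `IO`, a path string is not. A sketch of that branch:

```ruby
require 'yaml'
require 'tempfile'

# Write YAML either to an already-open IO ($stdout for `--output -`)
# or to a file path, mirroring the branch above.
def write_output(target, data)
  dump = YAML.dump(data)
  if target.is_a?(IO)
    target.write(dump)
  else
    File.write(target, dump)
  end
end

file = Tempfile.new('scopes')
write_output(file.path, 'source.demo' => 'repo1')
puts YAML.load(File.read(file.path)).inspect  # => {"source.demo"=>"repo1"}
```

One caveat on the design: `StringIO` is not an `IO` subclass, so a duck-typed `target.respond_to?(:write)` check would be more permissive if in-memory targets were ever needed.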

script/travis/before_install Executable file

@@ -0,0 +1,20 @@
#!/bin/sh
set -ex
# Fetch all commits/refs needed to run our tests.
git fetch origin master:master v2.0.0:v2.0.0 test/attributes:test/attributes test/master:test/master
script/vendor-deb libicu48 libicu-dev
if ruby -e 'exit RUBY_VERSION >= "2.0" && RUBY_VERSION < "2.1"'; then
# Workaround for https://bugs.ruby-lang.org/issues/8074. We can't use this
# solution on all versions of Ruby due to
# https://github.com/bundler/bundler/pull/3338.
bundle config build.charlock_holmes --with-icu-include=$(pwd)/vendor/debs/include --with-icu-lib=$(pwd)/vendor/debs/lib
else
bundle config build.charlock_holmes --with-icu-dir=$(pwd)/vendor/debs
fi
git submodule init
git submodule sync --quiet
script/fast-submodule-update

script/vendor-deb Executable file

@@ -0,0 +1,13 @@
#!/bin/sh
set -ex
cd "$(dirname "$0")/.."
mkdir -p vendor/apt vendor/debs
(cd vendor/apt && apt-get --assume-yes download "$@")
for deb in vendor/apt/*.deb; do
ar p $deb data.tar.gz | tar -vzxC vendor/debs --strip-components=2
done
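The `ar p … | tar` pipeline works because a `.deb` is an `ar` archive whose `data.tar.gz` member holds the file tree. That can be demonstrated without apt by building a toy archive (all paths below are made up for the demo):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Build a fake .deb: an ar archive containing a data.tar.gz member.
mkdir -p usr/share/doc/demo
echo hello > usr/share/doc/demo/README
tar -czf data.tar.gz usr
ar r demo.deb data.tar.gz 2>/dev/null

# Same shape as the loop above: stream the member out of the archive
# and hand it to tar (listing here instead of extracting).
ar p demo.deb data.tar.gz | tar -tzf -
```

A real `.deb` also carries a `debian-binary` member and a control tarball; only `data.tar.gz` matters for this vendoring trick.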


@@ -1,4 +1,4 @@
require "bundler/setup"
require "test/unit"
require "minitest/autorun"
require "mocha/setup"
require "linguist"


@@ -1,6 +1,6 @@
require_relative "./helper"
class TestBlob < Test::Unit::TestCase
class TestBlob < Minitest::Test
include Linguist
def setup
@@ -251,8 +251,7 @@ class TestBlob < Test::Unit::TestCase
assert sample_blob("Zephir/filenames/exception.zep.php").generated?
assert !sample_blob("Zephir/Router.zep").generated?
assert Linguist::Generated.generated?("node_modules/grunt/lib/grunt.js", nil)
assert sample_blob("node_modules/grunt/lib/grunt.js").generated?
# Godep saved dependencies
assert sample_blob("Godeps/Godeps.json").generated?
@@ -292,6 +291,8 @@ class TestBlob < Test::Unit::TestCase
assert sample_blob("deps/http_parser/http_parser.c").vendored?
assert sample_blob("deps/v8/src/v8.h").vendored?
assert sample_blob("tools/something/else.c").vendored?
# Chart.js
assert sample_blob("some/vendored/path/Chart.js").vendored?
assert !sample_blob("some/vendored/path/chart.js").vendored?
@@ -302,6 +303,9 @@ class TestBlob < Test::Unit::TestCase
# Debian packaging
assert sample_blob("debian/cron.d").vendored?
# Erlang
assert sample_blob("rebar").vendored?
# Minified JavaScript and CSS
assert sample_blob("foo.min.js").vendored?
assert sample_blob("foo.min.css").vendored?
@@ -310,6 +314,9 @@ class TestBlob < Test::Unit::TestCase
assert !sample_blob("foomin.css").vendored?
assert !sample_blob("foo.min.txt").vendored?
#.osx
assert sample_blob(".osx").vendored?
# Prototype
assert !sample_blob("public/javascripts/application.js").vendored?
assert sample_blob("public/javascripts/prototype.js").vendored?
@@ -317,6 +324,9 @@ class TestBlob < Test::Unit::TestCase
assert sample_blob("public/javascripts/controls.js").vendored?
assert sample_blob("public/javascripts/dragdrop.js").vendored?
# Samples
assert sample_blob("Samples/Ruby/foo.rb").vendored?
# jQuery
assert sample_blob("jquery.js").vendored?
assert sample_blob("public/javascripts/jquery.js").vendored?


@@ -1,6 +1,6 @@
require_relative "./helper"
class TestClassifier < Test::Unit::TestCase
class TestClassifier < Minitest::Test
include Linguist
def samples_path


@@ -1,7 +1,6 @@
require 'linguist/file_blob'
require 'test/unit'
require_relative "./helper"
class TestFileBlob < Test::Unit::TestCase
class TestFileBlob < Minitest::Test
def test_extensions
assert_equal [".gitignore"], Linguist::FileBlob.new(".gitignore").extensions
assert_equal [".xml"], Linguist::FileBlob.new("build.xml").extensions

test/test_generated.rb Normal file

@@ -0,0 +1,55 @@
require_relative "./helper"
class TestGenerated < Minitest::Test
include Linguist
def samples_path
File.expand_path("../../samples", __FILE__)
end
class DataLoadedError < StandardError; end
def generated_without_loading_data(name)
blob = File.join(samples_path, name)
begin
assert Generated.generated?(blob, lambda { raise DataLoadedError.new }), "#{name} was not recognized as a generated file"
rescue DataLoadedError
assert false, "Data was loaded when calling generated? on #{name}"
end
end
def generated_loading_data(name)
blob = File.join(samples_path, name)
assert_raises(DataLoadedError, "Data wasn't loaded when calling generated? on #{name}") do
Generated.generated?(blob, lambda { raise DataLoadedError.new })
end
end
def test_check_generated_without_loading_data
# Xcode project files
generated_without_loading_data("Binary/MainMenu.nib")
generated_without_loading_data("Dummy/foo.xcworkspacedata")
generated_without_loading_data("Dummy/foo.xcuserstate")
# .NET designer file
generated_without_loading_data("Dummu/foo.designer.cs")
# Composer generated composer.lock file
generated_without_loading_data("JSON/composer.lock")
# Node modules
generated_without_loading_data("Dummy/node_modules/foo.js")
# Godep saved dependencies
generated_without_loading_data("Godeps/Godeps.json")
generated_without_loading_data("Godeps/_workspace/src/github.com/kr/s3/sign.go")
# Generated by Zephir
generated_without_loading_data("C/exception.zep.c")
generated_without_loading_data("C/exception.zep.h")
generated_without_loading_data("PHP/exception.zep.php")
# Minified files
generated_loading_data("JavaScript/jquery-1.6.1.min.js")
end
end
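The two helpers above pin down the contract of `Generated.generated?`: the second argument is a lambda, and path-based checks must decide without ever calling it. A sketch of that lazy-loading shape (the check rules here are illustrative, not Linguist's full list):

```ruby
# Decide from the path alone where possible; only call the loader when
# the contents are actually needed. Rules below are illustrative.
def generated_file?(path, data_loader)
  return true if path.include?('node_modules/')       # no read needed
  return true if File.extname(path) == '.xcuserstate' # no read needed
  !!(data_loader.call =~ /\A\/\/ Generated by/)       # read happens here
end

never = lambda { raise 'data should not have been loaded' }
puts generated_file?('node_modules/foo.js', never)                    # => true
puts generated_file?('lib/a.js', lambda { "// Generated by tool\n" }) # => true
```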


@@ -1,8 +1,16 @@
require_relative "./helper"
class TestGrammars < Test::Unit::TestCase
class TestGrammars < Minitest::Test
ROOT = File.expand_path("../..", __FILE__)
# These grammars have no license but have been grandfathered in. New grammars
# must have a license that allows redistribution.
UNLICENSED_GRAMMARS_WHITELIST = %w[
vendor/grammars/Sublime-Lasso
vendor/grammars/Sublime-REBOL
vendor/grammars/x86-assembly-textmate-bundle
].freeze
def setup
@grammars = YAML.load(File.read(File.join(ROOT, "grammars.yml")))
end
@@ -14,12 +22,11 @@ class TestGrammars < Test::Unit::TestCase
end
def test_submodules_are_in_sync
submodules = `git config --list --file "#{File.join(ROOT, ".gitmodules")}"`.lines.grep(/\.path=/).map { |line| line.chomp.split("=", 2).last }
# Strip off paths inside the submodule so that just the submodule path remains.
listed_submodules = @grammars.keys.grep(/vendor\/grammars/).map { |source| source[%r{vendor/grammars/[^/]+}] }
nonexistent_submodules = listed_submodules - submodules
unlisted_submodules = submodules - listed_submodules
nonexistent_submodules = listed_submodules - submodule_paths
unlisted_submodules = submodule_paths - listed_submodules
message = ""
unless nonexistent_submodules.empty?
@@ -36,4 +43,94 @@ class TestGrammars < Test::Unit::TestCase
assert nonexistent_submodules.empty? && unlisted_submodules.empty?, message
end
def test_local_scopes_are_in_sync
actual = YAML.load(`"#{File.join(ROOT, "script", "convert-grammars")}" --output - --no-install --no-remote`)
assert $?.success?, "script/convert-grammars failed"
# We're not checking remote grammars. That can take a long time and make CI
# flaky if network conditions are poor.
@grammars.delete_if { |k, v| k.start_with?("http:", "https:") }
@grammars.each do |k, v|
assert_equal v, actual[k], "The scopes listed for #{k} in grammars.yml don't match the scopes found in that repository"
end
end
def test_submodules_have_licenses
categories = submodule_paths.group_by do |submodule|
files = Dir[File.join(ROOT, submodule, "*")]
license = files.find { |path| File.basename(path) =~ /\b(un)?licen[cs]e\b/i } || files.find { |path| File.basename(path) =~ /\bcopying\b/i }
if license.nil?
if readme = files.find { |path| File.basename(path) =~ /\Areadme\b/i }
license = readme if File.read(readme) =~ /\blicen[cs]e\b/i
end
end
if license.nil?
:unlicensed
elsif classify_license(license)
:licensed
else
:unrecognized
end
end
unlicensed = categories[:unlicensed] || []
unrecognized = categories[:unrecognized] || []
disallowed_unlicensed = unlicensed - UNLICENSED_GRAMMARS_WHITELIST
disallowed_unrecognized = unrecognized - UNLICENSED_GRAMMARS_WHITELIST
extra_whitelist_entries = UNLICENSED_GRAMMARS_WHITELIST - (unlicensed | unrecognized)
message = ""
if disallowed_unlicensed.any?
message << "The following grammar submodules don't seem to have a license. All grammars must have a license that permits redistribution.\n"
message << disallowed_unlicensed.sort.join("\n")
end
if disallowed_unrecognized.any?
message << "\n\n" unless message.empty?
message << "The following grammar submodules have an unrecognized license. Please update #{__FILE__} to recognize the license.\n"
message << disallowed_unrecognized.sort.join("\n")
end
if extra_whitelist_entries.any?
message << "\n\n" unless message.empty?
message << "The following grammar submodules are listed in UNLICENSED_GRAMMARS_WHITELIST but either have a license (yay!)\n"
message << "or have been removed from the repository. Please remove them from the whitelist.\n"
message << extra_whitelist_entries.sort.join("\n")
end
assert disallowed_unlicensed.empty? && disallowed_unrecognized.empty? && extra_whitelist_entries.empty?, message
end
private
def submodule_paths
@submodule_paths ||= `git config --list --file "#{File.join(ROOT, ".gitmodules")}"`.lines.grep(/\.path=/).map { |line| line.chomp.split("=", 2).last }
end
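`submodule_paths` shells out to `git config --list` and keeps only the `.path=` entries. The string handling can be checked against a canned sample of that output:

```ruby
# Sample of `git config --list --file .gitmodules` output.
config_output = <<~EOS
  submodule.vendor/grammars/abap.tmbundle.path=vendor/grammars/abap.tmbundle
  submodule.vendor/grammars/abap.tmbundle.url=https://github.com/pvl/abap.tmbundle
EOS

# Same pipeline as submodule_paths above: keep .path= lines, take the
# value after the first '='.
paths = config_output.lines.grep(/\.path=/).map { |line| line.chomp.split('=', 2).last }
puts paths.inspect  # => ["vendor/grammars/abap.tmbundle"]
```

The `split('=', 2)` limit matters: it keeps any later `=` characters inside the value intact.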
def classify_license(path)
content = File.read(path)
if content.include?("Apache License") && content.include?("2.0")
"Apache 2.0"
elsif content.include?("GNU") && content =~ /general/i && content =~ /public/i
if content =~ /version 2/i
"GPLv2"
elsif content =~ /version 3/i
"GPLv3"
end
elsif content.include?("GPL") && content.include?("http://www.gnu.org/licenses/gpl.html")
"GPLv3"
elsif content.include?("Creative Commons")
"CC"
elsif content.include?("tidy-license.txt") || content.include?("If not otherwise specified (see below)")
"textmate"
elsif content =~ /^\s*[*-]\s+Redistribution/ || content.include?("Redistributions of source code")
"BSD"
elsif content.include?("Permission is hereby granted") || content =~ /\bMIT\b/
"MIT"
elsif content.include?("unlicense.org")
"unlicense"
elsif content.include?("http://www.wtfpl.net/txt/copying/")
"WTFPL"
end
end
end
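The license audit is a three-way `group_by` over the submodules: licensed, unlicensed, or unrecognized. In miniature, with canned inputs standing in for the filesystem scan:

```ruby
# Canned stand-ins for what the filesystem scan above discovers.
licenses = {
  'vendor/a' => 'MIT License: Permission is hereby granted...',
  'vendor/b' => 'some novel license text',
  'vendor/c' => nil, # no license file found
}

categories = licenses.keys.group_by do |submodule|
  text = licenses[submodule]
  if text.nil?
    :unlicensed
  elsif text =~ /\bMIT\b/ || text.include?('Permission is hereby granted')
    :licensed
  else
    :unrecognized
  end
end

puts categories.inspect
# => {:licensed=>["vendor/a"], :unrecognized=>["vendor/b"], :unlicensed=>["vendor/c"]}
```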


@@ -1,6 +1,6 @@
require_relative "./helper"
class TestHeuristcs < Test::Unit::TestCase
class TestHeuristcs < Minitest::Test
include Linguist
def samples_path


@@ -1,6 +1,6 @@
require_relative "./helper"
class TestLanguage < Test::Unit::TestCase
class TestLanguage < Minitest::Test
include Linguist
def test_find_by_alias
@@ -198,7 +198,7 @@ class TestLanguage < Test::Unit::TestCase
def test_find_all_by_extension
Language.all.each do |language|
language.extensions.each do |extension|
assert_include Language.find_by_extension(extension), language
assert_includes Language.find_by_extension(extension), language
end
end
end
@@ -283,7 +283,7 @@ class TestLanguage < Test::Unit::TestCase
end
def test_error_without_name
assert_raise ArgumentError do
assert_raises ArgumentError do
Language.new :name => nil
end
end


@@ -1,6 +1,6 @@
require_relative "./helper"
class TestMD5 < Test::Unit::TestCase
class TestMD5 < Minitest::Test
include Linguist
def test_hexdigest_string
@@ -12,28 +12,28 @@ class TestMD5 < Test::Unit::TestCase
assert_equal "450c1ae043459546517b3dd2f98250f0", MD5.hexdigest(:foo)
assert_equal "f06967526af9d7a512594b0a81b31ede", MD5.hexdigest(:bar)
assert_not_equal MD5.hexdigest("foo"), MD5.hexdigest(:foo)
refute_equal MD5.hexdigest("foo"), MD5.hexdigest(:foo)
end
def test_hexdigest_integer
assert_equal "7605ec17fd7fd213fdcd23cac302cbb4", MD5.hexdigest(1)
assert_equal "097c311a46d330e4e119ba2b1dc0f9a5", MD5.hexdigest(2)
assert_not_equal MD5.hexdigest("1"), MD5.hexdigest(1)
refute_equal MD5.hexdigest("1"), MD5.hexdigest(1)
end
def test_hexdigest_boolean
assert_equal "a690a0615820e2e5c53901d8b8958509", MD5.hexdigest(true)
assert_equal "fca6a9b459e702fa93513c6a8b8c5dfe", MD5.hexdigest(false)
assert_not_equal MD5.hexdigest("true"), MD5.hexdigest(true)
assert_not_equal MD5.hexdigest("false"), MD5.hexdigest(false)
refute_equal MD5.hexdigest("true"), MD5.hexdigest(true)
refute_equal MD5.hexdigest("false"), MD5.hexdigest(false)
end
def test_hexdigest_nil
assert_equal "35589a1cc0b3ca90fc52d0e711c0c434", MD5.hexdigest(nil)
assert_not_equal MD5.hexdigest("nil"), MD5.hexdigest(nil)
refute_equal MD5.hexdigest("nil"), MD5.hexdigest(nil)
end
def test_hexdigest_array
@@ -49,7 +49,7 @@ class TestMD5 < Test::Unit::TestCase
assert_equal "868ee214faf277829a85667cf332749f", MD5.hexdigest({:a => 1})
assert_equal "fa9df957c2b26de6fcca9d062ea8701e", MD5.hexdigest({:b => 2})
assert_not_equal MD5.hexdigest([:b, 2]), MD5.hexdigest({:b => 2})
refute_equal MD5.hexdigest([:b, 2]), MD5.hexdigest({:b => 2})
assert_equal MD5.hexdigest({:b => 2, :a => 1}), MD5.hexdigest({:a => 1, :b => 2})
assert_equal MD5.hexdigest({:c => 3, :b => 2, :a => 1}), MD5.hexdigest({:a => 1, :b => 2, :c => 3})
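The last two assertions rely on `MD5.hexdigest` being order-independent for hashes. One way to get that property (a sketch, not Linguist's exact serialization) is to canonicalize the pairs before digesting:

```ruby
require 'digest/md5'

# Sort the pairs before digesting so insertion order cannot change the
# result. Linguist's real MD5 helper serializes differently, but the
# canonical-ordering idea is the same.
def digest_hash(hash)
  Digest::MD5.hexdigest(hash.sort_by { |key, _| key.to_s }.inspect)
end

puts digest_hash(:b => 2, :a => 1) == digest_hash(:a => 1, :b => 2)  # => true
puts digest_hash(:a => 1) == digest_hash(:a => 2)                    # => false
```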


@@ -1,6 +1,6 @@
require_relative "./helper"
class TestPedantic < Test::Unit::TestCase
class TestPedantic < Minitest::Test
filename = File.expand_path("../../lib/linguist/languages.yml", __FILE__)
LANGUAGES = YAML.load(File.read(filename))
GRAMMARS = YAML.load(File.read(File.expand_path("../../grammars.yml", __FILE__)))


@@ -1,6 +1,6 @@
require_relative "./helper"
class TestRepository < Test::Unit::TestCase
class TestRepository < Minitest::Test
def rugged_repository
@rugged ||= Rugged::Repository.new(File.expand_path("../../.git", __FILE__))
end


@@ -1,7 +1,7 @@
require_relative "./helper"
require "tempfile"
class TestSamples < Test::Unit::TestCase
class TestSamples < Minitest::Test
include Linguist
def test_up_to_date
@@ -43,7 +43,7 @@ class TestSamples < Test::Unit::TestCase
if extnames = Samples.cache['extnames'][name]
extnames.each do |extname|
next if extname == '.script!'
assert options['extensions'].include?(extname), "#{name} has a sample with extension (#{extname}) that isn't explicitly defined in languages.yml"
assert options['extensions'].index { |x| x.end_with? extname }, "#{name} has a sample with extension (#{extname}) that isn't explicitly defined in languages.yml"
end
end


@@ -1,6 +1,6 @@
require_relative "./helper"
class TestShebang < Test::Unit::TestCase
class TestShebang < Minitest::Test
include Linguist
def assert_interpreter(interpreter, body)


@@ -1,6 +1,6 @@
require_relative "./helper"
class TestTokenizer < Test::Unit::TestCase
class TestTokenizer < Minitest::Test
include Linguist
def samples_path

vendor/grammars/AutoHotkey vendored Submodule

vendor/grammars/Racket vendored Submodule

Submodule vendor/grammars/Racket added at 02739c25ae

vendor/grammars/Sublime-HTTP vendored Submodule

vendor/grammars/Sublime-Nit vendored Submodule

vendor/grammars/atom-salt vendored Submodule

vendor/grammars/carto-atom vendored Submodule

vendor/grammars/language-hy vendored Submodule

vendor/grammars/sas.tmbundle vendored Submodule

vendor/grammars/sublime-bsv vendored Submodule