2015-12-29

solr4tcl v0.1: A lightweight Tcl client interface to Apache Solr

Homepage


solr4tcl

About

A lightweight Tcl client interface to Apache Solr. The library consists of a single Tcl Module file.

solr4tcl uses Tcl's built-in http package to send requests to Apache Solr.

This library requires the tdom package.

Interface

The library provides one TclOO class, Solr_Request.

It provides:
  • A simple search interface
  • A simple interface to add, delete and update documents in the index (using XML Formatted Index Updates)
  • Uploading file data via Apache Tika

Notes


After reading the Apache Solr documentation, this is a record of sending requests with the Tcl http package and getting results back. It seems to work; at least there were no "page not found" or error responses.

Update 2016/01/02:
Although the server always returns OK, the parameters the ping method sends are not entirely correct; I am working on an updated, correct version.
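A sketch of what a corrected ping might look like, assuming Solr's /admin/ping handler and its wt (response writer) parameter; the server URL and core name below are examples only:

```tcl
package require http

# Pure helper: build the ping URL for a server and an optional core path
proc solrPingUrl {server path {writer json}} {
    set url "$server/solr"
    if {[string length $path] == 0} {
        append url "/admin/ping"
    } else {
        append url "/$path/admin/ping"
    }
    append url "?[http::formatQuery wt $writer]"
    return $url
}

# Example request (needs a running Solr, so left commented out):
# set tok [http::geturl [solrPingUrl "http://localhost:8983" gettingstarted]]
# puts [http::data $tok]    ;# JSON body whose "status" field should be "OK"
# http::cleanup $tok
```

The point of the change is that http::status only says the HTTP transfer worked, while the response body carries Solr's actual ping status.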

2015-12-28

Apache Solr test code

solr4tcl-0.1.tm

# solr4tcl --
#
#       A lightweight Tcl client interface to Apache Solr.
#

package require Tcl 8.6
package require TclOO
package require http
package require tdom

package provide solr4tcl 0.1


#
# Solr_Request class: handle send request to Apache Solr
#
oo::class create Solr_Request {
    variable server
    variable path
    variable solr_writer

    constructor {SERVER} {
        set server $SERVER
        set path ""

        set solr_writer "xml"
    }

    destructor {
    }

    method setDocumentPath {PATH} {
        set path $PATH
    }

    #
    # supported writer types: xml, json and csv
    #
    method setSolrWriter {WRITER} {
        set solr_writer $WRITER
    }

    method send_request {url method {headers ""} {data ""}} {
        variable tok

        if {[string length $data] < 1} {
            if {[catch {set tok [http::geturl $url -method $method \
                -headers $headers]}]} {
                return "error"
            }
        } else {
            if {[catch {set tok [http::geturl $url -method $method \
                -headers $headers -query $data]}]} {
                return "error"
            }
        }

        if {[string compare -nocase $method "HEAD"] == 0} {
            # HEAD responses have no body, so report the status instead
            set res [http::status $tok]
        } else {
            set res [http::data $tok]
        }

        http::cleanup $tok
        return $res
    }

    #
    # Call the /admin/ping servlet
    #
    method ping {} {
        set myurl "$server/solr"

        if {[string length $path] < 1} {
            append myurl "/admin/ping"
        } else {
            append myurl "/$path/admin/ping"
        }

        set headerl ""
        set res [my send_request $myurl HEAD $headerl]
        return $res
    }

    #
    # Simple search interface
    # params is a list of name-value pair parameters passed to this method
    #
    method search {query {offset 0} {limit 10} {params ""}} {
        set myurl "$server/solr"

        if {[string length $path] < 1} {
            append myurl "/select"
        } else {
            append myurl "/$path/select"
        }

        lappend params q $query
        lappend params wt $solr_writer
        lappend params start $offset
        lappend params rows $limit
        set querystring [http::formatQuery {*}$params]

        #
        # The return data format is defined by wt, $solr_writer setting.
        #
        set headerl [list Content-Type "application/x-www-form-urlencoded; charset=UTF-8"]
        set res [my send_request $myurl POST $headerl $querystring]

        return $res
    }

    #
    # parameters - a list of key-value pairs
    #
    method add {parameters {OVERWRITE true} {BOOST "1.0"} {COMMIT true}} {
        # Try to build our XML document
        set doc [dom createDocument add]

        set root [$doc documentElement]
        $root setAttribute overwrite $OVERWRITE

        set docnode [$doc createElement doc]
        $docnode setAttribute boost $BOOST
        $root appendChild $docnode

        foreach {key value} $parameters {
            set node [$doc createElement field]
            $node setAttribute name $key
            $node appendChild [$doc createTextNode $value]
            $docnode appendChild $node
        }

        set myaddString [$root asXML]
        set myurl "$server/solr"

        set params [list commit $COMMIT]
        set querystring [http::formatQuery {*}$params]

        if {[string length $path] < 1} {
            append myurl "/update?$querystring"
        } else {
            append myurl "/$path/update?$querystring"
        }

        set headerl [list Content-Type "text/xml; charset=UTF-8"]
        set res [my send_request $myurl POST $headerl $myaddString]

        return $res
    }

    #
    # xmldata - the XML data string to add
    #
    method addData {xmldata {COMMIT true}} {
        set myurl "$server/solr"

        set params [list commit $COMMIT]
        set querystring [http::formatQuery {*}$params]

        if {[string length $path] < 1} {
            append myurl "/update?$querystring"
        } else {
            append myurl "/$path/update?$querystring"
        }

        set headerl [list Content-Type "text/xml; charset=UTF-8"]
        set res [my send_request $myurl POST $headerl $xmldata]

        return $res
    }

    #
    # The <commit> operation writes all documents loaded since the last
    # commit to one or more segment files on disk.
    #
    method commit {{WAITSEARCHER true} {EXPUNGEDELETES false}} {
        set mycommitString "<commit waitSearcher=\"$WAITSEARCHER\" expungeDeletes=\"$EXPUNGEDELETES\"/>"
        set myurl "$server/solr"

        if {[string length $path] < 1} {
            append myurl "/update"
        } else {
            append myurl "/$path/update"
        }

        set headerl [list Content-Type "text/xml; charset=UTF-8"]
        set res [my send_request $myurl POST $headerl $mycommitString]

        return $res
    }

    #
    # The <optimize> operation requests Solr to merge internal data structures
    # in order to improve search performance.
    #
    method optimize {{WAITSEARCHER true} {MAXSegments 1}} {
        set myoptimizeString "<optimize waitSearcher=\"$WAITSEARCHER\" maxSegments=\"$MAXSegments\"/>"
        set myurl "$server/solr"

        if {[string length $path] < 1} {
            append myurl "/update"
        } else {
            append myurl "/$path/update"
        }

        set headerl [list Content-Type "text/xml; charset=UTF-8"]
        set res [my send_request $myurl POST $headerl $myoptimizeString]

        return $res
    }

    #
    #  "Delete by ID" deletes the document with the specified ID
    #
    method deleteById {ID {COMMIT true}} {
        set mydeleteString "<delete><id>$ID</id></delete>"
        set myurl "$server/solr"

        set params [list commit $COMMIT]
        set querystring [http::formatQuery {*}$params]

        if {[string length $path] < 1} {
            append myurl "/update?$querystring"
        } else {
            append myurl "/$path/update?$querystring"
        }

        set headerl [list Content-Type "text/xml; charset=UTF-8"]
        set res [my send_request $myurl POST $headerl $mydeleteString]

        return $res
    }

    #
    #  "Delete by Query" deletes all documents matching a specified query
    #
    method deleteByQuery {QUERY {COMMIT true}} {
        set mydeleteString "<delete><query>$QUERY</query></delete>"
        set myurl "$server/solr"

        set params [list commit $COMMIT]
        set querystring [http::formatQuery {*}$params]

        if {[string length $path] < 1} {
            append myurl "/update?$querystring"
        } else {
            append myurl "/$path/update?$querystring"
        }

        set headerl [list Content-Type "text/xml; charset=UTF-8"]
        set res [my send_request $myurl POST $headerl $mydeleteString]

        return $res
    }

    #
    #  Uploading Data by using Apache Tika
    #
    method upload {fileContent {FILENAME ""} {COMMIT true} {ExtractOnly false} {params ""}} {
        set myurl "$server/solr"

        lappend params commit $COMMIT extractOnly $ExtractOnly

        if {[string length $FILENAME] > 0} {
            lappend params "resource.name" $FILENAME
        }

        set querystring [http::formatQuery {*}$params]

        if {[string length $path] < 1} {
            append myurl "/update/extract?$querystring"
        } else {
            append myurl "/$path/update/extract?$querystring"
        }

        set headerl [list Content-Type "text/xml; charset=UTF-8"]
        set res [my send_request $myurl POST $headerl $fileContent]

        return $res
    }
}



Test code:

#!/usr/bin/tclsh

package require solr4tcl

set solrrequest [Solr_Request new "http://localhost:8983"]
$solrrequest setDocumentPath gettingstarted

# supports xml, json or csv
$solrrequest setSolrWriter xml

set res [$solrrequest ping]
if {[string compare -nocase $res "ok"] != 0} {
    puts "Apache Solr server did not return OK, closing."
    exit
}

set res [$solrrequest search "foundation"]
puts "Search result:"
puts $res

set parameters [list authors "Patrick Eagar" subject "Sports" dd "796.35" \
                     isdn "0002166313" yearpub "1982" publisher "Collins"]
set res [$solrrequest add $parameters true]
puts $res

set res [$solrrequest commit]
puts $res

set res [$solrrequest optimize]
puts $res

set res [$solrrequest deleteById "0002166313"]
puts $res

set res [$solrrequest deleteByQuery "publisher:Collins"]
puts $res

#set size [file size "./solr-word.pdf"]
#set fd [open "./solr-word.pdf" {RDONLY BINARY}]
#fconfigure $fd -blocking 1 -encoding binary -translation binary
#set data [read $fd $size]
#close $fd
#set res [$solrrequest upload $data "solr-word.pdf"]
#puts $res

set res [$solrrequest search "pdf"]
puts "Search result:"
puts $res



Apache Solr does not provide a tidy summary table of its REST API the way Apache CouchDB does, so I just wrote a simple search method for the search part and confirmed that a correct response comes back (rather than a failed connection or a 404 page-not-found error page, assuming the Solr Quick Start is running).

Update 2015/12/29:
Added file upload and index handling.

Update 2016/01/02:
Updated the parameters of the ping method.

2015-12-26

PostgreSQL extension: hstore

PostgreSQL: hstore


The impressive part is that a column can hold not just one key-value pair, but multiple key-value pairs.

Testing whether the recently installed PostgreSQL 9.4.5 on Windows supports hstore:

package require tdbc::postgres
tdbc::postgres::connection create db -user postgres -password postgres -port 5432

set statement [db prepare {
    CREATE EXTENSION hstore
}]

$statement foreach row {
    puts $row
}

$statement close
db close


The result: extension "hstore" already exists.

Update: listing the extension names currently installed on the system:

package require tdbc::postgres
tdbc::postgres::connection create db -user postgres -password postgres -port 5432

set statement [db prepare {
    select extname from pg_extension
}]

$statement foreach row {
    puts $row
}

$statement close
db close
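As a quick illustration of the multiple-pairs-per-column point above (a sketch; the table and data are made up, and it assumes the hstore extension is installed as tested above):

```sql
-- Hypothetical table: one hstore column holding several key-value pairs
CREATE TABLE books (id serial PRIMARY KEY, attrs hstore);

INSERT INTO books (attrs)
    VALUES ('author => "Patrick Eagar", subject => "Sports", yearpub => "1982"');

-- Look up a single key with ->, or test for a key's presence with ?
SELECT attrs -> 'author' FROM books WHERE attrs ? 'yearpub';
```

The same statements could of course be run through tdbc::postgres exactly like the CREATE EXTENSION test above.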

2015-12-24

SSDB 與 retcl

SSDB

Features

  • An alternative to Redis, 100x Redis
  • LevelDB client-server support, written in C/C++
  • Redis API compatible, Redis clients are supported
  • Designed to store collection data, such as list, hash, zset...
  • Client API supports including C++, PHP, Python, Java, Go
  • Persistent queue service
  • Replication(master-slave), load balance

retcl: Tcl client library for Redis


Having read SSDB's feature list, can I just take a Redis client, connect, and test? According to the SSDB documentation, SSDB's port is 8888.

retcl create r 127.0.0.1 8888

I then ran some tests using retcl's examples, and they really do work directly (I did not try "Publish / Subscribe and callbacks" or "Handling errors"; also, retcl's "Commands pipelining" example needs `set i 0` first, i.e. i must be initialized beforehand). Some SSDB commands respond differently from the documentation (e.g. info only returns OK), but on the whole SSDB really is Redis API compatible: if a Redis client is written with a little flexibility, in theory it can be used with SSDB painlessly.

PS. Do not try to use SSDB on Windows. Binary files can be found on the net, but as the author's site says, use on Windows is not recommended.

couchdbtcl: rename oauth-1.0.tm

couchdbtcl


I had not noticed that tcllib already has a package named oauth. Although my tests ran correctly, to avoid a package name conflict I renamed oauth-1.0.tm to myoauth-1.0.tm, and the provided package changed from oauth to myoauth.

2015-12-23

tclusb v0.3: remove hotplug command

Tclusb


Since hotplug has been broken for a long time (libusb's hotplug works, but I do not know how to fix the broken Tcl channel notify), after some thought I decided to keep the version at v0.3, remove the hotplug part, and rewrite the current Tclusb page. This leaves Tclusb with only the simple list-devices feature.

I will tidy up and review the code to make sure the removal is clean, then change the LICENSE to the 2-clause BSD license.

Update 2015-12-23:
Cleanup finished.

2015-12-22

Converting Characters (format command)

Source: Tcler's wiki


Using the format command to get the effect of a chr() function.
interp alias {} chr {} format %c

Test:
set a admin[chr 48]

An interesting idiom.
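The reverse direction, an ord() counterpart, can be written the same way with scan (the alias name ord is my choice, not from the wiki page):

```tcl
# chr: integer code point -> character (the alias from above)
interp alias {} chr {} format %c

# ord: character -> integer code point, via scan's %c conversion
interp alias {} ord {} apply {c {scan $c %c}}

# Examples:
#   chr 48   ;# -> 0
#   ord A    ;# -> 65
```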

2015-12-15

OpenACS 5.9.0 is released

ANNOUNCE: OpenACS 5.9.0 final released

This release contains many security and performance improvements as well as new functionality. The new release differs from OpenACS 5.8.1 in the following points:
- PostgreSQL enhancements and cleanups to improve performance and maintainability of the basic data model
- Greatly improved HTML validity and protection against XSS attacks.
- Improved theming support to create responsive interfaces.
- Various new functionalities to ease debugging and to improve performance. 



This looks like a release with quite substantial changes.

Update:
From what I have read, AOLserver and NaviServer are currently supported. I have not had time to test it; I am just interested in the "use tcl8.5 idioms" entry and some other changes in the OpenACS ChangeLog. I believe this is a release that has moved entirely to Tcl 8.5.

Redland Tcl support is dead

RDF at Tcler's Wiki

Based on my web searches, Redland no longer supports its Tcl binding (C# and Java have also been dropped). To save anyone else from fruitlessly searching keyword after keyword like I did, I have updated the information on the Tcler's Wiki.

Currently the only RDF tool for Tcl that might still be usable is XOTcl's xoRDF, but I am not very familiar with XOTcl, so I have not tried it.

If you want to parse RDF/XML files in Tcl yourself, you will probably need an XML parser. According to "Problems of the RDF syntax", simple data may be manageable, but complex data will likely be painful.

I originally wanted to look into the state of RDF; if my further searches turn up nothing more useful, the RDF investigation ends here.

Update:
Google searches turning up "RDF is dead" are a bit... Below is another reference article:
Why Microdata, Not RDF, Will Power the Semantic Web

(Update following the link above: RDFa Lite and Microdata are two directly competing standards; add JSON-LD and those are the three main contenders for giving web pages structured data. All three are far from what I currently focus on, so I will skip them.)

So apart from some special domain needs, I think everyone can let RDF go.

2015-12-12

tcljsonnet v0.1

Download page


tcljsonnet - Jsonnet wrapper for Tcl

About

Jsonnet is a domain specific configuration language that helps you define JSON data. Jsonnet lets you compute fragments of JSON within the structure, bringing the same benefit to structured data that templating languages bring to plain text.
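For instance, a minimal snippet in the spirit of the examples in the Jsonnet documentation (the field names here are made up):

```jsonnet
{
  // A plain JSON field
  greeting: "Hello",
  // Computed from a sibling field: evaluates to "Hello, Jsonnet!"
  message: self.greeting + ", Jsonnet!",
}
```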

For additional information on Jsonnet see
http://google.github.io/jsonnet/doc/


This package is a Jsonnet wrapper for Tcl.

Notes


Since Jsonnet seems like a very interesting tool, today I wrote a very simple Tcl interface for it.

2015-12-09

couchdbtcl v0.1

Download page


couchdbtcl - A Tcl client interface to Apache CouchDB

About

A Tcl client interface to Apache CouchDB. The library consists of a single Tcl Module file.

couchdbtcl uses Tcl's built-in http package and Tcllib's base64 package (for HTTP Basic Authentication) to send requests to Apache CouchDB.

Notes


I installed CouchDB 1.6.1 on Windows XP SP3 and tested the CouchDB setup. This package is the client-side code I wrote while reading the CouchDB API documentation, using the Tcl http package to send API requests to CouchDB and receive responses, and to test the authentication methods (Basic Authentication, Cookie Authentication and OAuth Authentication).

oauth

A useful link:
Tcler's wiki: oauth


Apache CouchDB supports four authentication methods: Basic Authentication, Cookie Authentication, Proxy Authentication and OAuth Authentication.

I have already worked out how to use Basic Authentication and Cookie Authentication. I currently have no suitable environment to test Proxy Authentication, which leaves OAuth Authentication; I will first test whether the code on the Tcler's wiki works.


Update:
The test succeeded, so the approach from Tcler's wiki: oauth can be used for Apache CouchDB's OAuth Authentication.

2015-12-07

Apache CouchDB test code 2

#
# The Database endpoint provides an interface to an entire database
# within CouchDB. These are database-level, rather than document-level,
# requests.
#
oo::class create CouchDB_Database {
    variable host
    variable port
    variable database
    variable authtype
    variable username
    variable password
    variable firstcookie
    variable authSession

    constructor {HOST PORT DATABASE AUTHTYPE {USERNAME ""} {PASSWORD ""}} {
        set host $HOST
        set port $PORT
        set database $DATABASE
        set authtype $AUTHTYPE
        set username $USERNAME
        set password $PASSWORD
        set firstcookie 0
        set authSession ""
    }

    destructor {
    }

    method send_request {url method {headers ""} {data ""}} {
        # Supported authtype values: no, basic, cookie
        if {[string compare -nocase $authtype "basic"]==0} {
            set auth "Basic [base64::encode $username:$password]"
            lappend headers Authorization $auth
        } elseif {[string compare -nocase $authtype "cookie"]==0} {
            set cookiestring "AuthSession=$authSession"
            lappend headers Cookie $cookiestring
        }

        if { [string length $data] < 1 } {
            set tok [http::geturl $url -method $method -headers $headers]
        } else {
            set tok [http::geturl $url -method $method -headers $headers -query $data]
        }

        if {[string compare -nocase $authtype "cookie"]==0 && $firstcookie==1} {
            set meta [http::meta $tok]
            foreach {name value} $meta {
                if {[string compare $name Set-Cookie]==0} {
                    set firstlocation [string first "=" $value]
                    incr firstlocation 1
                    set lastlocation  [string first "; " $value]
                    incr lastlocation -1
                    set authSession [string range $value $firstlocation $lastlocation]
                }
            }
        }

        set res [http::data $tok]
        http::cleanup $tok
        return $res
    }

    # Initiates new session for specified user credentials by providing Cookie value.
    method cookie_post {} {
        set firstcookie 1
        set myurl "http://$host:$port/_session"
        set headerl [list Accept "application/json" Content-Type "application/x-www-form-urlencoded"]
        set data [::http::formatQuery name $username password $password]
        set res [my send_request $myurl POST $headerl $data]

        set firstcookie 0

        return $res
    }

    # Returns complete information about authenticated user.
    method cookie_get {} {
        set myurl "http://$host:$port/_session"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        # send_request appends the AuthSession cookie when authtype is cookie
        set res [my send_request $myurl GET $headerl]

        return $res
    }

    # Closes user’s session.
    method cookie_delete {} {
        set myurl "http://$host:$port/_session"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        # send_request appends the AuthSession cookie when authtype is cookie
        set res [my send_request $myurl DELETE $headerl]

        return $res
    }

    # Creates a new database.
    method create {} {
        set myurl "http://$host:$port/$database"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl PUT $headerl]

        return $res
    }

    # Gets information about the specified database.
    method info {} {
        set myurl "http://$host:$port/$database"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl]

        return $res
    }

    # Deletes the specified database.
    method delete {} {
        set myurl "http://$host:$port/$database"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl DELETE $headerl]

        return $res
    }

    # Creates a new document in the specified database,
    # using the supplied JSON document structure.
    method db_post {data} {
        set myurl "http://$host:$port/$database"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl $data]

        return $res
    }

    # Returns a JSON structure of all of the documents in a given database.
    method all_docs_get {{data ""}} {
        set myurl "http://$host:$port/$database/_all_docs"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl $data]

        return $res
    }

    # The POST to _all_docs allows specifying multiple keys to be
    # selected from the database.
    method all_docs_post {data} {
        set myurl "http://$host:$port/$database/_all_docs"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl $data]

        return $res
    }

    # The bulk document API allows you to create and update multiple
    # documents at the same time within a single request.
    method bulk_docs {data} {
        set myurl "http://$host:$port/$database/_bulk_docs"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl $data]

        return $res
    }

    # Requests the database changes feed
    method changes {{data ""}} {
        set myurl "http://$host:$port/$database/_changes"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl $data]

        return $res
    }

    # Requests compaction of the specified database.
    method compact {} {
        set myurl "http://$host:$port/$database/_compact"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl]

        return $res
    }

    # Commits any recent changes to the specified database to disk.
    method ensure_full_commit {} {
        set myurl "http://$host:$port/$database/_ensure_full_commit"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl]

        return $res
    }

    # Removes view index files that are no longer required by CouchDB as a
    # result of changed views within design documents.
    method view_cleanup {} {
        set myurl "http://$host:$port/$database/_view_cleanup"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl]

        return $res
    }

    # Returns the current security object from the specified database.
    #
    # If the security object for a database has never been set, then the
    # value returned will be empty.
    method security_get {} {
        set myurl "http://$host:$port/$database/_security"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl]

        return $res
    }

    # Sets the security object for the given database.
    method security_put {data} {
        set myurl "http://$host:$port/$database/_security"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl PUT $headerl $data]

        return $res
    }

    # Creates (and executes) a temporary view based on the view function
    # supplied in the JSON request.
    method temp_view {data} {
        set myurl "http://$host:$port/$database/_temp_view"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl $data]

        return $res
    }

    # A database purge permanently removes the references to deleted
    # documents from the database.
    method purge {data} {
        set myurl "http://$host:$port/$database/_purge"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl $data]

        return $res
    }

    # Given a list of document revisions, returns the document
    # revisions that do not exist in the database.
    method missing_revs {data} {
        set myurl "http://$host:$port/$database/_missing_revs"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl $data]

        return $res
    }

    # Given a set of document/revision IDs, returns the subset of those
    # that do not correspond to revisions stored in the database.
    method revs_diff {data} {
        set myurl "http://$host:$port/$database/_revs_diff"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl $data]

        return $res
    }

    # Gets the current revs_limit (revision limit) setting.
    method revs_limit_get {} {
        set myurl "http://$host:$port/$database/_revs_limit"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl]

        return $res
    }

    # Sets the maximum number of document revisions that will be tracked by
    # CouchDB, even after compaction has occurred.
    method revs_limit_put {data} {
        set myurl "http://$host:$port/$database/_revs_limit"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl PUT $headerl $data]

        return $res
    }

    #
    # Method for Apache couchDB Document API
    # Each document in CouchDB has an ID. This ID is unique per database.
    #

    # Gets the specified document.
    method doc_get {id {data ""}} {
        set myurl "http://$host:$port/$database/$id"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl $data]

        return $res
    }

    # Stores the specified document.
    method doc_put {id data} {
        set myurl "http://$host:$port/$database/$id"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl PUT $headerl $data]

        return $res
    }

    # Deletes the specified document.
    # rev - Actual document’s revision
    method doc_delete {id rev} {
        set myurl "http://$host:$port/$database/$id"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        lappend headerl If-Match $rev
        set res [my send_request $myurl DELETE $headerl]

        return $res
    }

    # Copies the specified document.
    # destination – Destination document
    method doc_copy {id destination} {
        set myurl "http://$host:$port/$database/$id"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        lappend headerl Destination $destination
        set res [my send_request $myurl COPY $headerl]

        return $res
    }

    # Returns the file attachment associated with the document.
    # revision is Document revision.
    method docid_attachment_get {id attname revision} {
        set myurl "http://$host:$port/$database/$id/$attname"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        lappend  headerl If-Match $revision
        set res [my send_request $myurl GET $headerl]

        return $res
    }

    # Uploads the supplied content as an attachment to the specified document.
    # revision is the document revision.
    # ContentType must be the attachment's MIME type. Required!
    method docid_attachment_put {id attname revision ContentType data} {
        set myurl "http://$host:$port/$database/$id/$attname"
        set headerl [list Content-Type $ContentType]
        lappend  headerl If-Match $revision
        set res [my send_request $myurl PUT $headerl $data]

        return $res
    }

    # Deletes the attachment attachment of the specified doc.
    # revision is Document revision.
    method docid_attachment_delete {id attname revision} {
        set myurl "http://$host:$port/$database/$id/$attname"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        lappend  headerl If-Match $revision
        set res [my send_request $myurl DELETE $headerl]

        return $res
    }

    #
    # In CouchDB, design documents provide the main interface for building
    # a CouchDB application. The design document defines the views used to
    # extract information from CouchDB through one or more views.
    #

    # Returns the contents of the design document specified with the name
    # of the design document and from the specified database from the URL.
    method designdoc_get {ddocument} {
        set myurl "http://$host:$port/$database/_design/$ddocument"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl]

        return $res
    }

    # The PUT method creates a new named design document, or creates a new
    # revision of the existing design document.
    method designdoc_put {ddocument data} {
        set myurl "http://$host:$port/$database/_design/$ddocument"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl PUT $headerl $data]

        return $res
    }

    # Deletes the specified document from the database.
    method designdoc_delete {ddocument revision} {
        set myurl "http://$host:$port/$database/_design/$ddocument"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        lappend  headerl If-Match $revision
        set res [my send_request $myurl DELETE $headerl]

        return $res
    }

    # The COPY (which is non-standard HTTP) copies an existing
    # design document to a new or existing one.
    # destination – Destination document
    method designdoc_copy {ddocument destination} {
        set myurl "http://$host:$port/$database/_design/$ddocument"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        lappend headerl Destination $destination
        set res [my send_request $myurl COPY $headerl]

        return $res
    }

    # Returns the file attachment associated with the design document.
    # The raw data of the associated attachment is returned (just as if
    # you were accessing a static file).
    method designdoc_attachment_get {ddocument attname revision} {
        set myurl "http://$host:$port/$database/_design/$ddocument/$attname"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        lappend  headerl If-Match $revision
        set res [my send_request $myurl GET $headerl]

        return $res
    }

    # Uploads the supplied content as an attachment to the specified
    # design document. The attachment name provided must be a URL-encoded string.
    # revision is the document revision.
    # ContentType must be the attachment's MIME type. Required!
    method designdoc_attachment_put {ddocument attname revision ContentType data} {
        set myurl "http://$host:$port/$database/_design/$ddocument/$attname"
        set headerl [list Content-Type $ContentType]
        lappend  headerl If-Match $revision
        set res [my send_request $myurl PUT $headerl $data]

        return $res
    }

    # Deletes the attachment of the specified design document.
    # revision is Document revision.
    method designdoc_attachment_delete {ddocument attname revision} {
        set myurl "http://$host:$port/$database/_design/$ddocument/$attname"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        lappend  headerl If-Match $revision
        set res [my send_request $myurl DELETE $headerl]

        return $res
    }

    # Obtains information about the specified design document, including
    # the index, index size and current status of the design document and
    # associated index information.
    method designdoc_info {ddocument} {
        set myurl "http://$host:$port/$database/_design/$ddocument/_info"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl]

        return $res
    }

    # Executes the specified view function from the specified design document.
    method designdoc_view_get {ddocument viewname {data ""}} {
        set myurl "http://$host:$port/$database/_design/$ddocument/_view/$viewname"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl $data]

        return $res
    }

    # Executes the specified view function from the specified design document.
    method designdoc_view_post {ddocument viewname data} {
        set myurl "http://$host:$port/$database/_design/$ddocument/_view/$viewname"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl $data]

        return $res
    }

    # Executes update function on server side for null document.
    method designdoc_update_post {ddocument updatename data} {
        set myurl "http://$host:$port/$database/_design/$ddocument/_update/$updatename"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl $data]

        return $res
    }

    # Executes the update function on the server side for the specified document.
    method designdoc_updatename_post {ddocument updatename docid data} {
        set myurl "http://$host:$port/$database/_design/$ddocument/_update/$updatename/$docid"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl PUT $headerl $data]

        return $res
    }
}

A small part of the Apache CouchDB API wrapped with TclOO (it uses the http package to talk to Apache CouchDB). It can create a database, create new documents, get the current information, and delete the database.

(* updated 2015/12/08)

package require couchdbtcl

set username danilo
set password danilo

set myserver [CouchDB_Database new localhost 5984 wiki basic $username $password]

set response [$myserver create]
puts $response 

set response [$myserver db_post {{"text" : "Wikipedia on CouchDB", "rating": 5}}]
puts $response 

set response [$myserver info]
puts $response 

set response [$myserver delete]
puts $response 

Apache CouchDB test code

package require Tcl 8.6
package require TclOO
package require http
package require base64

# The CouchDB server interface provides the basic interface to a CouchDB
# server for obtaining CouchDB information and getting and setting
# configuration information.
#
oo::class create CouchDB_Server {
    variable host
    variable port
    variable authtype
    variable username
    variable password
    variable firstcookie
    variable authSession

    constructor {HOST PORT AUTHTYPE {USERNAME ""} {PASSWORD ""}} {
        set host $HOST
        set port $PORT
        set authtype $AUTHTYPE
        set username $USERNAME
        set password $PASSWORD
        set firstcookie 0
        set authSession ""
    }

    destructor {
    }

    method send_request {url method {headers ""} {data ""}} {
        # Supported authtype values: no, basic and cookie
        if {[string compare -nocase $authtype "basic"]==0} {
            set auth "Basic [base64::encode $username:$password]"
            lappend headers Authorization $auth
        } elseif {[string compare -nocase $authtype "cookie"]==0} {
            set cookiestring "AuthSession=$authSession"
            lappend headers Cookie $cookiestring
        }

        if { [string length $data] < 1 } {
            set tok [http::geturl $url -method $method -headers $headers]
        } else {
            set tok [http::geturl $url -method $method -headers $headers -query $data]
        }

        if {[string compare -nocase $authtype "cookie"]==0 && $firstcookie==1} {
            set meta [http::meta $tok]
            foreach {name value} $meta {
                if {[string compare $name Set-Cookie]==0} {
                    set firstlocation [string first "=" $value]
                    incr firstlocation 1
                    set lastlocation  [string first "; " $value]
                    incr lastlocation -1
                    set authSession [string range $value $firstlocation $lastlocation]
                }
            }
        }
        
        set res [http::data $tok] 
        http::cleanup $tok
        return $res
    }

    # Initiates new session for specified user credentials by providing Cookie value.
    method cookie_post {} {
        set firstcookie 1
        set myurl "http://$host:$port/_session"
        set headerl [list Accept "application/json" Content-Type "application/x-www-form-urlencoded"]
        set data [::http::formatQuery name $username password $password]
        set res [my send_request $myurl POST $headerl $data]

        set firstcookie 0

        return $res
    }

    # Returns complete information about authenticated user.
    method cookie_get {} {
        set myurl "http://$host:$port/_session"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set cookiestring "AuthSession=$authSession"
        lappend headerl Cookie $cookiestring
        set res [my send_request $myurl GET $headerl]

        return $res
    }

    # Closes user’s session.
    method cookie_delete {} {
        set myurl "http://$host:$port/_session"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set cookiestring "AuthSession=$authSession"
        lappend headerl Cookie $cookiestring
        set res [my send_request $myurl DELETE $headerl]

        return $res
    }

    # Accessing the root of a CouchDB instance returns meta information
    # about the instance.
    method hello {} {
        set myurl "http://$host:$port/"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl]

        return $res
    }

    # List of running tasks, including the task type, name, status and process ID.
    method active_tasks {} {
        set myurl "http://$host:$port/_active_tasks"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl]

        return $res
    }

    # Returns a list of all the databases in the CouchDB instance.
    method all_dbs {} {
        set myurl "http://$host:$port/_all_dbs"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl]

        return $res
    }

    # Returns a list of all database events in the CouchDB instance.
    method log {} {
        set myurl "http://$host:$port/_log"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl]

        return $res
    }

    #
    # CouchDB replication is a mechanism to synchronize databases. Much
    # like rsync synchronizes two directories locally or over a network,
    # replication synchronizes two databases locally or remotely.
    #

    # Request, configure, or stop a replication operation.
    method replicate {data} {
        set myurl "http://$host:$port/_replicate"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl $data]

        return $res
    }

    # Restarts the CouchDB instance.
    method restart {} {
        set myurl "http://$host:$port/_restart"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl POST $headerl]

        return $res
    }

    # Returns a JSON object containing the statistics for the running server.
    method stats {} {
        set myurl "http://$host:$port/_stats"
        set headerl [list Accept "application/json" Content-Type "application/json"]
        set res [my send_request $myurl GET $headerl]

        return $res
    }
}
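
The Set-Cookie handling in send_request can be exercised on its own. Below is a small sketch; the proc name extract_auth_session is mine (the class inlines this logic), but the string handling is the same as in send_request above:

```tcl
# Extract the AuthSession token from a CouchDB Set-Cookie header value,
# mirroring the string handling done in send_request above.
proc extract_auth_session {value} {
    set first [string first "=" $value]
    incr first 1
    set last [string first "; " $value]
    incr last -1
    return [string range $value $first $last]
}

# A typical CouchDB session cookie value looks like this:
puts [extract_auth_session "AuthSession=YWRtaW46NTZCQ; Version=1; Path=/"]
# → YWRtaW46NTZCQ
```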

_db_updates behaved oddly when I ran it (possibly because I set the timeout the wrong way): it only timed out and returned an empty result, so I left it out of this code. I have tried everything else in the CouchDB server interface, and at least none of it was as unresponsive as _db_updates.

Test environment:
Windows XP SP3
ActiveTcl 8.6.4.1
Apache CouchDB 1.6.1
HTTP Basic Authentication is enabled, so it is actually the parts of this code that need no authentication that have not been tested.

( * updated 2015-12-08)
( * updated 2015-12-09, added Cookie Authentication support)

The code below tests this class:

source "./couchdb.tcl"

set username admin
set password mypasswd

set myserver [CouchDB_Server new localhost 5984 basic $username $password]

set response [$myserver all_dbs]
puts $response

set response [$myserver active_tasks]
puts $response

set response [$myserver log]
puts $response

set response [$myserver stats]
puts $response

set response [$myserver restart]
puts $response

2015-12-06

HTTP Basic Authentication

The material comes from the Tcler's wiki page on http authentication.

It requires the base64 package from Tcllib and Tcl's built-in http package.
 
package require http
package require base64

set username admin
set password mypasswd

proc geturl_auth {url username password} {
    set auth "Basic [base64::encode $username:$password]"
    set headerl [list Authorization $auth]
    set tok [http::geturl $url -headers $headerl]
    set res [http::data $tok]
    http::cleanup $tok
    return $res
}


puts [geturl_auth http://127.0.0.1:5984/  $username $password]

In this example, username and password are variables set beforehand with set (replace them with a real account and password).
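
The Authorization header value is easy to inspect on its own. As an aside, Tcl 8.6 can also build it with the built-in binary encode base64 instead of Tcllib's base64 package; a small sketch using the placeholder credentials from above:

```tcl
# Build an HTTP Basic Authorization header value for admin:mypasswd.
# [binary encode base64] is built into Tcl 8.6, so Tcllib is not needed here.
set auth "Basic [binary encode base64 admin:mypasswd]"
puts $auth
# → Basic YWRtaW46bXlwYXNzd2Q=
```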

Another example: use the POST method to send a _restart request to CouchDB.

package require http
package require base64

set username admin
set password mypasswd

proc geturl_auth {url method username password} {
    set auth "Basic [base64::encode $username:$password]"
    set headerl [list Authorization $auth  Content-Type "application/json"]
    set tok [http::geturl $url -method $method -headers $headerl]
    set res [http::data $tok]
    http::cleanup $tok
    return $res
}


puts [geturl_auth http://127.0.0.1:5984/_restart POST $username $password]

Another example: use the GET method to send an _all_dbs request to CouchDB and get the list of current databases.

package require http
package require base64

set username admin
set password mypasswd

proc geturl_auth {url method username password} {
    set auth "Basic [base64::encode $username:$password]"
    set headerl [list Authorization $auth  Content-Type "application/json"]
    set tok [http::geturl $url -method $method -headers $headerl]
    set res [http::data $tok]
    http::cleanup $tok
    return $res
}


puts [geturl_auth http://127.0.0.1:5984/_all_dbs GET $username $password]

Another example: use the GET method to send a _uuids request to CouchDB; CouchDB replies with a UUID value.

package require http
package require base64

set username admin
set password mypasswd

proc geturl_auth {url method username password} {
    set auth "Basic [base64::encode $username:$password]"
    set headerl [list Authorization $auth  Content-Type "application/json"]
    set tok [http::geturl $url -method $method -headers $headerl]
    set res [http::data $tok]
    http::cleanup $tok
    return $res
}


puts [geturl_auth http://127.0.0.1:5984/_uuids GET $username $password]

Another example, and a basic CouchDB operation: use the PUT method to ask CouchDB to create a database.

package require http
package require base64

set username admin
set password mypasswd

proc geturl_auth {url method username password} {
    set auth "Basic [base64::encode $username:$password]"
    set headerl [list Authorization $auth  Content-Type "application/json"]
    set tok [http::geturl $url -method $method -headers $headerl]
    set res [http::data $tok]
    http::cleanup $tok
    return $res
}


puts [geturl_auth http://127.0.0.1:5984/albums PUT $username $password]

Another example, also a basic CouchDB operation: use the DELETE method to ask CouchDB to delete a database.

package require http
package require base64

set username admin
set password mypasswd

proc geturl_auth {url method username password} {
    set auth "Basic [base64::encode $username:$password]"
    set headerl [list Authorization $auth  Content-Type "application/json"]
    set tok [http::geturl $url -method $method -headers $headerl]
    set res [http::data $tok]
    http::cleanup $tok
    return $res
}


puts [geturl_auth http://127.0.0.1:5984/albums DELETE $username $password]

2015-11-30

Build TclBlend

TclBlend documentation:
Tcl/Java interoperability with TclBlend 1.4. A new precompiled binary for Window


Environment:
Windows XP
MINGW/MSYS
ActiveTcl 8.6.4.1
Java SE 8u65


Reference for building with MinGW/MSYS:
Building TclBlend with msys_mingw


After downloading TclBlend 1.4.1, apply these 3 patches from SourceForge:
21 allow TclBlend to load Tcl 8.5 into a Java process
22 Compile/Run against Tcl 8.6
24 Patch for TclJava Bug 2866640


Then make a small change to tclBlend\src\native\javaInterp.c (for Tcl/Tk 8.6):

    if (exception) {
        (*env)->DeleteLocalRef(env, exception);
        (void) Tcl_GetStringResult(interp);
        tPtr->errMsg = (char *) 
            ckalloc((unsigned) (strlen(Tcl_GetStringResult(interp)) + 1));
            //ckalloc((unsigned) (strlen(interp->result) + 1));
        //strcpy(tPtr->errMsg, interp->result);
        strcpy(tPtr->errMsg, Tcl_GetStringResult(interp));
        result = tPtr->errMsg;
    }

It builds. Whether it actually works remains to be seen.

(update) Trying a version that is more compatible with other Tcl releases:

        #if TCL_MAJOR_VERSION > 8 || \
           (TCL_MAJOR_VERSION == 8 && TCL_MINOR_VERSION > 5)
            ckalloc((unsigned) (strlen(Tcl_GetStringResult(interp)) + 1));
        #else
            ckalloc((unsigned) (strlen(interp->result) + 1));
        #endif
        #if TCL_MAJOR_VERSION > 8 || \
           (TCL_MAJOR_VERSION == 8 && TCL_MINOR_VERSION > 5)
        strcpy(tPtr->errMsg, Tcl_GetStringResult(interp));
        #else
        strcpy(tPtr->errMsg, interp->result);
        #endif



Update:
The examples do not run correctly; they crash.
The files provided with the reference document run fine, but my own build of tclBlend crashes when running the examples, and I don't know where the problem is.

Update:
Using TclBlend, Tcl/Tk 8.5.18, Thread extension 2.7.2 and JDK 8
(plain TclBlend 1.4.1, no patches applied)
the test succeeds:

% package require java
1.4.1

% java::import java.net.InetAddress

% puts "My IP Address is: [ [ java::call InetAddress getLocalHost ] getHostAddress ] "
My IP Address is: 192.168.2.105


Update 2015/12/01:
Could the problem be caused by the version differences between Tcl 8.5.18 and Tcl 8.6.4?


Update 2015/12/01:
Tested Tcl 8.6.0: success.
So it looks unrelated to the Thread extension and JDK versions.


Update 2015/12/01:
Tested Tcl 8.6.1: crash.

TCLBLEND_DEBUG: Interp.evalString()
TCLBLEND_DEBUG: cmd is : ->package provide java 1.4.1<-
TCLBLEND_DEBUG: JavaInitBlend returning
TCLBLEND_DEBUG: Tclblend_Init finished
TCLBLEND_DEBUG: JavaInitBlend returned TCL_OK
1.4.1
% java::import java.net.InetAddress
TCLBLEND_DEBUG: JavaCmdProc()
% puts "My IP Address is: [ [ java::call InetAddress getLocalHost ] getHostAddress ] "
TCLBLEND_DEBUG: JavaCmdProc()


Update:
Notes on how I build TclBlend with MinGW/MSYS against Tcl 8.6.x (even though TclBlend cannot be used with Tcl 8.6.1 and later).

1. Build Tcl/Tk 8.6.x with the following script

(the Tcl and Tk sources are placed under /src/tcl and /src/tk)

#!/bin/sh
mkdir -p /src
mkdir -p /opt/tcl
mkdir -p /build/tcl
mkdir -p /build/tk
[ -e /src/tcl ] && {
    cd /build/tcl
    /src/tcl/win/configure --prefix=/opt/tcl --enable-threads && make && make install && {
        [ -e /src/tk ] && {
        cd /build/tk
        /src/tk/win/configure --prefix=/opt/tcl --enable-threads --with-tcl=/build/tcl \
            && make && make install
        }
    }
}

2. Build TclBlend

Assume the source package has been downloaded and placed in /src/tclBlend:

2-1 mkdir /build/tclblend
2-2 cd /build/tclblend
2-3 /src/tclBlend/configure --prefix=/opt/tcl --with-tcl=/build/tcl --with-thread=/build/tcl/pkgs/thread2.7.2/ --with-jdk=/c/JDK8/
2-4 make
2-5 make install

The --with-thread path in step 2-3 depends on the Thread extension version bundled with that Tcl/Tk 8.6.x release. Since 8.6.x already ships with a Thread extension that was built in step 1, there is no need to download and build one separately.

To run TclBlend, start it with jtclsh.bat or jtclsh. Both scripts just set environment variables; the main ones that may need changing are:

set PREFIX=C:/MinGW/msys/1.0/opt/tcl
set EXEC_PREFIX=C:/MinGW/msys/1.0/opt/tcl
set TCLSH=C:/MinGW/msys/1.0/opt/tcl/bin/tclsh86

For example, if you copy the files under C:/MinGW/msys/1.0/opt/tcl to C:/Tcl/, these environment variables need to be adjusted accordingly.


Update:
TclBlend: Tcl core patch

So the crash should be fixed in Tcl 8.6.6; for 8.6.1 ~ 8.6.5 you have to patch the Tcl source code yourself.

2015-11-27

tcl-lmdb v0.2.4

Download page


tcl-lmdb - Tcl interface to the Lightning Memory-Mapped Database

About


This is the Lightning Memory-Mapped Database (LMDB) extension for Tcl using the Tcl Extension Architecture (TEA).

LMDB is a Btree-based database management library with an API similar to BerkeleyDB. The library is thread-aware and supports concurrent read/write access from multiple processes and threads. The DB structure is multi-versioned, and data pages use a copy-on-write strategy, which also provides resistance to corruption and eliminates the need for any recovery procedures. The database is exposed in a memory map, requiring no page cache layer of its own. This extension provides an easy to use interface for accessing LMDB database files from Tcl.

Change Log

  • generic/mdb.c: Update source code. LMDB 0.9.16 with extra fixes from github.
  • generic/tclmdb.c: env_handle open command add option -nosubdir

Notes


This release is only a small update: the env_handle open command gains a -nosubdir option, and LMDB is updated to the current code base.

2015-11-24

Firebird/Interbase Extension

If you search by the name Firebird, the answer is SQL Relay.

If you search for Interbase in the Great Unified Tcl/Tk Extension Repository, you find:
  • dbi - Generic Tcl interface to SQL databases, with Firebird and Interbase support
  • ibtcl
  • tcl-sql-ibase
These should be the choices besides TDBC-ODBC. I have not actually tested them; I only surveyed the current state of Firebird Tcl extensions.

2015-11-19

Push tclunqlite and tcl-lmdb code to Github

For both I simply created an empty Git repository with no history and pushed the source code to it.


tcl-lmdb
Tclunqlite


I don't know whether this will help these two projects, but it's worth a try.


* Update: also pushed the Tcltaglib source code:
tcltaglib

2015-11-18

TDBC-ODBC and Firebird ODBC

Test environment:
Windows XP
Firebird 2.5.4
Firebird ODBC 2.0.3
Active Tcl 8.6.4.1


TDBC currently provides no driver for Firebird, but Firebird has an ODBC driver and TDBC-ODBC is already available, so I tested how TDBC-ODBC works via the Firebird ODBC driver.

The ODBC DSN settings are as follows:


Then test with the following:

package require tdbc::odbc

set connStr "DSN=Firebird DSN; UID=danilo; PWD=danilo;"
tdbc::odbc::connection create db $connStr

set statement [db prepare {create table person (id integer, name varchar(40))}]
$statement execute
$statement close

set statement [db prepare {insert into person values(1, 'leo')}]
$statement execute
$statement close

set statement [db prepare {insert into person values(2, 'yui')}]
$statement execute
$statement close

set statement [db prepare {SELECT * FROM person}]

$statement foreach row {
    puts [dict get $row ID]
    puts [dict get $row NAME]
}

$statement close

db close


Then test dropping the table:

package require tdbc::odbc

set connStr "DSN=Firebird DSN; UID=danilo; PWD=danilo;"
tdbc::odbc::connection create db $connStr

set statement [db prepare {drop table person}]
$statement execute
$statement close
db close


It appears to work correctly.

2015-11-13

tcl-lmdb v0.2.3

Download page


tcl-lmdb - Tcl interface to the Lightning Memory-Mapped Database

About


This is the Lightning Memory-Mapped Database (LMDB) extension for Tcl using the Tcl Extension Architecture (TEA).

LMDB is a Btree-based database management library with an API similar to BerkeleyDB. The library is thread-aware and supports concurrent read/write access from multiple processes and threads. The DB structure is multi-versioned, and data pages use a copy-on-write strategy, which also provides resistance to corruption and eliminates the need for any recovery procedures. The database is exposed in a memory map, requiring no page cache layer of its own. This extension provides an easy to use interface for accessing LMDB database files from Tcl.

Change Log

  • generic/tclmdb.c: env_handle open command add option -fixedmap
  • generic/tclmdb.c: Add a thread exit handler to delete hash table.
  • generic/tclmdb.c: Try to use ThreadSpecificData to per thread hash table.
  • generic/mdb.c: Update source code. LMDB 0.9.16 with extra fixes from github.

Notes


Trying to make the implementation more thread-safe.

2015-11-12

TDBC stub (tdbcpostgres)

Tool:
pkgs/tdbc1.0.3/tools/genExtStubs.tcl


Judging from my experiments (using tdbcpostgres 1.0.3 as the example), two files need to be prepared:
  • pqStubDefs.txt: the client functions to use and the possible client library names
  • pqStubInit.c: the part after DO NOT EDIT THESE NAMES (pqStubLibNames, pqSymbolNames) is generated by the tool; write the rest by referring to the other drivers
Then prepare fakepq.h, used when no standard client header file is available. tdbcpostgres.c accesses the database by calling the client functions through the just-generated stubs, so the whole implementation is concentrated in that one file.

2015-11-08

tcl-lmdb v0.2.2

Download page


tcl-lmdb - Tcl interface to the Lightning Memory-Mapped Database

About


This is the Lightning Memory-Mapped Database (LMDB) extension for Tcl using the Tcl Extension Architecture (TEA).

LMDB is a Btree-based database management library with an API similar to BerkeleyDB. The library is thread-aware and supports concurrent read/write access from multiple processes and threads. The DB structure is multi-versioned, and data pages use a copy-on-write strategy, which also provides resistance to corruption and eliminates the need for any recovery procedures. The database is exposed in a memory map, requiring no page cache layer of its own. This extension provides an easy to use interface for accessing LMDB database files from Tcl.

Change Log

  • generic/tclmdb.c: Fix cursor_handle renew issue. (update)
  • generic/tclmdb.c: Implement dbi_handle del command behavior.

Notes


Improved the dbi_handle del functionality. I hope the description on my page is clear enough.

The command dbi_handle del deletes items from a database. If the database supports sorted duplicates and the data parameter is "" (empty string), all of the duplicate data items for the key will be deleted. Otherwise, if the data parameter is non-NULL, only the matching data item will be deleted.

2015-11-07

tcl-lmdb v0.2.1

Download page


tcl-lmdb - Tcl interface to the Lightning Memory-Mapped Database

About


This is the Lightning Memory-Mapped Database (LMDB) extension for Tcl using the Tcl Extension Architecture (TEA).

LMDB is a Btree-based database management library with an API similar to BerkeleyDB. The library is thread-aware and supports concurrent read/write access from multiple processes and threads. The DB structure is multi-versioned, and data pages use a copy-on-write strategy, which also provides resistance to corruption and eliminates the need for any recovery procedures. The database is exposed in a memory map, requiring no page cache layer of its own. This extension provides an easy to use interface for accessing LMDB database files from Tcl.

Change Log

  • generic/tclmdb.c: Implement cursor_handle get command option: -set_range -get_both_range
  • generic/mdb.c: Update source code. LMDB 0.9.16 with extra fixes from github.

Update, version kept at v0.2.1

  • generic/tclmdb.c: Add more check in env_handle copy command. Fix --with-system-lmdb option issue.

2015-11-05

tcl-lmdb v0.2

Download page


tcl-lmdb - Tcl interface to the Lightning Memory-Mapped Database

About


This is the Lightning Memory-Mapped Database (LMDB) extension for Tcl using the Tcl Extension Architecture (TEA).

LMDB is a Btree-based database management library with an API similar to BerkeleyDB. The library is thread-aware and supports concurrent read/write access from multiple processes and threads. The DB structure is multi-versioned, and data pages use a copy-on-write strategy, which also provides resistance to corruption and eliminates the need for any recovery procedures. The database is exposed in a memory map, requiring no page cache layer of its own. This extension provides an easy to use interface for accessing LMDB database files from Tcl.

Change Log


  • generic/tclmdb.c: Update source code
  • generic/mdb.c: Update source code
  • generic/tclmdb.c: Implement dbi_handle stat command
  • generic/tclmdb.c: lmdb open command add option:-reversekey and -reversedup
  • Makefile.in: add workaround for glibc pthread robust mutex support fix (for Linux glibc < 2.12)

2015/11/06 update, version kept at 0.2


  • configure.ac: add --with-system-lmdb option, which makes the Tcl bindings for LMDB use the system shared library for LMDB. The default is no.
  • generic/tclmdb.c: Add LMDB version check in env_handle copy command. The mdb_env_copy2 function requires LMDB >= 0.9.14.

Add --with-system-lmdb option to configure file (default setting is no).

If your Linux distribution (e.g. Debian, Ubuntu, Fedora, or openSUSE) ships LMDB, tcl-lmdb now supports using the system shared library for LMDB.

Below is an example:
./configure --with-system-lmdb=yes

2015-11-04

tcl-lmdb v0.1.1

Download page


tcl-lmdb - Tcl interface to the Lightning Memory-Mapped Database

About


This is the Lightning Memory-Mapped Database (LMDB) extension for Tcl using the Tcl Extension Architecture (TEA).

LMDB is a Btree-based database management library with an API similar to BerkeleyDB. The library is thread-aware and supports concurrent read/write access from multiple processes and threads. The DB structure is multi-versioned, and data pages use a copy-on-write strategy, which also provides resistance to corruption and eliminates the need for any recovery procedures. The database is exposed in a memory map, requiring no page cache layer of its own. This extension provides an easy to use interface for accessing LMDB database files from Tcl.

Notes


Implemented the cursor_handle count command.

I originally thought I had solved the problem with pthread robust mutex support in glibc 2.11 and earlier, but it turned out my fix was wrong, so it is not in this release. This release only adds the one extra command.

So, here is a benchmark (Tclunqlite and tcl-lmdb).

With the iteration count set to 1000:

UnQLite (without transaction)
write: 26590 microseconds per iteration
read: 11439 microseconds per iteration

LMDB
write: 64458 microseconds per iteration
read: 11631 microseconds per iteration


With the iteration count set to 10000:

UnQLite (without transaction)
write: 57926 microseconds per iteration
read: 75323 microseconds per iteration

LMDB
write: 97469 microseconds per iteration
read: 67560 microseconds per iteration

With the iteration count set to 100000:

UnQLite (without transaction)
write: 616260 microseconds per iteration
read: 777202 microseconds per iteration

LMDB
write: 371342 microseconds per iteration
read: 632069 microseconds per iteration


Benchmark program (UnQLite - write):

#!/usr/bin/tclsh

package require unqlite

unqlite db1 "test.db"

set result [time {
for {set i 1} {$i <= 100000} {incr i} {
    db1 kv_store $i $i
}
}]

puts $result

db1 close

Benchmark program (UnQLite - read):

#!/usr/bin/tclsh

package require unqlite

unqlite db1 "test.db"

set result [time {
for {set i 1} {$i <= 100000} {incr i} {
    puts [db1 kv_fetch $i]
}
}]

puts $result

db1 close


Benchmark program (LMDB - write):

#!/usr/bin/tclsh

package require lmdb

set myenv [lmdb env]
$myenv set_mapsize 1073741824
file mkdir "testdb"
$myenv open -path "testdb"
set mydbi [lmdb open -env $myenv]

set result [time {
set mytxn [$myenv txn]
for {set i 1} {$i <= 100000} {incr i} {
    $mydbi put $i $i -txn $mytxn
}
$mytxn commit
}]

puts $result

$mytxn close
$mydbi close -env $myenv
$myenv close

Benchmark program (LMDB - read):

#!/usr/bin/tclsh

package require lmdb

set myenv [lmdb env]
$myenv set_mapsize 1073741824
file mkdir "testdb"
$myenv open -path "testdb"
set mydbi [lmdb open -env $myenv]

set result [time {
set mytxn [$myenv txn]
for {set i 1} {$i <= 100000} {incr i} {
    puts [$mydbi get $i -txn $mytxn]
}
$mytxn commit
}]

puts $result

$mytxn close
$mydbi close -env $myenv
$myenv close

tcl-lmdb v0.1

Download page


tcl-lmdb - Tcl interface to the Lightning Memory-Mapped Database

About


This is the Lightning Memory-Mapped Database (LMDB) extension for Tcl using the Tcl Extension Architecture (TEA).

LMDB is a Btree-based database management library with an API similar to BerkeleyDB. The library is thread-aware and supports concurrent read/write access from multiple processes and threads. The DB structure is multi-versioned, and data pages use a copy-on-write strategy, which also provides resistance to corruption and eliminates the need for any recovery procedures. The database is exposed in a memory map, requiring no page cache layer of its own. This extension provides an easy to use interface for accessing LMDB database files from Tcl.

Notes


Implemented Tcl commands for part of the Lightning Memory-Mapped Database (LMDB) API.

2015-11-01

wiki.tcl.tk has been hijacked

wiki.tcl.tk has been hijacked


(excerpt)

It remains down.  The server that manages .tk registrations was hacked into. By exploring DNS resolutions of other .tk sites, we found at least half a dozen other .tk domains that were also redirected to the same unpleasant place. 

Note that neither tcl.tk nor its DNS provider, cloudflare.net, has been attacked. The problem is with the administration of the top-level .tk domain and is not limited to tcl.tk sites. 


It looks like a security problem with the .tk top-level domain, so every site under that domain gets redirected; it is not just a www.tcl.tk or wiki.tcl.tk problem. For now there is a temporary alternate site address that can be used.

2015-10-24

Tclunqlite v0.2.5

Download page


Tclunqlite

About


This is the UnQLite extension for Tcl using the Tcl Extension Architecture (TEA).

UnQLite is an in-process software library which implements a self-contained, serverless, zero-configuration, transactional NoSQL (Key/Value store and Document-store) database engine. This extension provides an easy to use interface for accessing UnQLite database files from Tcl.

Change Log


* Update README
* Makefile.in: Add 'PTHREAD_MUTEX_RECURSIVE' undeclared solution. A workaround for glibc 2.11 issue (add _GNU_SOURCE definition)
* configure.ac: Add UNQLITE_ENABLE_THREADS flag
* generic/tclunqlite.c: Enable UnQLite multi-thread support in Unqlite_Init function
* generic/tclunqlite.c: Add unqlite -enable-threads command to check current UNQLITE_ENABLE_THREADS setting

Notes


UnQLite is thread-safe, but only when compiled with UNQLITE_ENABLE_THREADS defined (which may make it slower). This release just adds that declaration and the related handling.

2015-10-22

'PTHREAD_MUTEX_RECURSIVE' undeclared solution

On Linux machines running glibc 2.11 you may run into this problem (if PTHREAD_MUTEX_RECURSIVE is used).

Solution:

Add the following check to the Makefile, or to Makefile.in if autoconf/configure is used:

# glibc 2.11 not declaring pthread_mutexattr_settype and PTHREAD_MUTEX_RECURSIVE
# by default, causing compilation failures on some Debian and Ubuntu version.
ifneq ("$(OS)","Windows_NT")
ifneq ($(shell ldd --version | head -n 1 | grep 2.11),)
        PKG_CFLAGS += -D_GNU_SOURCE
endif
endif

This solves the problem. However, I have not yet tested it on Linux machines with other glibc versions, or on Windows, to confirm that the change only affects the problem caused by glibc 2.11.

2015-10-19

Tclunqlite v0.2.4

Download page


Tclunqlite

About


This is the UnQLite extension for Tcl using the Tcl Extension Architecture (TEA).

UnQLite is an in-process software library which implements a self-contained, serverless, zero-configuration, transactional NoSQL (Key/Value store and Document-store) database engine. This extension provides an easy to use interface for accessing UnQLite database files from Tcl.

Notes


Mostly small fixes for my typos, some cleanup, and two new commands: doc_current_id and doc_last_id.

After implementing doc_current_id I realized that db_current_record_id has the same problem as the built-in Jx9 function db_fetch. Fixing it requires modifying UnQLite itself, and I don't want to publish my own hacked version (that would fragment the versions, and I would have to re-apply the fix for every new UnQLite release).

2015-10-15

Tclunqlite v0.2.2

Download page


Tclunqlite

About


This is the UnQLite extension for Tcl using the Tcl Extension Architecture (TEA).

UnQLite is an in-process software library which implements a self-contained, serverless, zero-configuration, transactional NoSQL database engine. This extension provides an easy to use interface for accessing UnQLite database files from Tcl.

Notes


Implemented the following commands so binary data can be accessed (values only; keys must still be strings):

DBNAME kv_store key value ?-binary BOOLEAN?
DBNAME kv_append key value ?-binary BOOLEAN?
DBNAME kv_fetch key ?-binary BOOLEAN?
CURSORNAME getdata ?-binary BOOLEAN?

So binary data can now be stored in an UnQLite database through Tclunqlite with kv_store or kv_append, and read back with kv_fetch or the cursor's getdata.
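
A usage sketch based on the command signatures above. It assumes a writable working directory and requires the Tclunqlite package; the key name and byte values are made up for illustration:

```tcl
package require unqlite

# Open (or create) the database file.
unqlite db1 "binary-test.db"

# Store raw bytes as the value; the key is still a string.
set bytes [binary format c* {0 1 2 255}]
db1 kv_store mykey $bytes -binary 1

# Fetch the value back as a byte array.
set out [db1 kv_fetch mykey -binary 1]
puts [string length $out]

db1 close
```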

2015-10-13

Tclunqlite v0.2.1

Download page


Tclunqlite

Notes


Tclunqlite v0.2.1 mainly implements the following two commands, primarily for testing:

DBNAME jx9_eval Jx9_script_string
DBNAME jx9_eval_file Jx9_script_file

They execute the Jx9 scripts that UnQLite supports. Both follow much the same flow:
  • unqlite_compile or unqlite_compile_file
  • unqlite_vm_exec
  • unqlite_vm_release
  • use the facilities of unqlite_vm_config to return the contents of the output buffer
Now consider the following Jx9 test script:

if( !db_exists('users') ) { print db_create('users'); }

$zRec = [
{
   name : 'james',
   age  : 27,
   mail : 'dude@example.com'
},
{
   name : 'robert',
   age  : 35,
   mail : 'rob@example.com'
},
{
   name : 'monji',
   age  : 47,
   mail : 'monji@example.com'
},
{
  name : 'barzini',
  age  : 52,
  mail : 'barz@mobster.com'
}
];

print db_store('users',$zRec);

print db_fetch('users');
print db_fetch('users');

就可以發現,DBNAME jx9_eval_file ( 呼叫 unqlite_compile_file)執行完整個的 Jx9 script 以後,db_fetch 的結果是正常的,也就是可以正確的移往下一筆資料。

但是如果使用 DBNAME jx9_eval ( 呼叫 unqlite_compile)執行,那麼 db_fetch 就只會拿到第一筆資料,而且不會正確的移往下一筆。

經過思考,雖然沒有去 trace 這一段 code,但是我大致上認為原因是因為我在寫 jx9_eval command 的時候,都會經歷 compile, execute and release 的流程,所以原本用來記錄目前 id 的資料就被設為初始值。但是要求經歷這個流程以後還能夠記住好像也有點奇怪(因為 VM 都被我 release 了)。

A brute-force fix would be a global variable, likewise initialized to 0: before each db_fetch operation, write the saved record id back, and after the operation record the current value. Similarly, trace db_reset_record_cursor to its source and do the same thing wherever the record id is touched (reset the global variable to 0). That solves the problem, but relying on a global variable could cause other trouble, including inconsistent bookkeeping.

In the end I did not ship the brute-force fix; only the jx9_eval and jx9_eval_file commands went into the package.
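
For reference, a hedged sketch of how the two commands might be called from Tcl (test.db and script.jx9 are example names I made up, and the package name is assumed):

```tcl
package require unqlite

unqlite db1 "test.db" -create 1

# Compiles the whole file into one VM run; successive db_fetch calls
# inside the script advance correctly.
puts [db1 jx9_eval_file "script.jx9"]

# Each jx9_eval call goes through its own compile/execute/release
# cycle, so the current-record id is reset between calls.
puts [db1 jx9_eval {print db_fetch('users');}]

db1 close
```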

2015-10-11

Tclunqlite v0.2

Project page

Tclunqlite

Updates

Added the following commands:

DBNAME doc_create collection_name
DBNAME doc_fetch
DBNAME doc_fetch_id record_id
DBNAME doc_fetchall
DBNAME doc_store json_record
DBNAME doc_count
DBNAME doc_delete record_id
DBNAME doc_begin
DBNAME doc_commit
DBNAME doc_rollback
DBNAME doc_drop
DBNAME doc_close
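
A minimal, hedged sketch of the document-store commands listed above (the package name unqlite and the file name test.db are assumptions):

```tcl
package require unqlite

unqlite db1 "test.db" -create 1

# Create (or open) a collection and store a JSON record in it,
# inside a document-store transaction.
db1 doc_create users
db1 doc_begin
db1 doc_store {{"name": "james", "age": 27}}
db1 doc_commit

puts [db1 doc_count]        ;# number of records in the collection
puts [db1 doc_fetchall]     ;# all records as JSON

db1 doc_close
db1 close
```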


Miscellaneous

The UnQLite Jx9 built-in function db_fetch does not automatically advance the cursor to the next record, so during testing I could not retrieve the next record. I don't know whether I am using it incorrectly or whether this is an issue in the UnQLite implementation.

2015-10-09

Tclunqlite v0.1

I implemented part of UnQLite's Database Engine Handle, Key/Value Store Interfaces, Cursor Iterator Interfaces and Manual Transaction Manager functionality.

UnQLite offers two storage models: key/value pairs and JSON documents. So far I have implemented part of the key/value functionality.

This is a draft implementation. I wrote some simple test files to exercise the implemented parts, but there has been no extensive testing, so please do not use it in environments that require high reliability.

Project page

Tclunqlite

About

This is the UnQLite extension for Tcl using the Tcl Extension Architecture (TEA).

UnQLite is an in-process software library which implements a self-contained, serverless, zero-configuration, transactional NoSQL database engine. This extension provides an easy-to-use interface for accessing UnQLite database files from Tcl.

License and version

BSD license, v0.1 (draft version)

UNIX BUILD

Building under most UNIX systems is easy: just run the configure script and then run make. For more information about the build process, see the tcl/unix/README file in the Tcl source distribution. The following minimal example will install the extension in the /opt/tcl directory.

$ cd tclunqlite
$ ./configure --prefix=/opt/tcl
$ make
$ make install

WINDOWS BUILD

The recommended method to build extensions under Windows is to use the Msys + Mingw build process. This provides a Unix-style build while generating native Windows binaries. Using the Msys + Mingw build tools means that you can use the same configure script as per the Unix build to create a Makefile.

Implemented commands

Basic usage

unqlite DBNAME FILENAME ?-readonly BOOLEAN? ?-mmap BOOLEAN? ?-create BOOLEAN? ?-in-memory BOOLEAN? ?-nomutex BOOLEAN?
DBNAME close
DBNAME config ?-disableautocommit BOOLEAN?

Key/value features

DBNAME kv_store key value
DBNAME kv_append key value
DBNAME kv_fetch key
DBNAME kv_delete key

Transactions

DBNAME begin
DBNAME commit
DBNAME rollback

Cursors

DBNAME cursor_init CURSORNAME
CURSORNAME seek
CURSORNAME first
CURSORNAME last
CURSORNAME next
CURSORNAME prev
CURSORNAME isvalid
CURSORNAME getkey
CURSORNAME getdata
CURSORNAME delete
CURSORNAME reset
CURSORNAME release

Misc

DBNAME random_string buf_size
DBNAME version
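
A hedged end-to-end sketch of the v0.1 interface listed above (package and file names are my assumptions):

```tcl
package require unqlite

unqlite db1 "test.db" -create 1

# Key/value operations inside a manual transaction.
db1 begin
db1 kv_store greeting "hello"
db1 kv_append greeting ", world"
db1 commit
puts [db1 kv_fetch greeting]

# Iterate over all records with a cursor.
db1 cursor_init cur
cur first
while {[cur isvalid]} {
    puts "[cur getkey] -> [cur getdata]"
    cur next
}
cur release

db1 close
```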

2015-09-15

TclRAL: Tcl Relational Algebra Library

A new version was released recently:
ANNOUNCE TclRAL Version 0.11.7 

From the project page:

TclRAL is a "C" based extension of the Tcl language that is an implementation of the Relational Algebra. From a programmer's point of view, the library is a Tcl package that introduces two new native data types and a set of commands that operate on them. The new data types are:
  • Tuple
  • Relation

Another interesting-looking package.

2015-09-06

A 3-D Canvas Widget For Tcl/Tk

While reading the AndroWish documentation, I noticed that it includes a 3-D Canvas Widget, presumably for its OpenGL support.

From the official site:

The 3-D Canvas Widget provides Tk programs with 3-D graphics capabilities through the use of OpenGL. But the 3-D Canvas widget is not another thin wrapper around the OpenGL interface. This widget is a much higher-level abstraction. Just as the built-in canvas widget of Tk is a high-level abstraction of the X11 drawing routines, so too the 3-D Canvas widget is a high-level abstraction of OpenGL. 

The 3dcanvas project is stable and has been in use and relatively unchanged for several years. New contributors who want to expand and enhance 3dcanvas are welcomed.

It looked interesting, so I picked tag 1.2.1 from the 3-D Canvas Widget timeline page, downloaded it, and tried building it with MSYS/MinGW.

It compiled without problems and the demos run, so it appears to be usable.

2015-08-23

Tcl extension: VecTcl

I saw a post on comp.lang.tcl:
Ann: VecTcl 0.2 release + binaries available


VecTcl 0.2 has been released, and binaries are available.


Official site:
VecTcl - numerical math in Tcl


From the official site:

This package provides a numerical array extension for Tcl with support for vectors, matrices and higher-rank tensors of integers, floating point and complex numbers. It has builtin support for basic array shaping, slicing and linear algebra subroutines and is designed to integrate seamlessly with Tcl. The user interface consists of a single command, vexpr, which acts as an expression evaluator similar to expr. The language supported by vexpr is inspired by Matlab, which closely models the language used by textbook math.


An interesting package; if it is efficient enough at run time, it should be helpful for numerical computation.
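
Based on the description above, usage looks roughly like this (a hedged sketch; I have not verified the exact operator syntax):

```tcl
package require vectcl
namespace import vectcl::vexpr

# Define a vector; vexpr variables are shared with Tcl variables
# in the caller's scope.
vexpr { a = {1 2 3} }

puts [vexpr { a + a }]    ;# elementwise addition
puts [vexpr { 2 * a }]    ;# scaling by a scalar
```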

2015-07-14

Build Tix extension package

Tix Widget Set for Tk/Tcl and Python


I built it with MSYS/MinGW. During make, compiling tixGrSort.c produced an error. Judging from the error message, a flag needs to be set; otherwise, under Tcl 8.6 the Tcl_Interp member result cannot be used directly.

typedef struct Tcl_Interp
#ifndef TCL_NO_DEPRECATED
{
    /* TIP #330: Strongly discourage extensions from using the string
     * result. */
#ifdef USE_INTERP_RESULT
    char *result TCL_DEPRECATED_API("use Tcl_GetStringResult/Tcl_SetResult");
                /* If the last command returned a string
                 * result, this points to it. */
    void (*freeProc) (char *blockPtr)
        TCL_DEPRECATED_API("use Tcl_GetStringResult/Tcl_SetResult");
                /* Zero means the string result is statically
                 * allocated. TCL_DYNAMIC means it was
                 * allocated with ckalloc and should be freed
                 * with ckfree. Other values give the address
                 * of function to invoke to free the result.
                 * Tcl_Eval must free it before executing next
                 * command. */
#else
    char *resultDontUse; /* Don't use in extensions! */
    void (*freeProcDontUse) (char *); /* Don't use in extensions! */
#endif
#ifdef USE_INTERP_ERRORLINE
    int errorLine TCL_DEPRECATED_API("use Tcl_GetErrorLine/Tcl_SetErrorLine");
                /* When TCL_ERROR is returned, this gives the
                 * line number within the command where the
                 * error occurred (1 if first line). */
#else
    int errorLineDontUse; /* Don't use in extensions! */
#endif
}
#endif /* TCL_NO_DEPRECATED */
Tcl_Interp;

So I rewrote the source code as shown below, though I am not sure whether the change is correct.
    order = strtol(Tcl_GetStringResult(sortInterp), &end, 0);
    if ((end == Tcl_GetStringResult(sortInterp)) || (*end != 0)) {

2015-07-10

TLS 1.6.7 與 OpenSSL

Updates to both TLS 1.6.7 and OpenSSL were released within the last day or two.

TLS has a new release, version 1.6.7.

Meanwhile, the OpenSSL project team issued an urgent advance notice that OpenSSL 1.0.2d and 1.0.1p would be released on July 9 to patch a high-severity vulnerability (news item: "Remember to update! OpenSSL hit by another rare high-risk vulnerability; patches announced for July 9"). Users on the OpenSSL 1.0.2 and 1.0.1 branches therefore need to upgrade to the new releases.

2015-05-04

Tcllib 1.17 released

Tcllib 1.17 released


An excerpt from the release README:

    6   new packages                in 5   modules
    66  changed packages            in 39  modules
    46  internally changed packages in 31  modules
    293 unchanged packages          in 74  modules
    418 packages, total             in 118 modules, total

The full details can be found in the
[release technote](http://core.tcl.tk/tcllib/technote/56416fe9cf1b0c0f5cd535861225a56af8228999).

2015-02-28

TLS v1.6.4 is out

TLS and Openssl - Static libraries (w32, mingw/msys)


After downloading TLS v1.6.4, I built it with the same MinGW/MSYS procedure as before and confirmed it compiles.


The test script comes from the Tcler's Wiki:
package require tls
variable quit 0

proc Read {chan} {
    if {[eof $chan]} { fileevent $chan readable {}; puts "Closed"; set ::forever 1; return }
    puts [read $chan]
    variable quit ; if {!$quit} { puts $chan QUIT; set quit 1 }
}

proc Write {chan} {
    fileevent $chan writable {}
    tls::handshake $chan
    fconfigure $chan -buffering line -blocking 0 -translation crlf
}

set sock [tls::socket -async pop.gmail.com 995]
fconfigure $sock -encoding binary -buffering none -blocking 1
fileevent $sock writable [list Write $sock]
fileevent $sock readable [list Read $sock]

vwait ::forever

2015-02-26

Wub Direct example

Tcler's Wiki: Wub Direct


I briefly tried the example provided on the Tcler's Wiki and found that, with the current code from an svn checkout, local.tcl has to be placed in the same directory as site.config and Wub.tcl.


I noticed a hello-direct.tcl under examples and wanted to test it, so in local.tcl I used:
source examples/hello-direct.tcl

Then I added the following configuration to site.config:
hello {
       domain Direct
       url /hello
       namespace ::Hello
}

Visiting http://localhost:8080/hello/ then shows page content (instead of throwing an error message), so it appears to have worked.


Using source directly in local.tcl still feels a bit odd, though; I don't know whether there is a better way.


Update:
After setting up hello-direct.tcl, I also used hello to test Wub's rewrite feature, with the following site.config entry:
testrewrite {
    url /testrewrite
    rewrite /hello
}

Testing with http://localhost:8080/testrewrite/ does indeed redirect to http://localhost:8080/hello/.

2015-02-12

REST library at Tcllib

a rest framework
rest - A framework for RESTful web services


I wrote a small program against the Facebook REST API to test it:
package require rest

set urls "http://tcl-eval.blogspot.tw/"
set res [rest::get http://api.facebook.com/restserver.php [list method links.getStats urls $urls]]
puts $res

Facebook returns the data as XML. I only printed it out to confirm it was fine, but it looks usable.

Here is the result:
<?xml version="1.0" encoding="UTF-8"?>
<links_getStats_response xmlns="http://api.facebook.com/1.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://api.facebook.com/1.0/ http://api.facebook.com/1.0/facebook.xsd" list="true">
  <link_stat>
    <url>http://tcl-eval.blogspot.tw/</url>
    <normalized_url>http://tcl-eval.blogspot.tw/</normalized_url>
    <share_count>0</share_count>
    <like_count>0</like_count>
    <comment_count>0</comment_count>
    <total_count>0</total_count>
    <click_count>0</click_count>
    <comments_fbid xsi:nil="true"/>
    <commentsbox_count>0</commentsbox_count>
  </link_stat>
</links_getStats_response>

2015-02-01

Tclqrencode v0.2, update libpng dll and header files

The version number stays at v0.2, because the main code base is unchanged. The main change is updating libpng under the win directory: a new build from the libpng v1.6.16 source code, along with updated header files.

I hope I did everything right.


Link:
Tclqrencode


I only tested with the example from the Tclqrencode v0.1 post; hopefully there are no problems.

2015-01-29

TclOO Past Present Future

TclOO Past Present Future - Tcl Community Association


A slide deck about TclOO that I found online; it gives an overview of TclOO's origins and development.

2015-01-26

FreeWrap 6.64 released

Seen on comp.lang.tcl:
ANNOUNCE: FreeWrap 6.64 released


Tcl/Tk has been updated to 8.6.3, and Tcllib to v1.16. Here are the release notes:
 
Changes implemented in version 6.64
------------------------------

  1. FreeWrap 6.64 is based on TCL/TK 8.6.3
  2. The 64-bit Windows version has tcllib1.16 included (instead of tcllib1.15).
  3. FreeWrap 6.64 now returns the correct time stamp for a file contained within a wrapped application or within a ZIP archive.
  4. The following additional ttk themes are now installed into freeWrap.
    - aquablue
    - aquativo
    - black
    - blue
    - clearlooks
    - keramik
    - keramik_alt
    - plastik
    - radiance
    - winxpblue

2015-01-24

tdbc::sqlite3 test script

Just a test script.

#!/usr/bin/tclsh

package require tdbc::sqlite3
tdbc::sqlite3::connection create db "sample.db" 


set statement [db prepare {drop table if exists person}]
$statement execute
$statement close

set statement [db prepare {create table person (id integer, name varchar(40))}]
$statement execute
$statement close

set statement [db prepare {insert into person values(1, 'leo')}]
$statement execute
$statement close

set statement [db prepare {insert into person values(2, 'yui')}]
$statement execute
$statement close

set statement [db prepare {SELECT * FROM person}]

$statement foreach row {
    puts [dict get $row id]
    puts [dict get $row name]
}

$statement close
db close
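
As a side note, tdbc connections also provide an allrows convenience method, so the final query could be written more compactly. A sketch, reusing the same sample.db:

```tcl
package require tdbc::sqlite3
tdbc::sqlite3::connection create db "sample.db"

# allrows prepares, executes and closes the statement in one call;
# by default it returns the rows as a list of dictionaries.
puts [db allrows {SELECT * FROM person}]

db close
```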

2015-01-18

SQLite extension

The Tcl source package already bundles SQLite, in the pkgs directory.


To build the latest version yourself (with MinGW/MSYS), here is my current procedure:

First, go to the SQLite download page and get the file described as "C source code as an amalgamation. Also includes a 'configure' script and TEA makefiles for the TCL Interface." For the current version, that is sqlite-autoconf-3080800.tar.gz.

Next, create a directory for SQLite; for the current version, sqlite3.8.8.

After extracting the archive downloaded from the SQLite download page, you will find a tea directory inside it. Copy all files under tea into the newly created sqlite3.8.8 directory, then copy sqlite3.c into the generic directory under sqlite3.8.8.

Then run:

./configure
make
make install

and that's it.
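
Once installed, a quick smoke test from Tcl (a sketch; an in-memory database is used so no files are touched):

```tcl
package require sqlite3

# Open an in-memory database, insert a row, and read it back.
sqlite3 db :memory:
db eval {CREATE TABLE t(x INTEGER)}
db eval {INSERT INTO t VALUES(42)}
puts [db eval {SELECT x FROM t}]   ;# prints 42
db close
```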