Redoing the old benchmark on my office's Mac, I installed CockroachDB using:
brew install cockroach
Start the server:
cockroach start --insecure --listen-addr=localhost
Create the database:
cockroach sql --insecure
CREATE DATABASE test3;
GRANT ALL ON DATABASE test3 TO test3;
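Note that the GRANT above assumes a SQL user named test3 already exists; if it doesn't, it can be created first (a sketch, not from the original post):

```sql
-- create the user that the GRANT statement refers to
CREATE USER test3;
```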
And do the benchmark:
alias time='/usr/local/bin/gtime -f "\nCPU: %Us\tReal: %es\tRAM: %MKB"'
time go run cockroach.go lib.go
INSERT: 1.653080201s (0.17 ms/op)
UPDATE: 1.947936008s (0.19 ms/op)
SELECT: 1m44.425784366s (1.60 ms/op: 65096)
CPU: 43.20s Real: 108.71s RAM: 89072KB
The benchmark was performed on an i7-4770HQ, 16GB RAM, and an SSD, on OSX Mojave 10.14.2 under Go 1.11. Conclusion: the result is really impressive :3 Great job, CockroachDB team.
Showing posts with label database.
2019-01-10
2017-05-19
PostgreSQL 9.6.2 vs ScyllaDB 1.7RC2
Since CockroachDB 1.0 is not yet performant (and I ain't got time to wait), today we're gonna test PostgreSQL 9.6.2 vs ScyllaDB 1.7RC2 on Ubuntu with an XFS filesystem.
test1: postgresql
INSERT: 34.667685316s (3.47 ms/op)
UPDATE: 35.117617526s (3.51 ms/op)
SELECT: 47.529755777s (0.73 ms/op)
CPU: 35.14s Real: 117.64s RAM: 58 544 KB
test2: postgresql jsonb
INSERT: 33.861673279s (3.39 ms/op)
UPDATE: 34.038996914s (3.40 ms/op)
SELECT: 45.340834079s (0.70 ms/op)
CPU: 33.62s Real: 113.58s RAM: 58 140 KB
test4: scylladb
INSERT: 2.133985799s (0.21 ms/op)
UPDATE: 2.167973712s (0.22 ms/op)
SELECT: 2m24.804415353s (2.22 ms/op)
CPU: 41.29s Real: 152.57s RAM: 79 708 KB
Hmm.. this is weird, because the differences between this and the previous benchmark are:
Label | This | Previous
---|---|---
Operating System | Ubuntu 17.04 | Manjaro
Kernel | 4.10.0-19-generic tuned by scylla_setup | aufs_friendly 4.10.13-1
PostgreSQL config (effective_cache_size / shared_buffers / work_mem) | ubuntu default: 4G/128M/4M | modified: 2G/2G/16M
PostgreSQL total time | 113.58s | 116.34s
ScyllaDB | official deb 1.7RC2 | official docker 1.6.4
ScyllaDB config | 8GB / 8 / XFS | 4GB / 4 / EXT4
ScyllaDB total time | 152.57s | 141.34s
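For reference, the "modified" PostgreSQL settings from the previous run would map to postgresql.conf entries like these (a sketch reconstructed from the table values, not the actual config file):

```
effective_cache_size = 2GB
shared_buffers = 2GB
work_mem = 16MB
```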
Probably a kernel factor? Oh yeah, you can get the source on GitHub (you can create a PR if there's a bug). Now let's test it in parallel with one third of the data (showing only the first 3 and last 3 results):
test1: postgresql
I-18: (6.17 ms/op: 101)
I-12: (6.16 ms/op: 101)
I-14: (6.19 ms/op: 101)
I-32: (157.80 ms/op: 101)
I-05: (167.31 ms/op: 101)
I-31: (168.23 ms/op: 101)
INSERT: 16.99355453s (5.10 ms/op)
U-13: (5.92 ms/op: 101)
U-16: (5.97 ms/op: 101)
U-04: (6.00 ms/op: 101)
U-21: (1312.46 ms/op: 101)
U-28: (1333.33 ms/op: 101)
U-20: (1333.60 ms/op: 101)
UPDATE: 2m14.695128106s (40.41 ms/op)
S-24: (14.56 ms/op: 139)
S-69: (17.86 ms/op: 115)
S-78: (24.18 ms/op: 88)
S-06: (18.14 ms/op: 556)
S-46: (18.54 ms/op: 556)
S-37: (1427.27 ms/op: 91)
SELECT: 2m9.888985893s (5.98 ms/op: 21716)
CPU: 13.07s Real: 281.90s RAM: 59 072 KB
test2: postgresql jsonb
I-03: (6.23 ms/op: 101)
I-10: (6.28 ms/op: 101)
I-04: (6.30 ms/op: 101)
I-08: (157.54 ms/op: 101)
I-29: (225.82 ms/op: 101)
I-22: (314.39 ms/op: 101)
INSERT: 31.754559744s (9.53 ms/op)
U-23: (6.45 ms/op: 101)
U-13: (6.51 ms/op: 101)
U-04: (6.54 ms/op: 101)
U-05: (1287.63 ms/op: 101)
U-01: (1287.77 ms/op: 101)
U-26: (1953.67 ms/op: 101)
UPDATE: 3m17.321405467s (59.20 ms/op)
S-27: (27.54 ms/op: 124)
S-21: (21.89 ms/op: 159)
S-29: (30.55 ms/op: 115)
S-48: (323.69 ms/op: 417)
S-05: (202.41 ms/op: 667)
S-03: (121.61 ms/op: 1111)
SELECT: 2m15.109162326s (6.22 ms/op: 21716)
CPU: 13.78s Real: 364.50s RAM: 56 688 KB
test4: scylladb
I-17: (1.59 ms/op: 101)
I-12: (1.59 ms/op: 101)
I-30: (1.60 ms/op: 101)
I-13: (1.65 ms/op: 101)
I-22: (1.65 ms/op: 101)
I-06: (1.65 ms/op: 101)
INSERT: 166.992399ms (0.05 ms/op)
U-06: (1.60 ms/op: 101)
U-12: (1.61 ms/op: 101)
U-29: (1.62 ms/op: 101)
U-31: (1.68 ms/op: 101)
U-30: (1.68 ms/op: 101)
U-11: (1.68 ms/op: 101)
UPDATE: 170.240627ms (0.05 ms/op)
S-79: (52.05 ms/op: 86)
S-74: (50.06 ms/op: 98)
S-75: (53.78 ms/op: 96)
S-03: (12.18 ms/op: 1111)
S-02: (8.43 ms/op: 1666)
S-42: (8.53 ms/op: 1666)
SELECT: 14.678651323s (0.68 ms/op: 21716)
CPU: 16.92s Real: 18.08s RAM: 76 824 KB
WTF! These numbers are blazing fast!
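The per-worker lines above (I-xx, U-xx, S-xx) come from fanning the operations out across concurrent workers. A minimal sketch of that fan-out pattern in Go (names and worker counts are mine; the real benchmark is in the linked repo):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// runParallel runs op opsPerWorker times in each of nWorkers goroutines
// and returns how long each worker took, like the I-xx/U-xx/S-xx lines.
func runParallel(nWorkers, opsPerWorker int, op func(worker, i int)) []time.Duration {
	durs := make([]time.Duration, nWorkers)
	var wg sync.WaitGroup
	for w := 0; w < nWorkers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			start := time.Now()
			for i := 0; i < opsPerWorker; i++ {
				op(w, i) // a real benchmark would do an INSERT/UPDATE/SELECT here
			}
			durs[w] = time.Since(start)
		}(w)
	}
	wg.Wait()
	return durs
}

func main() {
	// dummy op standing in for a real database call
	durs := runParallel(32, 101, func(w, i int) { _ = w * i })
	fmt.Printf("%d workers finished\n", len(durs))
}
```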
Too bad that gocql (or probably the scylla-server) sometimes refused to connect:
2017/05/19 14:16:28 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: i/o timeout
2017/05/19 14:16:33 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: i/o timeout
2017/05/19 14:20:30 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: i/o timeout
2017/05/19 14:18:28 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: getsockopt: connection refused
2017/05/19 14:18:44 gocql: unable to dial control conn 127.0.0.1: dial tcp 127.0.0.1:9042: i/o timeout
2017/05/19 14:18:44 gocql: unable to create session: control: unable to connect to initial hosts: dial tcp 127.0.0.1:9042: i/o timeout
2017/05/19 14:20:34 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: i/o timeout
2017/05/19 14:20:37 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: i/o timeout
2017/05/19 14:27:46 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: i/o timeout
Or when using cqlsh 127.0.0.1:
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(None, "Tried connecting to [('127.0.0.1', 9042)]. Last error: timed out")})
Connection error: ('Unable to complete the operation against any hosts', {})
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(4, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Interrupted system call")})
Other than those issues, I think ScyllaDB is freaking awesome!
EDIT: this issue only happened after running these intensive benchmarks; after a restart it no longer happened.
2016-12-02
List of Tech Migrations
I'm quite fascinated by the decision (and obviously the effort) of leaving a language or database for another technology; here's the list I've found:
NOTE: this list will no longer be updated; you can see the latest changes on the GitHub repo, so anyone can contribute/update it.
- Twitter (2011) from Ruby to Scala
- Urban Airship (2011) from MongoDB to PostgreSQL
- Mixpanel (2011) from Erlang to Python
- LinkedIn (2012) from Ruby to NodeJS
- MoovWeb (2012) from ? to Go
- NTP Pool (2012) from Perl to Go
- DLGoogle (2012) from C++ to Go
- Bitly (2012) from Python/C to Go
- SoundCloud (2012) from Ruby to Go
- SmartyStreets (2012) from .NET to Go
- Toggl (2012) from Ruby to Go
- Shareaholic (2012) from MongoDB to Riak
- DigiDoc (2012) from MongoDB to PostgreSQL
- Etsy (2012) from MongoDB to MySQL
- TekPub (2012) from MongoDB to PostgreSQL
- TargeterApp (2012) from NodeJS to Ruby
- PayPal (2013) from Java to NodeJS
- Koding (2013) from NodeJS to Go
- Iron.io (2013) from Ruby to Go
- Zalora (2013) from PHP to Go
- TimeHop (2013) from Ruby to Go
- Canonical Juju (2013) from Python to Go
- Instill (2014) from AngularDart to Polymer
- Workia (2014) from ActionScript to Dart
- TrustWave (2014) from ActionScript/Flex to Dart
- Facebook Chat (2014) from Erlang to C++
- Grab (2014) from Ruby/NodeJS to Go
- Auth0 (2015) from Apache Kafka to ZeroMQ
- Pinterest (2015) from Java to Elixir
- President University (2015) from PHP/MySQL to Go/PostgreSQL
- UserLike (2015) from MySQL and MongoDB to PostgreSQL
- Olery (2015) from MongoDB to PostgreSQL
- CrowdStrike (2015) from Scala to Go
- DockYard (2015) from Ruby to Elixir
- CrowdStrike (2016) from Scala to Go
- Tokopedia (2016) from Perl to Go
- Uber (2016) from PostgreSQL to MySQL
- Uber (2016) from NodeJS to Go and Java
- Uber (2016) from Python to Go
- Digg (2016) from NodeJS to Go
- Google AdWords (2016) from GWT to Angular Dart
- Bleacher Report (2017) from Ruby to Elixir
- UpGuard (2017) from JRuby to Go
- Movio (2017) from Scala to Go
- Discord (2017) from MongoDB to Cassandra
- Uber (2017) from Riak/Solr to Cassandra
- Swat.io (2017) from MySQL to PostgreSQL
- Poki (2017) from PHP to Go
- Stream (2017) from Python to Go
- Baidu (2017) from MySQL to CockroachDB
- Boxzilla (2017) from PHP to Go
- Shippable (2017) from MongoDB to PostgreSQL
- GeekyAnts (2017) from ReactNative to Google Flutter (confirmed)
- AppTree (2018) from Java/Kotlin/ObjC/Swift to Dart
- SendGrid (2018) from Perl/Python to Go
- Bukalapak (2018) from Ruby/MySQL to Go/MongoDB/ElasticSearch
- Centrifugo (2018) from Python to Go
- The Guardian (2018) from MongoDB to PostgreSQL
- AppsFlyer (2019) from Clojure to Go
- Deliveroo (2019) from Ruby to Rust
- Amazon (2019) from OracleDB to Aurora Postgres and DynamoDB
- Two Fintech Giants (2019) from OracleDB to CockroachDB
- Comcast (2019) from Cassandra to ScyllaDB
- Codism (2019) from Python to Go
- Discord (2020) from Go to Rust
- HSBC (2020) from DB2 to MongoDB
- Unnamed US Wireless Carrier (2020) from Oracle to PostgreSQL
- ListenBrainz (2020) from InfluxDB to TimescaleDB
- Dropbox (2020) from Nginx to Envoy
- Repustate (2020) from Python to Go
- SimilarWeb (2020) from NodeJS to Go
- PayPal (2020) from C++ to Go
- KhanAcademy (2021) from Python to Go
- ownCloud (2021) from PHP to Go
- Percona (2021) from Prometheus to VictoriaMetrics
- Aluma (2021) from C# to Go
So many migrating to Go ^^. Even when it's not about migration, there are a lot more that apparently chose Go (interviews, from X to Go, and also more here).
If you found any more news like this, paste the link on the comment, I'll gladly add them on the list.
Labels: database, migration, move, programming language, programming switch, technology
2016-11-28
Installing AeroSpike on ArchLinux
As benchmarked previously (more than half a year before this blog post was written), Aerospike has awesome search/retrieval performance. In this tutorial we will learn how to install Aerospike on ArchLinux:
wget -O aerospike.tgz 'http://aerospike.com/download/server/latest/artifact/tgz'
tar xzvf aerospike.tgz
cd aerospike-server
./bin/aerospike init
Configure the etc/aerospike.conf into something like this:
service {
    user YOUR_USERNAME
    group YOUR_GROUP
    run-as-daemon
    paxos-single-replica-limit 1
    pidfile /home/YOUR_USERNAME/aerospike-server/var/run/aerospike.pid
    transaction-queues 8
    transaction-threads-per-queue 8
    transaction-pending-limit 255 # HOT KEY ERROR WORKAROUND FOR INCREMENT
    proto-fd-max 15000
    work-directory /home/YOUR_USERNAME/aerospike-server/var
}
logging {
    file /home/YOUR_USERNAME/aerospike-server/var/log/aerospike.log {
        context any info
    }
}
mod-lua {
    system-path /home/YOUR_USERNAME/aerospike-server/share/udf/lua
    user-path /home/YOUR_USERNAME/aerospike-server/var/udf/lua
}
network {
    service {
        address any
        port 3333 # USE DIFFERENT PORT IF CONFLICT
        #reuse-address # COMMENT/REMOVE THIS PART
    }
    heartbeat {
        mode multicast
        multicast-group 239.1.99.222 # CHANGE address INTO multicast-group
        port 9918
        interval 150
        timeout 10
    }
    fabric {
        port 3331 # USE DIFFERENT PORT IF CONFLICT
    }
    info {
        port 3332 # USE DIFFERENT PORT IF CONFLICT
    }
}
namespace test {
    replication-factor 2
    memory-size 4G # CHANGE DEPENDS ON YOUR RAM
    default-ttl 0 # e.g. 30 days; use 0 to never expire/evict
    storage-engine device {
        file /home/YOUR_USERNAME/aerospike-server/data
        filesize 128M # MAX SIZE OF EACH FILE
    }
}
Then start the server:
./bin/aerospike restart
tail -f var/log/aerospike.log &
./bin/aerospike status
The first and last commands should show something like this:
info: stopped
info: started
info: process running
Done, now you can test your aerospike server using any client :3
Note: aerospike-tools is only available for Debian, Ubuntu, and RedHat.
2015-10-15
State of JSON in SQLite and NoSQL
Another piece of news for those who like the JSON format: there's the JSON1 extension for SQLite, which is great news for those who only need an embedded database for their app. Its functions are similar to PostgreSQL's JSON/JSONB functions. On the other hand, there are already a lot of BSON solutions in the NoSQL world, such as the infamous MongoDB (and TokuMX), RethinkDB, ArangoDB (SQL-like syntax), PouchDB (a JavaScript version of CouchDB), SequoiaDB, DjonDB, etc. For more information about those databases, see the videos below.
SQLite 3 Tutorial
PostgreSQL 9.4's JSONB
What's new in MongoDB 3
Fractal Tree Index (TokuMX)
RethinkDB
ArangoDB
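To give an idea of the SQLite JSON1 functions mentioned above, a minimal sketch (table and column names are mine; assumes SQLite 3.9.0+ built with the JSON1 extension):

```sql
-- store a JSON document in an ordinary TEXT column
CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT);
INSERT INTO docs (body) VALUES ('{"name":"ann","age":17,"tags":["x","y"]}');

-- extract fields, similar to PostgreSQL's -> / ->> operators
SELECT json_extract(body, '$.name') FROM docs;    -- ann
SELECT json_extract(body, '$.tags[1]') FROM docs; -- y

-- update a field in place
UPDATE docs SET body = json_set(body, '$.age', 18);
```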
2015-02-11
How to Install OrientDB Community Edition in Linux
OrientDB is a full-featured NoSQL database: it is an RDBMS (just like PostgreSQL, MySQL, Oracle, MS SQL Server), a document-oriented database (just like MongoDB and Cassandra), a key-value store (yes, just like Redis, Memcached, Riak, Couchbase and many more) and also a graph database (just like Neo4j). To install the software, first make sure you have installed the JDK and Ant. Then download, extract the archive, and compile using these commands:
wget -c https://github.com/orientechnologies/orientdb/archive/2.0.2.tar.gz -O orientdb-2.0.2.tar.gz
tar xvfz orientdb-2.0.2.tar.gz
cd orientdb-2.0.2
ant
cd ../releases/orientdb-community-2.0.2
To start the server, use this command:
cd bin/
./server.sh
The first time, you will be asked for a new root password. To start the console, use this command:
./console.sh
Type help within the console for more information about the console commands. And lastly if you want to use the OrientDB Studio UI, just visit this URL: http://127.0.0.1:2480.
You can learn more about OrientDB here.
2014-11-27
How to use Go-Couchbase
Today I will try to use Go-Couchbase to insert, fetch, update, and delete from the database. To install go-couchbase, type this command:
go get -u -v github.com/couchbaselabs/go-couchbase
Then just create an example source code and run it:
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"runtime"
	"strings"

	"github.com/couchbaselabs/go-couchbase"
	"github.com/kr/pretty"
)

var FILE_PATH string
var CB *couchbase.Bucket

// initialize the file path and connect to the bucket
func init() {
	_, file, _, _ := runtime.Caller(1)
	FILE_PATH = file[:4+strings.Index(file, "/src/")]
	var err error
	CB, err = couchbase.GetBucket("http://127.0.0.1:8091/", "default", "default")
	Panic(err, "Error connecting, getting pool or bucket: %v")
}

// Check prints a warning message when err is not nil
func Check(err error, msg string, args ...interface{}) error {
	if err != nil {
		_, file, line, _ := runtime.Caller(1)
		prefix := fmt.Sprintf("%s:%d: ", file[len(FILE_PATH):], line)
		fmt.Fprintf(os.Stderr, prefix+msg+"\n", args...)
		fmt.Fprintf(os.Stderr, "%# v\n", pretty.Formatter(err))
	}
	return err
}

// Panic prints the error message and exits the program
func Panic(err error, msg string, args ...interface{}) {
	if Check(err, msg, args...) != nil {
		panic(err)
	}
}

// Explain pretty-prints variables with the caller's location
func Explain(args ...interface{}) {
	_, file, line, _ := runtime.Caller(1)
	for _, arg := range args {
		fmt.Printf("%s:%d: %# v\n", file[len(FILE_PATH):], line, pretty.Formatter(arg))
	}
}

func main() {
	var err error
	// save values (upsert)
	err = CB.Set("someKey", 0, []string{"an", "example", "list"})
	Check(err, "failed to set someKey")
	err = CB.Set("primaryKey", 0, 1)
	Check(err, "failed to set primaryKey")
	// fetch one value
	var rv interface{}
	err = CB.Get("someKey", &rv)
	Check(err, "failed to get someKey")
	Explain(rv)
	// fetch with check-and-set (CAS) id
	cas := uint64(0)
	err = CB.Gets("primaryKey", &rv, &cas)
	Check(err, "failed to get primaryKey")
	Explain(cas, rv)
	// fetch multiple values
	rows, err := CB.GetBulk([]string{"someKey", "primaryKey", "nothingKey"})
	Check(err, "failed to get someKey, primaryKey or nothingKey")
	Explain(rows)
	jsonStr := rows["someKey"].Body
	Explain(string(jsonStr))
	stringList := []string{}
	err = json.Unmarshal(jsonStr, &stringList)
	Check(err, "failed to convert back from json")
	Explain(stringList)
	// increment value, returns the new value
	nv, err := CB.Incr("primaryKey", 1, 1, 0)
	Check(err, "failed to increment primaryKey")
	Explain(nv)
	// increment value, defaults to 1 if it does not exist (60s TTL)
	nv, err = CB.Incr("key3", 1, 1, 60)
	Check(err, "failed to increment key3")
	Explain(nv)
}
This would give an output:
/test.go:92: []interface {}{ "an", "example", "list", }
/test.go:98: uint64(0x13aa8b32b9f7f091)
/test.go:98: float64(1)
/test.go:103: map[string]*gomemcached.MCResponse{
    "primaryKey": &gomemcached.MCResponse{
        Opcode: 0x0, Status: 0x0, Opaque: 0x0,
        Cas: 0x13aa8b32b9f7f091,
        Extras: {0x0, 0x0, 0x0, 0x0},
        Key: {},
        Body: {0x31},
        Fatal: false,
    },
    "someKey": &gomemcached.MCResponse{
        Opcode: 0x0, Status: 0x0, Opaque: 0x0,
        Cas: 0x13aa8b32b9e4690f,
        Extras: {0x0, 0x0, 0x0, 0x0},
        Key: {},
        Body: {0x5b, 0x22, 0x61, 0x6e, 0x22, 0x2c, 0x22, 0x65, 0x78, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x22, 0x2c, 0x22, 0x6c, 0x69, 0x73, 0x74, 0x22, 0x5d},
        Fatal: false,
    },
}
/test.go:106: "[\"an\",\"example\",\"list\"]"
/test.go:111: []string{"an", "example", "list"}
/test.go:116: uint64(0x2)
/test.go:121: uint64(0x4)
How to install Couchbase on ArchLinux
Couchbase is the NoSQL database with the best performance AFAIK. To install Couchbase, we need git and the repo tool, which can be installed using these commands:
sudo pacman -S git libtool gcc libevent make gperftools sqlite erlang automake autoconf curl dmidecode
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod +x ~/bin/repo
Change the first line of the repo script from python to python2.7, then initialize and start fetching the Couchbase repository:
mkdir couchbase
cd couchbase
repo init -u git://github.com/couchbase/manifest.git -m released/3.0.1.xml
repo sync
To prevent failure when building python-based programs, symlink your python to the older one:
sudo ln -sf python2.7 /usr/bin/python
Install older version of v8 (3.21.17 or less), using this command:
yaourt -S v8-3.15
V8PKG=v8-3.19.18.4-1-x86_64.pkg.tar.xz
wget http://seblu.net/a/arm/packages/v/v8/$V8PKG
sudo pacman -U $V8PKG
Then compile the Couchbase:
make
Note: if this step fails, clean first using make clean, then compile v8 in the v8 folder inside the couchbase directory. If you're using the latest version of GCC, remove all -Werror strings from the build/standalone.gypi and build/toolchain.gypi files:
make dependencies
export PYTHON=python2
find build/ test/ tools/ src/ -type f \
-exec sed -e 's_^#!/usr/bin/env python$_&2_' \
-e 's_^\(#!/usr/bin/python2\).[45]$_\1_' \
-e 's_^#!/usr/bin/python$_&2_' \
-e "s_'python'_'python2'_" -i {} \;
sed -i 's/python /python2 /' Makefile
sed -i 's/-Werror//' build/standalone.gypi build/common.gypi
make x64.release library=shared console=readline
Alternatively use this modified PKGBUILD file:
wget http://goo.gl/miEmFt -O PKGBUILD
makepkg
sudo pacman -U v8-3.21-3.21.17-1-x86_64.pkg.tar.xz
Don't forget to increase the default limit on the number of open files:
echo '
* soft nofile 65536
* hard nofile 65536
' | sudo tee -a /etc/security/limits.conf
And last, start the server:
./install/bin/couchbase-server
Then just visit the web interface to set up the cluster: http://localhost:8091/
That's it, that's how you install Couchbase on ArchLinux from source.
2014-11-24
Database GUI for MongoDB
There are many database GUIs for MongoDB; today I will review GenghisApp, Humongous, and RoboMongo.
You can install the Ruby-based ones using these commands:
gem install humongous
gem install genghisapp bson_ext
To start them, use these commands:
humongous # visit http://localhost:9000
genghisapp # visit http://localhost:8000
php -S localhost:8000 genghis.php # php version
To stop them, use these commands:
humongous -K
genghisapp --kill
I don't know why, but when I tried both, they did not run at all: they just stopped after showing "trying port xxxx", the programs close immediately after that, and they don't show up in ps or netstat.
The next in line is RoboMongo, first, you must download using this command:
wget -c http://robomongo.org/files/linux/robomongo-0.8.4-x86_64.tar.gz
Extract it, then run the robomongo.sh shell script in the bin/ folder and set the connection; it would then show something like this:
This one is quite good: it shows the command line when doing any action, but as usual, a Java-based app doesn't work well with a dark GTK+ theme.
There are some other GUI for MongoDB that support live-monitoring, for example MongoBird and MongOwl, but don't forget, there are also built-in web monitoring on port 28017.
2014-08-04
Open Source Desktop Database GUI
When developing database apps, sometimes we need an easier user interface to query the database, instead of using the command line (mysql for MySQL, psql for PostgreSQL), or MySQL Workbench, or the default pgAdmin. When you're using any IDE from JetBrains, you can use their Database Explorer.
DBeaver is an open source, cross-platform database explorer, but it doesn't really work well with dark themes.
SQL Workbench is also an open source, Java-based database explorer; it works fine with dark themes, but fails to display invalid Dates.
Execute Query is quite fine; I have no rants about this one.
Oh well, I'll find and try another one later.