
2017-05-19

PostgreSQL 9.6.2 vs ScyllaDB 1.7RC2

Since CockroachDB 1.0 is not yet performant (ain't got time to wait), today we're gonna test PostgreSQL 9.6.2 vs ScyllaDB 1.7RC2 on Ubuntu with the XFS filesystem.
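
Each ms/op figure below is just the total wall time of N operations divided by N. Here is a minimal sketch of that timing loop for the PostgreSQL case (the test1 table, its columns, and the connection string are made up for illustration; the real source is linked further down):

package main

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/lib/pq" // PostgreSQL driver
)

func main() {
	db, err := sql.Open("postgres", "dbname=test2 sslmode=disable")
	if err != nil {
		panic(err)
	}
	const n = 9999
	start := time.Now()
	for i := 0; i < n; i++ {
		// one round-trip per op; the real benchmark writes a small JSON document
		_, err := db.Exec(`INSERT INTO test1(name, info) VALUES($1, $2)`,
			fmt.Sprintf("name%d", i), `{"score":1}`)
		if err != nil {
			panic(err)
		}
	}
	dur := time.Since(start)
	fmt.Printf("INSERT: %v (%.2f ms/op)\n", dur, float64(dur)/float64(time.Millisecond)/n)
}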

test1: postgresql
INSERT: 34.667685316s (3.47 ms/op)
UPDATE: 35.117617526s (3.51 ms/op)
SELECT: 47.529755777s (0.73 ms/op)
CPU: 35.14s     Real: 117.64s   RAM: 58 544 KB

test2: postgresql jsonb
INSERT: 33.861673279s (3.39 ms/op)
UPDATE: 34.038996914s (3.40 ms/op)
SELECT: 45.340834079s (0.70 ms/op)
CPU: 33.62s     Real: 113.58s   RAM: 58 140 KB

test4: scylladb
INSERT: 2.133985799s (0.21 ms/op)
UPDATE: 2.167973712s (0.22 ms/op)
SELECT: 2m24.804415353s (2.22 ms/op)
CPU: 41.29s     Real: 152.57s   RAM: 79 708 KB

Hmm.. this is weird, because the differences between this and the previous benchmark are:

Label                 | This                    | Previous
Operating System      | Ubuntu 17.04            | Manjaro
Kernel                | 4.10.0-19-generic       | aufs_friendly 4.10.13-1
                      | (tuned by scylla_setup) |
PostgreSQL config     | ubuntu default:         | modified:
(effective_cache /    | 4G/128M/4M              | 2G/2G/16M
 shared_buffers /     |                         |
 work_mem)            |                         |
PostgreSQL total time | 113.58s                 | 116.34s
ScyllaDB              | official deb 1.7RC2     | official docker 1.6.4
ScyllaDB config       | 8GB / 8 / XFS           | 4GB / 4 / EXT4
ScyllaDB total time   | 152.57s                 | 141.34s

Probably the kernel factor? Oh yeah, you can get the source on GitHub (you can create a PR if there's a bug). Now let's run it in parallel with one third of the data (only showing the first 3 and last 3 results):

test1: postgresql
I-18: (6.17 ms/op: 101)
I-12: (6.16 ms/op: 101)
I-14: (6.19 ms/op: 101)
I-32: (157.80 ms/op: 101)
I-05: (167.31 ms/op: 101)
I-31: (168.23 ms/op: 101)
INSERT: 16.99355453s (5.10 ms/op)
U-13: (5.92 ms/op: 101)
U-16: (5.97 ms/op: 101)
U-04: (6.00 ms/op: 101)
U-21: (1312.46 ms/op: 101)
U-28: (1333.33 ms/op: 101)
U-20: (1333.60 ms/op: 101)
UPDATE: 2m14.695128106s (40.41 ms/op)
S-24: (14.56 ms/op: 139)
S-69: (17.86 ms/op: 115)
S-78: (24.18 ms/op: 88)
S-06: (18.14 ms/op: 556)
S-46: (18.54 ms/op: 556)
S-37: (1427.27 ms/op: 91)
SELECT: 2m9.888985893s (5.98 ms/op: 21716)
CPU: 13.07s     Real: 281.90s   RAM: 59 072 KB

test2: postgresql jsonb
I-03: (6.23 ms/op: 101)
I-10: (6.28 ms/op: 101)
I-04: (6.30 ms/op: 101)
I-08: (157.54 ms/op: 101)
I-29: (225.82 ms/op: 101)
I-22: (314.39 ms/op: 101)
INSERT: 31.754559744s (9.53 ms/op)
U-23: (6.45 ms/op: 101)
U-13: (6.51 ms/op: 101)
U-04: (6.54 ms/op: 101)
U-05: (1287.63 ms/op: 101)
U-01: (1287.77 ms/op: 101)
U-26: (1953.67 ms/op: 101)
UPDATE: 3m17.321405467s (59.20 ms/op)
S-27: (27.54 ms/op: 124)
S-21: (21.89 ms/op: 159)
S-29: (30.55 ms/op: 115)
S-48: (323.69 ms/op: 417)
S-05: (202.41 ms/op: 667)
S-03: (121.61 ms/op: 1111)
SELECT: 2m15.109162326s (6.22 ms/op: 21716)
CPU: 13.78s     Real: 364.50s   RAM: 56 688 KB

test4: scylladb
I-17: (1.59 ms/op: 101)
I-12: (1.59 ms/op: 101)
I-30: (1.60 ms/op: 101)
I-13: (1.65 ms/op: 101)
I-22: (1.65 ms/op: 101)
I-06: (1.65 ms/op: 101)
INSERT: 166.992399ms (0.05 ms/op)
U-06: (1.60 ms/op: 101)
U-12: (1.61 ms/op: 101)
U-29: (1.62 ms/op: 101)
U-31: (1.68 ms/op: 101)
U-30: (1.68 ms/op: 101)
U-11: (1.68 ms/op: 101)
UPDATE: 170.240627ms (0.05 ms/op)
S-79: (52.05 ms/op: 86)
S-74: (50.06 ms/op: 98)
S-75: (53.78 ms/op: 96)
S-03: (12.18 ms/op: 1111)
S-02: (8.43 ms/op: 1666)
S-42: (8.53 ms/op: 1666)
SELECT: 14.678651323s (0.68 ms/op: 21716)
CPU: 16.92s     Real: 18.08s    RAM: 76 824 KB

WTF! These numbers are blazing fast!
Too bad that gocql, or probably the scylla-server, sometimes refused to connect:

2017/05/19 14:16:28 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: i/o timeout
2017/05/19 14:16:33 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: i/o timeout
2017/05/19 14:20:30 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: i/o timeout

2017/05/19 14:18:28 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: getsockopt: connection refused
2017/05/19 14:18:44 gocql: unable to dial control conn 127.0.0.1: dial tcp 127.0.0.1:9042: i/o timeout
2017/05/19 14:18:44 gocql: unable to create session: control: unable to connect to initial hosts: dial tcp 127.0.0.1:9042: i/o timeout

2017/05/19 14:20:34 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: i/o timeout
2017/05/19 14:20:37 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: i/o timeout 

2017/05/19 14:27:46 gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: i/o timeout

Or when using cqlsh 127.0.0.1:

Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(None, "Tried connecting to [('127.0.0.1', 9042)]. Last error: timed out")})
Connection error: ('Unable to complete the operation against any hosts', {})
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(4, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Interrupted system call")})

Other than those issues, I think ScyllaDB is freakingly awesome!

EDIT: this issue only happened after running these intensive benchmarks; after restarting, it no longer happened.
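
Still, if you hit the same timeouts, one workaround is to raise gocql's connect timeouts and retry session creation for a while. This is only a sketch of that idea (the helper and the retry counts are my own, not part of the benchmark code):

package main

import (
	"log"
	"time"

	"github.com/gocql/gocql"
)

// createSessionWithRetry is a hypothetical helper: instead of giving up on the
// first "i/o timeout", it raises gocql's timeouts and retries a few times.
func createSessionWithRetry(host string) (*gocql.Session, error) {
	cluster := gocql.NewCluster(host)
	cluster.ConnectTimeout = 10 * time.Second
	cluster.Timeout = 10 * time.Second

	var lastErr error
	for attempt := 0; attempt < 5; attempt++ {
		sess, err := cluster.CreateSession()
		if err == nil {
			return sess, nil
		}
		lastErr = err
		log.Println("retrying CreateSession:", err)
		time.Sleep(3 * time.Second)
	}
	return nil, lastErr
}

func main() {
	sess, err := createSessionWithRetry("127.0.0.1")
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	log.Println("connected")
}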

2017-05-15

PostgreSQL 9.6.2 vs CockroachDB 1.0 vs ScyllaDB 1.6.4

New kid on the block, a multi-master database that recently released 1.0; here are some microbenchmark results:

N = 999

test1: postgresql
INSERT: 3.442695947s (3.45 ms/op)
UPDATE: 3.912135754s (3.92 ms/op)
SELECT: 3.408927374s (0.52 ms/op)
CPU: 2.62s      Real: 11.07s    RAM: 57 864 KB

test2: postgresql jsonb
INSERT: 3.270218052s (3.27 ms/op)
UPDATE: 3.796453051s (3.80 ms/op)
SELECT: 3.209289448s (0.49 ms/op)
CPU: 2.33s      Real: 10.57s    RAM: 58 680 KB

test3: cockroachdb
INSERT: 7.495245970s (7.50 ms/op)
UPDATE: 8.249719113s (8.26 ms/op)
SELECT: 16m8.372273781s (148.34 ms/op)
CPU: 2.50s      Real: 986.17s   RAM: 58 340 KB

test4: scylladb
INSERT: 150.117719ms (0.15 ms/op)
UPDATE: 147.339553ms (0.15 ms/op)
SELECT: 5.422713068s (0.83 ms/op)
CPU: 4.06s      Real: 7.76s     RAM: 76 764 KB

N = 9999

test2: postgresql jsonb
INSERT: 36.012436525s (3.60 ms/op)
UPDATE: 35.902222429s (3.59 ms/op)
SELECT: 44.119970723s (0.68 ms/op)
CPU: 32.30s     Real: 116.34s   RAM: 58 632 KB

test4: scylladb
INSERT: 1.518285796s (0.15 ms/op)
UPDATE: 1.542542984s (0.15 ms/op)
SELECT: 2m16.29325852s (2.09 ms/op)
CPU: 41.55s     Real: 141.34s   RAM: 76 712 KB

It is shocking to me that CockroachDB is 2x-19x slower than PostgreSQL, so I filed a bug report, and one for Scylla (slow query on larger datasets).
This benchmark was performed on 64-bit ArchLinux, i7-4720HQ, 16GB RAM, 256GB Samsung SSD. You can get the source here. Note that ScyllaDB requires XFS, but I used the EXT4 filesystem.

PostgreSQL
+ battle-tested for 20 years
+ schema-free (via JSONB, it has indexes :3; see the sketch after this list)
+ triggers, joins, language extensions (eg. pl/v8, pl/go, pl/ruby, etc)
- no multi-master replication, unless you use PostgresXL or Postgres-X2
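
For the JSONB point above, a minimal sketch of what "it has indexes" means in practice (table and column names are made up for illustration):

package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq"
)

func must(db *sql.DB, q string) {
	if _, err := db.Exec(q); err != nil {
		panic(err)
	}
}

func main() {
	db, err := sql.Open("postgres", "dbname=test2 sslmode=disable")
	if err != nil {
		panic(err)
	}
	// a schema-free document column, plus a GIN index over it
	must(db, `CREATE TABLE IF NOT EXISTS docs (id bigserial PRIMARY KEY, data jsonb)`)
	must(db, `CREATE INDEX IF NOT EXISTS idx_docs_data ON docs USING GIN (data)`)
	must(db, `INSERT INTO docs(data) VALUES ('{"name":"a","score":9}')`)

	// the @> containment operator can use that GIN index
	var n int
	if err := db.QueryRow(`SELECT count(*) FROM docs WHERE data @> '{"name":"a"}'`).Scan(&n); err != nil {
		panic(err)
	}
	fmt.Println("matching docs:", n)
}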

CockroachDB
+ survives Aphyr's Jepsen tests
+ autoscaling, autohealing, autobalancing
+ only 1 binary file (Go power ^^)
- seriously slow on every part of this benchmark
- no BLOB! (as per 2017-05-15)

ScyllaDB
+ Cassandra rewritten in C++
+ autoscaling, autohealing, autobalancing
+ blazing fast for the insert and update benchmarks (not sure if it's persisted to disk though)
- no secondary indexes or serial/auto-increment (as of 2017-05-15)
- only supports Ubuntu, Debian, RHEL (a bit challenging to compile on another OS because it depends on old thrift and boost libraries)
- communication is unauthenticated by default; this is bad if you don't have a private network (eg. hosting on a public cloud), so you must enable internode encryption and set up a firewall to allow only certain hosts to access the exposed ports.

2017-05-11

TechEmpower Framework Benchmark Round 14

The new benchmark results are out; as usual, the important part is the data-update benchmark:


In that chart, the top-ranking languages are: Kotlin, C, Java, C++, Go, Perl, Javascript, Scala, and C#; and for the databases: MySQL, PostgreSQL, MongoDB.

The other benchmark that reflects a real-world use case is multiple-queries:
In that benchmark, the top-performing programming languages are: Dart, C++, Java, C, Go, Kotlin, Javascript, Scala, Ruby, and Ur; and the databases: MongoDB, PostgreSQL, MySQL. You can see the previous results here, and here.


2016-11-16

TechEmpower Framework Benchmark Round 13

After a long wait, the latest TechEmpower Framework Benchmark Round 13 is out! And just like the previous one, Go's fasthttp is really, really fast. As usual, the most important part is the data-updates benchmark:


Top rankers in this part (more than 1024 req/s) are Go, C++, Java, Javascript (NodeJS), Perl, C#, Elixir, Dart, Scala, Python, Clojure, and Groovy (actually PHP is there, just below the last one in the picture, with 1018 req/s). And for the database part: PostgreSQL, MongoDB, and MySQL.

2016-02-28

Unfair Database Benchmark

So today we will benchmark some popular databases: PostgreSQL, MySQL (Percona), Aerospike, Couchbase, MongoDB, and Redis, under 64-bit Linux (i7-4700MQ, 16GB RAM, non-SSD hard disk) and Go with a certain library for each. The settings: each database may use at most 4GB of working memory, each insert is done in a transaction (if possible), an index is created (if possible), and searches are performed 40 times the number of insertions, with random keys.

Disclaimer: this is not an apples-to-apples comparison; you can find the source (and send a pull request) on my GitHub or Dropbox. Let's get started!
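
Roughly, the per-database loop looks like the sketch below; insertOne and searchOne stand in for the real per-database implementations (the names and the simulated work are made up; the actual code is in the repository linked above):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// insertOne and searchOne stand in for the per-database implementations
// (PostgreSQL, MySQL, MongoDB, ...); here they only simulate work.
func insertOne(id int) { time.Sleep(time.Microsecond) }
func searchOne(id int) { time.Sleep(time.Microsecond) }

func main() {
	const n = 1000 // the "tiny" size; micro = 100, small = 10000

	start := time.Now()
	for i := 0; i < n; i++ {
		insertOne(i)
	}
	fmt.Printf("INSERT tiny %d %.2f ms\n", n, float64(time.Since(start))/float64(time.Millisecond))

	// search 40x the number of inserted rows, picking ids at random
	start = time.Now()
	for i := 0; i < 40*n; i++ {
		searchOne(rand.Intn(n))
	}
	fmt.Printf("SEARCH tiny %d %.2f ms\n", n, float64(time.Since(start))/float64(time.Millisecond))
}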

PostgreSQL (9.5.1)

using JSONB data type

sudo pacman -S postgresql

sudo su - postgres <<EOF
initdb --locale en_CA.UTF-8 -E UTF8 -D '/var/lib/postgres/data'
EOF

echo "
max_connections = 64                 
superuser_reserved_connections = 3   
shared_buffers = 2GB                 
work_mem = 32MB                      
dynamic_shared_memory_type = posix  
effective_cache_size = 4GB
logging_collector = off         
datestyle = 'iso, dmy'
timezone = 'Asia/Jakarta'
lc_messages = 'en_CA.UTF-8'                    
lc_monetary = 'en_CA.UTF-8'                    
lc_numeric = 'en_CA.UTF-8'                     
lc_time = 'en_CA.UTF-8'                        
default_text_search_config = 'pg_catalog.english'
" > /var/lib/postgresql/data/postgresql.conf

sudo systemctl start postgresql

sudo su - postgres <<EOF
createdb test2
EOF

go get -u -v gitlab.com/kokizzu/gokil

The result:

initial database size: 7192 kB
INSERT micro 100 983.62 ms
INSERT tiny 1000 13223.53 ms
INSERT small 10000 119006.15 ms
final database size: 10 MB
SEARCH micro 100 354.34 ms
SEARCH tiny 1000 3499.50 ms
SEARCH small 10000 35499.98 ms


MySQL (Percona 5.7.10)

using InnoDB; the current JSON data type doesn't support indexing

sudo pacman -S percona-server percona-server-clients 

echo '[client]
port            = 3306
socket          = /run/mysqld/mysqld.sock
[mysqld]
port            = 3306
socket          = /run/mysqld/mysqld.sock
datadir         = /var/lib/mysql
symbolic-links=0
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
key_buffer_size = 800M
max_allowed_packet = 16M
table_open_cache = 64
sort_buffer_size = 32M
net_buffer_length = 4M
read_buffer_size = 16M
read_rnd_buffer_size = 16M
myisam_sort_buffer_size = 128M
innodb_buffer_pool_size = 4G
[mysqld_safe]
log-error       = /var/log/mysqld.log
pid-file        = /run/mysqld/mysqld.pid
' > /etc/mysql/my.cnf

sudo systemctl start mysqld

echo 'CREATE DATABASE test2' | mysql -u root

The result:

initial database size: 0
INSERT micro 100 3258.85 ms
INSERT tiny 1000 32297.04 ms
INSERT small 10000 341951.99 ms
final database size: 2.02 MB
SEARCH micro 100 282.32 ms
SEARCH tiny 1000 2843.62 ms
SEARCH small 10000 28404.53 ms


MongoDB (3.2.3)

using struct and map[string]interface{} inside the struct

sudo pacman -S mongodb

go get -u -v gopkg.in/mgo.v2
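
A minimal sketch of the "struct with a map[string]interface{} inside" approach mentioned above, using mgo (the Doc type and its field names are made up for illustration):

package main

import (
	"fmt"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// Doc is a guess at the shape used here: a struct with a schema-free map inside it.
type Doc struct {
	ID   bson.ObjectId          `bson:"_id,omitempty"`
	Name string                 `bson:"name"`
	Info map[string]interface{} `bson:"info"`
}

func main() {
	sess, err := mgo.Dial("127.0.0.1")
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	col := sess.DB("test2").C("docs")

	// insert one document, then search it back via a field inside the map
	if err := col.Insert(&Doc{Name: "a", Info: map[string]interface{}{"score": 9}}); err != nil {
		panic(err)
	}
	var out Doc
	if err := col.Find(bson.M{"info.score": 9}).One(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Name, out.Info)
}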

The result:

initial database size: 78 MB

INSERT micro 100 10.19 ms
INSERT tiny 1000 83.58 ms
INSERT small 10000 894.40 ms
final database size: 78 MB
SEARCH micro 100 321.75 ms
SEARCH tiny 1000 3183.75 ms
SEARCH small 10000 33069.50 ms


CouchBase (4.0.0)

invalid implementation; it always panics with: {MCResponse status=0x20 keylen=0, extralen=0, bodylen=12}

sudo yaourt -S couchbase-server-community

go get -u -v github.com/couchbase/go-couchbase

sudo systemctl restart couchbase-server

Visit http://127.0.0.1:8091 to configure, I use 3000MB data cache and 1000MB index cache.

The result (incorrect):

initial database size: 4.06 MB
INSERT micro 100 132.29 ms
INSERT tiny 1000 1317.61 ms
INSERT small 10000 13082.86 ms
final database size: -
SEARCH micro 100 - ms
SEARCH tiny 1000 69020.08 ms
SEARCH small 10000 - ms


AeroSpike (3.7.4)

without a secondary index, since I could not find a source or binary package for their command-line tool for creating the index

sudo yaourt -S aerospike-server-ce
# change this two line on PKGBUILD:
# source=("http://aerospike.com/download/server/latest/artifact/tgz")
# sha512sums=('e9a574e5020db9542de51ad0a1d8da28b8f89d10505848c406656cf113654c0135dfd63fe0aaaafb06f50938124a00275f1b2a0a5ead7058b7b53964c046f3a1')

go get -u -v github.com/aerospike/aerospike-client-go

cd /
sudo aerospike init 

echo '
namespace test2 {
  replication-factor 2
  memory-size 4G
  default-ttl 0 # use 0 to never expire/evict.
  storage-engine device {
    file /opt/aerospike/data
    filesize 1G
  }
}' >> /etc/aerospike.conf

sudo mkdir /opt/aerospike

sudo aerospike start
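
Since there is no secondary index, every access in this test goes through the primary key. A minimal sketch of what those client calls look like (the set and bin names are made up; the test2 namespace follows the config above):

package main

import (
	"fmt"

	aerospike "github.com/aerospike/aerospike-client-go"
)

func main() {
	client, err := aerospike.NewClient("127.0.0.1", 3000)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// no secondary index here, so build a primary key in the test2 namespace
	key, err := aerospike.NewKey("test2", "docs", "row:1")
	if err != nil {
		panic(err)
	}
	// write a record with two bins
	if err := client.Put(nil, key, aerospike.BinMap{"name": "a", "score": 9}); err != nil {
		panic(err)
	}
	// read it back by the same key
	rec, err := client.Get(nil, key)
	if err != nil {
		panic(err)
	}
	fmt.Println(rec.Bins["name"], rec.Bins["score"])
}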

The result:

initial database size: 1 GB
INSERT micro 100 4.63 ms
INSERT tiny 1000 40.75 ms
INSERT small 10000 426.20 ms
final database size: 1 GB
SEARCH micro 100 113.93 ms
SEARCH tiny 1000 1086.70 ms
SEARCH small 10000 11259.24 ms


Redis (3.0.7)

using gob for serialization and deserialization.

sudo yaourt -S redis

go get -u -v github.com/garyburd/redigo/redis

sudo systemctl start redis
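
A minimal sketch of the "gob for serialization and deserialization" approach mentioned above, using redigo (the Row type and the key name are made up for illustration):

package main

import (
	"bytes"
	"encoding/gob"
	"fmt"

	"github.com/garyburd/redigo/redis"
)

// Row is a made-up record type; the point is gob-encoding it into a Redis value.
type Row struct {
	Name  string
	Score int
}

func main() {
	conn, err := redis.Dial("tcp", "127.0.0.1:6379")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// serialize with gob, store under a plain string key
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(Row{Name: "a", Score: 9}); err != nil {
		panic(err)
	}
	if _, err := conn.Do("SET", "row:1", buf.Bytes()); err != nil {
		panic(err)
	}

	// read it back and gob-decode
	data, err := redis.Bytes(conn.Do("GET", "row:1"))
	if err != nil {
		panic(err)
	}
	var out Row
	if err := gob.NewDecoder(bytes.NewReader(data)).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out)
}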

The result:

initial database size: 18 bytes

INSERT micro 100 3.94 ms
INSERT tiny 1000 38.28 ms
INSERT small 10000 377.24 ms
final database size: 3.9 MB
SEARCH micro 100 424.79 ms
SEARCH tiny 1000 4263.79 ms
SEARCH small 10000 42904.11 ms

Here's the recap:


2016-02-10

Simpler Way to find rows with Largest/Smallest value for each group in SQL

Sometimes we need to find the largest or smallest value for each group, together with its associated columns. Recently I found out that DISTINCT ON works quite well, better than my previous ways (a LEFT JOIN on a lesser value with IS NULL, or WHERE IN (SELECT MAX/MIN)); see this example:

CREATE TABLE test1 ( id bigserial primary key, val int, name text );

INSERT INTO test1(val,name) VALUES(6,'a'),(9,'a'),(5,'b'),(11,'b'),(2,'c'),(8,'c');

SELECT * FROM test1;

 id | val | name 
----+-----+------
  1 |   6 | a
  2 |   9 | a
  3 |   5 | b
  4 |  11 | b
  5 |   2 | c
  6 |   8 | c

To find which id for each name has the greatest value, we usually do something like this (WHERE IN (SELECT MAX)):

SELECT *
FROM test1 x1
WHERE (name, val) IN (
  SELECT name, MAX(val) 
  FROM test1
  GROUP BY 1
)
ORDER BY x1.name;


 id | val | name
----+-----+------
  2 |   9 | a
  4 |  11 | b
  6 |   8 | c

 Sort  (cost=73.95..74.64 rows=275 width=44)
   Sort Key: test1.name
   ->  Hash Join  (cost=33.50..62.81 rows=275 width=44)
         Hash Cond: ((test1.name = test1_1.name) AND (test1.val = (max(test1_1.val))))
         ->  Seq Scan on test1  (cost=0.00..21.00 rows=1100 width=44)
         ->  Hash  (cost=30.50..30.50 rows=200 width=36)
               ->  HashAggregate  (cost=26.50..28.50 rows=200 width=36)
                     Group Key: test1_1.name
                     ->  Seq Scan on test1 test1_1  (cost=0.00..21.00 rows=1100 width=36)

Or this (a LEFT JOIN on a lesser value with IS NULL):

SELECT x1.*
FROM test1 x1
  LEFT JOIN test1 x2
    ON x1.name = x2.name
    AND x1.val < x2.val
WHERE x2.id IS NULL
GROUP BY x1.id, x1.name
ORDER BY x1.name; 


 id | val | name 
----+-----+------
  2 |   9 | a
  4 |  11 | b
  6 |   8 | c

 Group  (cost=264.68..264.75 rows=10 width=44)
   Group Key: x1.name, x1.id
   ->  Sort  (cost=264.68..264.70 rows=10 width=44)
         Sort Key: x1.name, x1.id
         ->  Merge Left Join  (cost=153.14..264.51 rows=10 width=44)
               Merge Cond: (x1.name = x2.name)
               Join Filter: (x1.val < x2.val)
               Filter: (x2.id IS NULL)
               ->  Sort  (cost=76.57..79.32 rows=1100 width=44)
                     Sort Key: x1.name
                     ->  Seq Scan on test1 x1  (cost=0.00..21.00 rows=1100 width=44)
               ->  Sort  (cost=76.57..79.32 rows=1100 width=44)
                     Sort Key: x2.name
                     ->  Seq Scan on test1 x2  (cost=0.00..21.00 rows=1100 width=44)


There is another way that I found simpler (and faster in real life), which also returns only one row per group if there are duplicate values: DISTINCT ON!

SELECT DISTINCT ON (x1.name) *
FROM test1 x1
ORDER BY x1.name, x1.val DESC;

 id | val | name 
----+-----+------
  2 |   9 | a
  4 |  11 | b
  6 |   8 | c

 Unique  (cost=76.57..82.07 rows=200 width=44)
   ->  Sort  (cost=76.57..79.32 rows=1100 width=44)
         Sort Key: name, val
         ->  Seq Scan on test1 x1  (cost=0.00..21.00 rows=1100 width=44)

So in this case, we look for the highest value, distinct by name. Well, that's all for now.

2015-10-15

State of JSON in SQLite and NoSQL

More news for those who like the JSON format: there's a JSON1 extension for SQLite, which is great news for those who only need an embedded database for their app. Its functions are similar to PostgreSQL's JSON/JSONB functions. On the other hand, there are already a lot of BSON solutions in the NoSQL world, such as the infamous MongoDB (and TokuMX), RethinkDB, ArangoDB (SQL-like syntax), PouchDB (a Javascript version of CouchDB), SequoiaDB, DjonDB, etc. For more information about those databases, see the videos below.

SQLite 3 Tutorial

PostgreSQL 9.4's JSONB

What's new in MongoDB 3

Fractal Tree Index (TokuMX)

RethinkDB

ArangoDB
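
As a quick taste of the JSON1 functions mentioned above, here is a minimal sketch using mattn/go-sqlite3 (assuming your SQLite/go-sqlite3 build has the JSON1 extension enabled, which may require the json1 build tag depending on the version):

package main

import (
	"database/sql"
	"fmt"

	_ "github.com/mattn/go-sqlite3" // may need: go build -tags json1
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE docs (data TEXT)`); err != nil {
		panic(err)
	}
	if _, err := db.Exec(`INSERT INTO docs(data) VALUES ('{"name":"a","score":9}')`); err != nil {
		panic(err)
	}

	// json_extract is one of the JSON1 functions, similar in spirit to
	// PostgreSQL's ->> operator
	var name string
	if err := db.QueryRow(`SELECT json_extract(data, '$.name') FROM docs`).Scan(&name); err != nil {
		panic(err)
	}
	fmt.Println(name) // a
}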

EDIT: Hey, apparently there is also JSON data type support in MySQL 5.7.8 (2015-08-03, Release Candidate).

2015-08-14

Moving PostgreSQL Database to RAM

If you are a software developer, sometimes you need to test your program faster. Normally the bottleneck of your program is the database (writing to disk). You can increase the performance of your development/testing by moving the database to RAM, if you have enough free RAM (>3GB). In this article, I will guide you through moving your PostgreSQL database to RAM. First of all, you'll need to stop and disable PostgreSQL, so the port won't conflict:

sudo systemctl stop postgresql
sudo systemctl disable postgresql

As we already know, the /tmp folder on Linux mostly uses tmpfs, which is a RAM file system. So if we create the database in the /tmp directory, it lives in RAM. What you'll need to do is create a script containing something like this:

#!/usr/bin/env bash
sudo killall postgres
# init directories
src=/tmp/data
sudo rm -rf $src
mkdir -p $src
sudo chown postgres:postgres $src
sudo su - postgres <<EOF
initdb --locale en_CA.UTF-8 -E UTF8 -D '/tmp/data'
sed -i -- 's/max_connections = 100/max_connections = 1024/' /tmp/data/postgresql.conf
sed -i -- 's/#logging_collector = off/logging_collector = on/' /tmp/data/postgresql.conf
sed -i -- "s/#log_directory = 'pg_log'/log_directory = '\/tmp'/" /tmp/data/postgresql.conf
sed -i -- "s/#log_file_mode = 0600/log_file_mode = 0644/" /tmp/data/postgresql.conf
sed -i -- "s/#log_min_duration_statement = -1/log_min_duration_statement = 0/" /tmp/data/postgresql.conf
sed -i -- "s/#log_error_verbosity = default/log_error_verbosity = verbose/" /tmp/data/postgresql.conf
sed -i -- "s/#log_statement = 'none'/log_statement = 'all'/" /tmp/data/postgresql.conf
# sed -i -- "s///" /tmp/data/postgresql.conf
postgres -D /tmp/data &
echo sleep 2 seconds..
sleep 2
createuser xxx
createdb xxx
psql -c 'GRANT ALL PRIVILEGES ON DATABASE xxx TO xxx;'
echo you can restore database now..
EOF

This script will erase all your databases and create a new empty one in RAM, which you can restore into later. It will also create a log file that shows all queries, which can help during the software development process. Lastly, to measure your PostgreSQL data directory size in MB you can type this command:

$ sudo du -s -B 1M /tmp/data/
336     /tmp/data/

That's it. As a measurement: restoring a 137 MB database (about 300k records) on my PC normally took about 17 seconds; with the database moved to RAM, it only took 5 seconds, so yes, it's a huge improvement.

2014-11-04

How to start Revel, GORM with PostgreSQL project on Archlinux

Revel is one of many web application frameworks that use the Go programming language. After installing Archlinux, first of all you'll need to install yaourt, the easiest way to install AUR packages. After that you'll need to run this command:

yaourt -Sy --noconfirm go postgresql postgresql-libs liteide chromium google-chrome-stable

Then setup your GOPATH and GOROOT environment variables on your .bashrc

export GOROOT=/usr/lib/go
export GOPATH=$HOME/Dropbox/go
export PATH="$PATH:$GOPATH/bin"


Change the permission of this folder:

sudo chmod -Rv a+rwx /usr/share/liteide/liteenv

Then open LiteIDE, select menu View > Edit Environment, then add this line and save:

GOROOT=/usr/lib/go

Add also a GOPATH directory on menu View > Manage GOPATH...
That is ~/Dropbox/go on my computer.

To test whether Go and LiteIDE are working correctly, create a new .go file and press Build > FileRun to run it, for example:

package main
import "fmt"
func main() {
  fmt.Println("Hello my PC <3")
}

The code above should produce output in the Build output window.

To install revel, type this on shell:

go get -u -v github.com/revel/cmd/revel
go get -u -v github.com/revel/revel


To create a new project (for example, the name would be "puis"), type:

cd $GOPATH
revel new puis

Then you could start the app using this command:

revel run puis

And visit http://localhost:9000 to view your app.

Next, we should install PostgreSQL driver and GORM, type:

go get -u -v github.com/lib/pq
go get -u -v github.com/jinzhu/gorm

Then initialize and start your database server:

sudo su - postgres
initdb --locale en_US.UTF-8 -E UTF8 -D '/var/lib/postgres/data'
exit
sudo systemctl enable postgresql
sudo systemctl start postgresql
sudo su - postgres
createuser puis
createdb puis
psql 
GRANT ALL PRIVILEGES ON DATABASE puis TO puis;
\q
exit

Then create a model, for example in app/models/user.go

package models
import "time"
type User struct {
  Id                int64
  Name              string
  Gmail             string
  PuMail            string
  Password          string
  CreatedAt         time.Time
  UpdatedAt         time.Time
  DeletedAt         time.Time
  LastLoginAt       time.Time
  PasswordUpdatedAt time.Time
}

And create a controller for GORM in app/controllers/gorm.go

package controllers
import (
  "database/sql"
  "github.com/jinzhu/gorm"
  _ "github.com/lib/pq"
  "github.com/revel/revel"
  "puis/app/models"
)
type GormController struct {
  *revel.Controller
  Tx *gorm.DB
}
var Db gorm.DB
func InitDB() {
  var err error
  Db, err = gorm.Open("postgres", "user=puis sslmode=disable")
  if err != nil {
    revel.ERROR.Println("FATAL", err)
    panic(err)
  }
  tab := &models.User{}
  Db.AutoMigrate(tab)
  Db.Model(tab).AddUniqueIndex("idx_user__gmail", "gmail")
  Db.Model(tab).AddUniqueIndex("idx_user__pu_mail", "pu_mail")
}
func (c *GormController) Begin() revel.Result {
  txn := Db.Begin()
  if txn.Error != nil {
    panic(txn.Error)
  }
  c.Tx = txn
  revel.INFO.Println("c.Tx init", c.Tx)
  return nil
}
func (c *GormController) Commit() revel.Result {
  if c.Tx == nil {
    return nil
  }
  c.Tx.Commit()
  if err := c.Tx.Error; err != nil && err != sql.ErrTxDone {
    panic(err)
  }
  c.Tx = nil
  revel.INFO.Println("c.Tx commited (nil)")
  return nil
}

func (c *GormController) Rollback() revel.Result {
  if c.Tx == nil {
    return nil
  }
  c.Tx.Rollback()
  if err := c.Tx.Error; err != nil && err != sql.ErrTxDone {
    panic(err)
  }
  c.Tx = nil
  return nil
}

Also, to call those functions when the application starts, and before and after each controller call, add these lines in app/init.go inside func init():

revel.OnAppStart(controllers.InitDB)
revel.InterceptMethod((*controllers.GormController).Begin, revel.BEFORE)
revel.InterceptMethod((*controllers.GormController).Commit, revel.AFTER)
revel.InterceptMethod((*controllers.GormController).Rollback, revel.FINALLY)

Two more things. Add some testing code, for example in app/controllers/app.go in func Index():

user := &models.User{Name: "Kiswono Prayogo"}
c.Tx.NewRecord(user)
c.Tx.Create(user)
return c.Render(user) 

Last thing, just add one more line anywhere in Index.html:

{{.user.Name}}

That's it, this is how to connect Revel to PostgreSQL using GORM. This tutorial was adapted from Ivan Black's StackOverflow answer.

To solve slow Revel hot-reload or Go build/install, just use this command:

go get -u -v github.com/mattn/go-sqlite3