Each package in this repository is named with a single letter and contains commonly used functions for that domain:
- A - Array
- B - Boolean
- C - Character (or Rune)
- D - Database
- F - Floating Point
- L - Logging
- M - Map
- I - Integer
- S - String
- T - Time (and Date)
- W - Web (the "framework") -- deprecated, use W2 instead
- X - Anything (aka interface{})
- Z - Z-Template engine, with syntax similar to Ruby string interpolation #{foo}, plus additional JavaScript-friendly variants: {/* foo */}, [/* bar */], /*! bar */
go get -u -v github.com/kokizzu/gotro
To start a new project, copy the W/example-simplified directory to your $GOPATH/src; that copy is the base of your project and should contain something like this:
├── public
│ └── lib
│ └── jquery.js
├── start_dev.sh
├── server.go
└── views
├── error.html
├── layout.html
├── login_example.html
├── named_params_example.html
├── post_values_example.html
└── query_string_example.html
Run ./start_dev.sh; you should see output similar to:
set ownership of $GOROOT..
remove $GOPATH/pkg if go upgraded/downgraded..
precompile all dependencies..
hello1
starting gin..
[gin] listening on port 3000
2017-05-26 10:55:59.835 StartServer ▶ Gotro Example [DEVELOPMENT] server with 6 route(s) on :3001
Work Directory: /home/asd/go/src/hello1/
If you get an error, it's probably because you haven't installed Redis, which this example uses to store sessions. To install it on Ubuntu, you can type:
sudo apt-get install redis-server
sudo systemctl enable redis-server
sudo systemctl start redis-server
To see the example, open your browser at http://localhost:3000
Port 3000 is the proxy port for the gin tool, which auto-recompiles whenever the source code changes; the server itself listens on port 3001. If you change the port in the source code, you must also change the gin target port in the start_dev.sh file, by replacing -a 3001 and -p 3000.
Next, let's look at the example in the server.go file:
redis_conn := Rd.NewRedisSession(``, ``, 9, `session::`)
global_conn := Rd.NewRedisSession(``, ``, 10, `session::`)
W.InitSession(`Aaa`, 2*24*time.Hour, 1*24*time.Hour, *redis_conn, *global_conn)
W.Mailers = map[string]*W.SmtpConfig{
``: {
Name: `Mailer Daemon`,
Username: `test.test`,
Password: `123456`,
Hostname: `smtp.gmail.com`,
Port: 587,
},
}
W.Assets = ASSETS
W.Webmasters = WEBMASTER_EMAILS
W.Routes = ROUTERS
W.Filters = []W.Action{AuthFilter}
// web engine
server := W.NewEngine(DEBUG_MODE, false, PROJECT_NAME+VERSION, ROOT_DIR)
server.StartServer(LISTEN_ADDR)
There are two Redis connections: one for storing the local session and one for storing the global session (used for cross-app communication).
You must call W.InitSession to tell the framework the cookie name, the default expiration (how long until a cookie expires), and how often it should be renewed. On the next line we set the mailer W.Mailers, the connection used to send email when there is a panic or any other critical error inside your web server.
W.Assets lists the asset files: any CSS or JavaScript that should be included on every page. The assets should be saved in the public/css/ or public/js/ directory. This is an example of how to fill them:
var ASSETS = [][2]string{
//// http://api.jquery.com/ 1.11.1
{`js`, `jquery`},
////// http://hayageek.com/docs/jquery-upload-file.php
{`css`, `uploadfile`},
{`js`, `jquery.form`},
{`js`, `jquery.uploadfile`},
//// https://vuejs.org/v2/guide/ 2.0
{`js`, `vue`},
//// http://momentjs.com/ 2.17.1
{`js`, `moment`},
//// github.com/kokizzu/semantic-ui-daterangepicker
{`css`, `daterangepicker`},
{`js`, `daterangepicker`},
//// http://semantic-ui.com 2.2 // should be below `js` and `css` items
{`/css`, `semantic/semantic`},
{`/js`, `semantic/semantic`},
//// global, helpers, project specific
{`/css`, `global`},
{`/js`, `global`},
}
If the file type starts with a slash, the file is located by absolute path starting from public/. Currently only js and css files are supported.
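The lookup convention above can be sketched as follows. This is an assumption for illustration, not the framework's actual resolver; the function name assetPath is hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// assetPath sketches the asset lookup convention described above:
// a type without a slash maps to public/<type>/<name>.<type>,
// a type with a leading slash maps to public/<name>.<type>.
func assetPath(kind, name string) string {
	ext := strings.TrimPrefix(kind, `/`)
	if strings.HasPrefix(kind, `/`) {
		// absolute: the file sits directly under public/
		return `public/` + name + `.` + ext
	}
	// relative: the file sits under public/<ext>/
	return `public/` + ext + `/` + name + `.` + ext
}

func main() {
	fmt.Println(assetPath(`js`, `jquery`))              // public/js/jquery.js
	fmt.Println(assetPath(`/css`, `semantic/semantic`)) // public/semantic/semantic.css
}
```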
Next we must set W.Webmasters, the hardcoded superadmin list. These addresses receive the error emails, and ctx.IsWebMaster() reports whether ctx.Session.GetStr(`email`) matches one of them.
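Conceptually, that check boils down to a membership test; a minimal sketch (the function and variable names here are illustrative, not the framework's actual code):

```go
package main

import "fmt"

// isWebMaster sketches how ctx.IsWebMaster() could compare the
// session's email against the configured W.Webmasters list.
func isWebMaster(sessionEmail string, webmasters []string) bool {
	for _, email := range webmasters {
		if email == sessionEmail {
			return true
		}
	}
	return false
}

func main() {
	masters := []string{`admin@example.com`}
	fmt.Println(isWebMaster(`admin@example.com`, masters)) // true
	fmt.Println(isWebMaster(`guest@example.com`, masters)) // false
}
```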
In the next initialization phase, you must set the routes W.Routes, which map a URL path to a handler function, for example:
var ROUTERS = map[string]W.Action{
``: LoginExample,
`login_example`: LoginExample,
`post_values_example`: PostValuesExample,
`named_params_example/:test1`: NamedParamsExample,
`query_string_example`: QueryStringExample,
}
In this example there are five routes and four different handler functions (you can put them in one package; normally you separate them into different packages based on access level). In the fourth route we capture :test1 as a string; it can be anything and can be retrieved by calling ctx.ParamStr(`test1`). Here's an example of separating handlers based on the first segment:
`accounting/acct_payments`: fAccounting.AcctPayments,
`accounting/acct_invoices`: fAccounting.AcctInvoices,
`employee/attendance_list`: fEmployee.AttendanceList,
`employee/business_trip`: fEmployee.BusinessTrip,
`human_resource/business_trip`: fHumanResource.BusinessTrip,
`human_resource/employee/profile/:id`: fHumanResource.EmployeeProfileEdit,
`human_resource/employees`: fHumanResource.Employees,
A handler function should have exactly one parameter with type *W.Context, for example:
func PostValuesExample(ctx *W.Context) {
if ctx.IsAjax() {
ajax := AjaxResponse()
value := ctx.Posts().GetStr(`test2`)
ajax.Set(`test3`, value)
ctx.AppendJson(ajax.SX)
return
}
ctx.Render(`view1`, M.SX{ // <-- locals of the view
`title`: `Post example`,
`map`: M.SI{`test1`:1,`test4`:4},
`arr`: []int{1,2,3,4},
})
}
In the function above, we check whether the request was sent via AJAX (ctx.IsAjax()); if so, we assume it was sent by something like this jQuery snippet:
var data = {test2: 'foo'};
$.post('', data, function(res) {
alert("Value: " + res.test3);
}).fail(function(xhr, textStatus, errorThrown ) {
alert(textStatus + '\n' + xhr.status);
});
In the JavaScript snippet above, we send an AJAX HTTP POST request to the current page, with a value foo under the key test2. The server captures it and sends it back to the client as an object under the key test3; note that anything you put in the response will be converted to JSON. The JavaScript retrieves that value through the callback (third line of the snippet).
But if the client's request did not come via AJAX, the server calls ctx.Render, which loads the file view1.html from the views/ directory. If you need to pass anything to that view, put it in an M.SX, a map with string keys and values of any type; note that everything you put in this map can be rendered as JSON. But what's the syntax? This template engine is called the Z-Template engine, designed for simplicity and compatibility with JavaScript syntax: unlike other template engines, its syntax does not interfere with a JavaScript IDE's autocomplete feature. Here's an example rendering the values above:
<h1>#{title}</h1>
<h2>#{something that not exists}</h2>
<script>
var title = '#{title}'; // 'Post example'
var a_map = {/* map */}; // {"test1":1,"test4":4}
var an_arr = [/* arr */]; // [1,2,3,4]
</script>
Unlike most template engines, any value passed to the Render method that is not used will produce a warning, and any key used in the template but not provided to Render will render the key itself (e.g. `something that not exists`).
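The #{} substitution and the missing-key behavior can be sketched in plain Go. This is a toy illustration only; the real engine lives in gotro/Z and also handles the JavaScript-friendly syntaxes and the unused-locals warning:

```go
package main

import (
	"fmt"
	"regexp"
)

// interpolate replaces #{key} with the matching local value;
// unknown keys render as the key itself, mirroring the behavior above.
func interpolate(tpl string, locals map[string]string) string {
	re := regexp.MustCompile(`#\{([^}]+)\}`)
	return re.ReplaceAllStringFunc(tpl, func(m string) string {
		key := re.FindStringSubmatch(m)[1]
		if v, ok := locals[key]; ok {
			return v
		}
		return key // missing key: render the key itself
	})
}

func main() {
	fmt.Println(interpolate(`<h1>#{title}</h1>`, map[string]string{`title`: `Post example`}))
	fmt.Println(interpolate(`#{something that not exists}`, nil))
}
```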
Wait, in PHP you can retrieve a query parameter using the $_GET variable; how do you do that in this framework?
// this is Go
ctx.QueryParams().GetInt(`theKey`) // equal to $_GET['theKey']
Now back to the handler function. The ctx parameter can be used to control the output. Normally when you call the Render method it wraps the rendered view with views/layout.html; if you don't want that, you can call:
ctx.NoLayout = true
ctx.Buffer.Reset() // to clear rendered things if you already call Render method
ctx.Title = `something` // to set the title, if you use the layout
The layout view has some provided values (locals): title, project_name, assets (the JS and CSS you list in the assets), is_superadmin (whether the currently logged-in person is a webmaster), and debug_mode (always true unless you set the VERSION variable at compile time).
To see the other methods and properties available, Ctrl-click the W.Context type in your IDE (Gogland, Wide, Visual Studio Code, etc.).
Now, how do we connect to the database? First install the database server, for example PostgreSQL 9.6 on Ubuntu:
sudo apt-get install postgresql
sudo systemctl enable postgresql
hba=/etc/postgresql/9.6/main/pg_hba.conf
sudo sed -i 's|local all all peer|local all all trust|g' $hba
sudo sed -i 's|host all all 127.0.0.1/32 md5|host all all 127.0.0.1/32 trust|g' $hba
sudo sed -i 's|host all all ::1/128 md5|host all all ::1/128 trust|g' $hba
echo 'local all test1 trust' | sudo tee -a $hba # if needed
sudo systemctl start postgresql
sudo su - postgres <<EOF
createuser test1
createdb test1
psql -c 'GRANT ALL PRIVILEGES ON DATABASE test1 TO test1;'
EOF
After verifying that your database was created correctly, create a directory, for example model/, and a file inside it, for example conn.go, with this content:
package model
import (
"github.com/kokizzu/gotro/D/Pg"
_ "github.com/lib/pq"
)
var PG_W, PG_R *Pg.RDBMS
func init() {
PG_W = Pg.NewConn(`test1`, `test1`)
// ^ later when scaling we replace this one
PG_R = Pg.NewConn(`test1`, `test1`)
}
In the code above we create two connections, a writer and a reader; this is the recommended way to scale reads across multiple servers (if you need a faster writer and don't need joins, you can use ScyllaDB or Redis). Next, we create a program that initializes our tables, for example in go/init.go:
package main
import "hello1/model"
func main() {
model.PG_W.CreateBaseTable(`users`, `users`)
model.PG_W.CreateBaseTable(`todos`, `users`) // 2nd table
}
You must execute gotro/D/Pg/functions.sql using psql before running the code above. It creates the two tables along with two log tables, triggers, and some indexes; you can check them inside psql -U test1 using the \dt+ or \d users command, which shows something like this:
            Table "public.users"
   Column    |           Type           |                     Modifiers
-------------+--------------------------+----------------------------------------------------
 id          | bigint                   | not null default nextval('users_id_seq'::regclass)
 unique_id   | character varying(4096)  |
 created_at  | timestamp with time zone | default now()
 updated_at  | timestamp with time zone |
 deleted_at  | timestamp with time zone |
 restored_at | timestamp with time zone |
 modified_at | timestamp with time zone | default now()
 created_by  | bigint                   |
 updated_by  | bigint                   |
 deleted_by  | bigint                   |
 restored_by | bigint                   |
 is_deleted  | boolean                  | default false
 data        | jsonb                    |
This is our generic table. What if we need more columns? You don't need to alter the table: we use PostgreSQL's JSONB data column. JSONB is very powerful; it can be indexed and queried using the arrow operators, which gives it an edge over its competitors. Using this exact table design, we can also store the old and new values in the log table every time somebody changes a value.
Ok, now let's create a real model for the users table. Create a package and file mUsers/m_users.go with this content:
package mUsers
import (
"Billions/sql"
"github.com/kokizzu/gotro/A"
"github.com/kokizzu/gotro/D/Pg"
"github.com/kokizzu/gotro/I"
"github.com/kokizzu/gotro/M"
"github.com/kokizzu/gotro/S"
"github.com/kokizzu/gotro/T"
"github.com/kokizzu/gotro/W"
)
const TABLE = `users`
var TM_MASTER Pg.TableModel
var SELECT = ``
var Z func(string) string
var ZZ func(string) string
var ZJ func(string) string
var ZB func(bool) string
var ZI func(int64) string
var ZLIKE func(string) string
var ZT func(...string) string
var PG_W, PG_R *Pg.RDBMS
func init() {
Z = S.Z
ZB = S.ZB
ZZ = S.ZZ
ZJ = S.ZJ
ZI = S.ZI
ZLIKE = S.ZLIKE
ZT = S.ZT
PG_W = sql.PG_W
PG_R = sql.PG_R
TM_MASTER = Pg.TableModel{
CacheName: TABLE + `_USERS_MASTER`,
Fields: []Pg.FieldModel{
{Key: `id`},
{Key: `is_deleted`},
{Key: `modified_at`},
{Label: `E-Mail(s)`, Key: `emails`, CustomQuery: `emails_join(data)`, Type: `emails`, FormTooltip: `separate with comma`},
{Label: `Phone`, Key: `phone`, Type: `phone`, FormHide: true},
{Label: `Full Name`, Key: `full_name`},
},
}
SELECT = TM_MASTER.Select()
}
func One_ByID(id string) M.SX {
ram_key := ZT(id)
query := ram_key + `
SELECT ` + SELECT + `
FROM ` + TABLE + ` x1
WHERE x1.id::TEXT = ` + Z(id)
return PG_R.CQFirstMap(TABLE, ram_key, query)
}
func Search_ByQueryParams(qp *Pg.QueryParams) {
qp.RamKey = ZT(qp.Term)
if qp.Term != `` {
qp.Where += ` AND (x1.data->>'name') LIKE ` + ZLIKE(qp.Term)
}
qp.From = `FROM ` + TABLE + ` x1`
qp.OrderBy = `x1.id`
qp.Select = SELECT
qp.SearchQuery_ByConn(PG_W)
}
/* accessed through: {"order":["-col1","+col2"],"filter":{"is_deleted":false,"created_at":">isodate"},"limit":10,"offset":5}
this will retrieve record 6-15 order by col1 descending, col2 ascending, filtered by is_deleted=false and created_at > isodate
*/
If the example above is too complex for you, you can also do it manually; see gotro/D/Pg/_example for a simpler example. In the example above we create a query model that queries a single table. If you need multiple tables (a join), you can extend the fields, something like this:
{Label: `Admin`, Key: `admin`, CustomQuery: `x2.data->>'full_name'`},
And the query params something like this:
qp.From = `FROM ` + TABLE + ` x1
LEFT JOIN ` + mAdmin.TABLE + ` x2
ON (x1.data->>'admin_id') = x2.id::TEXT
`
You can also do something like this:
func All_ByStartID_ByLimit_IsAsc_IsIncl(id string, limit int64, is_asc, is_incl bool) A.MSX {
sign := S.IfElse(is_asc, `>`, `<`) + S.If(is_incl, `=`)
ram_key := ZT(id, I.ToS(limit), sign)
where := ``
if id != `` {
where = `AND x1.id ` + sign + Z(id)
}
query := ram_key + `
SELECT ` + SELECT + `
FROM ` + TABLE + ` x1
WHERE x1.is_deleted = false
` + where + `
ORDER BY x1.id ` + S.If(!is_asc, `DESC`) + `
LIMIT ` + I.ToS(limit)
return PG_R.CQMapArray(TABLE, ram_key, query)
}
// accessed through: {"limit":10}
// this will retrieve last 10 records
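The S.IfElse and S.If helpers used above behave like string ternaries; here is a local sketch of their observed behavior (the real helpers live in github.com/kokizzu/gotro/S):

```go
package main

import "fmt"

// ifElse returns `then` when cond is true, otherwise `otherwise`
// (stand-in for S.IfElse).
func ifElse(cond bool, then, otherwise string) string {
	if cond {
		return then
	}
	return otherwise
}

// onlyIf returns s when cond is true, otherwise an empty string
// (stand-in for S.If).
func onlyIf(cond bool, s string) string {
	if cond {
		return s
	}
	return ``
}

func main() {
	// is_asc=false, is_incl=true, as in the handler above:
	sign := ifElse(false, `>`, `<`) + onlyIf(true, `=`)
	fmt.Println(sign) // "<="
}
```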
Or query a single row:
func API_Backoffice_Form(rm *W.RequestModel) {
rm.Ajax.SX = One_ByID(rm.Id)
}
// accessed through: {a:'form',id:'123'}
// this will retrieve all columns of this record
Or create a save/delete/restore function:
func API_Backoffice_SaveDeleteRestore(rm *W.RequestModel) {
PG_W.DoTransaction(func(tx *Pg.Tx) string {
dm := Pg.NewRow(tx, TABLE, rm) // NewPostlessData
emails := rm.Posts.GetStr(`emails`)
// rm is the requestModel, values provided by http req
dm.Set_UserEmails(emails)
// dm is the dataModel, row we want to update
// we can call dm.Get* to retrieve old record values
dm.SetStr(`full_name`)
dm.UpsertRow()
if !rm.Ajax.HasError() {
dm.WipeUnwipe(rm.Action)
}
return rm.Ajax.LastError()
})
}
// accessed through: {a:'save',full_name:'foo',id:'1'} // update
// if without id, it would insert
Then you can call them on a handler or package-internal function, something like:
func API_Backoffice_FormLimit(rm *W.RequestModel) {
id := rm.Posts.GetStr(`id`)
limit := rm.Posts.GetInt(`limit`)
is_asc := rm.Posts.GetBool(`asc`)
is_incl := rm.Posts.GetBool(`incl`)
result := All_ByStartID_ByLimit_IsAsc_IsIncl(id, limit, is_asc, is_incl)
rm.Ajax.Set(`result`, result)
}
func API_Backoffice_Search(rm *W.RequestModel) {
qp := Pg.NewQueryParams(rm.Posts, &TM_MASTER)
Search_ByQueryParams(qp)
qp.ToMap(rm.Ajax)
}
And call those two APIs function inside a handler something like this:
func PrepareVars(ctx *W.Context, title string) (rm *W.RequestModel) {
user_id := ctx.Session.GetStr(`id`)
rm = &W.RequestModel{
Actor: user_id,
DbActor: user_id,
Level: ctx.Session.SX,
Ctx: ctx,
}
ctx.Title = title
is_ajax := ctx.IsAjax()
if is_ajax {
rm.Ajax = NewAjaxResponse()
}
page := rm.Level.GetMSB(`page`)
first_segment := ctx.FirstPath()
_, _ = page, first_segment // used by the access check sketched below
// validate if this user may access this first segment
// check their access level, if it's not ok, set rm.Ok to false
// then render an error, something like this:
/*
if is_ajax {
rm.Ajax.Error(sql.ERR_403_MUST_LOGIN_HIGHER)
ctx.AppendJson(rm.Ajax.SX)
return
}
ctx.Error(403, sql.ERR_403_MUST_LOGIN_HIGHER)
return
*/
if !is_ajax {
// render menu based on privilege
} else {
// prepare variables required for ajax response
rm.Posts = ctx.Posts()
rm.Action = rm.Posts.GetStr(`a`)
id := rm.Posts.GetStr(`id`)
rm.Id = S.IfElse(id == `0`, ``, id)
}
return rm
}
func Users(ctx *W.Context) {
rm := PrepareVars(ctx, `Users`)
if !rm.Ok {
return
}
if rm.IsAjax() {
// handle ajax
switch rm.Action {
case `search`: // @API
mUsers.API_Backoffice_Search(rm)
case `form_limit`: // @API
mUsers.API_Backoffice_FormLimit(rm)
case `form`: // @API
mUsers.API_Backoffice_Form(rm)
case `save`, `delete`, `restore`: // @API
mUsers.API_Backoffice_SaveDeleteRestore(rm)
default: // @API-END
handler.ErrorHandler(rm.Ajax, rm.Action)
}
ctx.AppendJson(rm.Ajax.SX)
return
}
locals := W.Ajax{SX: M.SX{
`title`: ctx.Title,
}}
qp := Pg.NewQueryParams(nil, &mUsers.TM_MASTER)
mUsers.Search_ByQueryParams(qp)
qp.ToMap(locals)
ctx.Render(`backoffice/users`, locals.SX)
}
Now that we're done creating the backend API server, all that's left is to create the systemd service hello1.service:
[Unit]
Description=My Hello1 Service
After=network-online.target postgresql.service
Wants=network-online.target systemd-networkd-wait-online.service
[Service]
Type=simple
Restart=on-failure
User=yourusername
Group=users
WorkingDirectory=/home/yourusername/web
ExecStart=/home/yourusername/web/run_production.sh
ExecStop=/usr/bin/killall Hello1
LimitNOFILE=2097152
LimitNPROC=65536
ProtectSystem=full
NoNewPrivileges=true
[Install]
WantedBy=multi-user.target
Create the run_production.sh shell script:
#!/usr/bin/env bash
ofile=logs/access_`date +%F_%H%M%S`.log
echo Logging into: `pwd`/$ofile
unbuffer time ./Hello1 | tee $ofile
Then compile the binary (you can also set the VERSION here, to make it a production build):
go build -ldflags "
-X main.LISTEN_ADDR=:${SUB_PORT}
" -o /tmp/Hello1
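For the -X main.LISTEN_ADDR flag to take effect, LISTEN_ADDR must be a package-level string variable in package main; a minimal sketch:

```go
package main

import "fmt"

// Overridable at build time with:
//   go build -ldflags "-X main.LISTEN_ADDR=:9000"
var LISTEN_ADDR = `:3001` // development default

func main() {
	fmt.Println(`listening on ` + LISTEN_ADDR)
}
```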
Copy the binary, the script above, and the whole public/ and views/ directories to /home/yourusername/web on the server, copy the service file to /usr/lib/systemd/system/, then reload systemd on the server:
sudo systemctl daemon-reload
sudo systemctl enable hello1
sudo systemctl start hello1
You're good to go; you can check the service status using journalctl -f -u hello1.