少年阿宾

Those years of youth


August 18, 2016 #

MySQL Trigger Examples

I never quite understood the subtleties of stored procedures. Recently I ran into a cross-database task where several tables had to be kept in sync under CURD operations (Create, Update, Read, Delete). How can tedious CURD synchronization be made easier? Most people's first thought is MySQL stored procedures or triggers, and that instinct is right. So I wrote working CUD (create, update, delete) trigger examples that keep multiple tables in sync.


Definition: what is a MySQL trigger?

In MySQL Server, a trigger is a piece of code that runs automatically when a given operation (INSERT, UPDATE, DELETE, etc.) on a particular table fires its condition. In that sense, a trigger is a special kind of stored procedure. The examples below walk through how triggers work.

I. Create the example tables:

In MySQL's default test database, create two tables, t_a and t_b:



    /*Table structure for table `t_a` */
    DROP TABLE IF EXISTS `t_a`;
    CREATE TABLE `t_a` (
      `id` smallint(1) unsigned NOT NULL AUTO_INCREMENT,
      `username` varchar(20) DEFAULT NULL,
      `groupid` mediumint(8) unsigned NOT NULL DEFAULT '0',
      PRIMARY KEY (`id`)
    ) ENGINE=MyISAM AUTO_INCREMENT=16 DEFAULT CHARSET=latin1;
     
    /*Data for the table `t_a` */
    LOCK TABLES `t_a` WRITE;
    UNLOCK TABLES;
     
    /*Table structure for table `t_b` */
    DROP TABLE IF EXISTS `t_b`;
    CREATE TABLE `t_b` (
      `id` smallint(1) unsigned NOT NULL AUTO_INCREMENT,
      `username` varchar(20) DEFAULT NULL,
      `groupid` mediumint(8) unsigned NOT NULL DEFAULT '0',
      PRIMARY KEY (`id`)
    ) ENGINE=MyISAM AUTO_INCREMENT=57 DEFAULT CHARSET=latin1;
     
    /*Data for the table `t_b` */
    LOCK TABLES `t_b` WRITE;
    UNLOCK TABLES;

Create three CUD (insert, update, delete) triggers on t_a so that t_a's data is synchronized to t_b on every CUD operation. Note that a table can have one and only one trigger per event type; for the reasons, see the MySQL reference manual.

II. Create the triggers:

Create the tr_a_insert, tr_a_update and tr_a_delete triggers on table t_a, following the steps below in order.

1. Create the INSERT trigger tr_a_insert:



    DELIMITER $$
     
    USE `test`$$
     
    -- Drop the tr_a_insert trigger if it already exists
    DROP TRIGGER /*!50032 IF EXISTS */ `tr_a_insert`$$
    -- Now create the trigger;
    -- it fires after a successful INSERT on t_a
    CREATE
        /*!50017 DEFINER = 'root'@'localhost' */
        TRIGGER `tr_a_insert` AFTER INSERT ON `t_a`
        FOR EACH ROW BEGIN
            -- When fired, insert the same row into t_b
            INSERT INTO `t_b` SET username = NEW.username, groupid = NEW.groupid;
        END;
    $$
     
    DELIMITER ;
2. Create the UPDATE trigger tr_a_update:


    DELIMITER $$
     
    USE `test`$$
    -- Drop the tr_a_update trigger if it already exists
    DROP TRIGGER /*!50032 IF EXISTS */ `tr_a_update`$$
    -- Now create the trigger;
    -- it fires after a successful UPDATE on t_a
    CREATE
        /*!50017 DEFINER = 'root'@'localhost' */
        TRIGGER `tr_a_update` AFTER UPDATE ON `t_a`
        FOR EACH ROW BEGIN
        -- When groupid or username changed on t_a, apply the same update to the matching row in t_b
          IF NEW.groupid != OLD.groupid OR OLD.username != NEW.username THEN
            UPDATE `t_b` SET groupid=NEW.groupid, username=NEW.username WHERE username=OLD.username AND groupid=OLD.groupid;
          END IF;
        END;
    $$
     
    DELIMITER ;
3. Create the DELETE trigger tr_a_delete:


    DELIMITER $$
     
    USE `test`$$
    -- Drop the tr_a_delete trigger if it already exists
    DROP TRIGGER /*!50032 IF EXISTS */ `tr_a_delete`$$
    -- Now create the trigger;
    -- it fires after a successful DELETE on t_a
    CREATE
        /*!50017 DEFINER = 'root'@'localhost' */
        TRIGGER `tr_a_delete` AFTER DELETE ON `t_a`
        FOR EACH ROW BEGIN
            -- After a row is deleted from t_a, delete the matching row from t_b as well
            DELETE FROM `t_b` WHERE username=OLD.username AND groupid=OLD.groupid;
        END;
    $$
     
    DELIMITER ;

III. Test the triggers:

Test each of the three CUD triggers that keep t_a and t_b in sync.

1. Test the tr_a_insert trigger:

Insert a row into t_a, then query t_a and t_b to check whether the data is in sync. The trigger passes the test if, whenever one or more rows are inserted into t_a, the same rows appear in t_b without any explicit INSERT being run against t_b.

Run the following test:



    -- Insert a row into t_a
    INSERT INTO `t_a` (username,groupid) VALUES ('sky54.net',123);
   
    -- Query t_a
    SELECT id,username,groupid FROM `t_a`;
   
    -- Query t_b
    SELECT id,username,groupid FROM `t_b`;

2. Test the tr_a_update and tr_a_delete triggers:

The principle and steps for these two triggers are the same as for tr_a_insert: first update or delete a row, then check how the data in t_a and t_b changed. If the change is mirrored, the trigger works; otherwise, track down the cause step by step, as sketched below.
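A sketch of such a test, reusing the row inserted in the previous step (the new groupid value 456 is arbitrary):

    -- Change the row in t_a; tr_a_update should propagate the change to t_b
    UPDATE `t_a` SET groupid=456 WHERE username='sky54.net';
    SELECT id,username,groupid FROM `t_b`;
   
    -- Delete the row from t_a; tr_a_delete should remove the matching row from t_b
    DELETE FROM `t_a` WHERE username='sky54.net';
    SELECT id,username,groupid FROM `t_b`;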

Everything has its strengths and weaknesses, and MySQL triggers are no exception. Leaving their strengths aside, the main weakness is that MySQL Trigger lacks a good debugging and management environment and is hard to test across system environments; testing is harder than for MySQL stored procedures. So in production environments, it is advisable to use stored procedures instead of MySQL triggers where possible.

One final reminder before closing: triggers require MySQL 5.0 or later; MySQL installations older than 5.0 must be upgraded before triggers can be used!








http://blog.csdn.net/hireboy/article/details/18079183



posted @ 2016-08-18 17:25 abin

June 14, 2016 #

Abstract: When developing high-concurrency systems, there are three weapons for protecting the system: caching, degradation, and rate limiting. Caching raises access speed and increases the capacity the system can handle, the silver bullet against high-concurrency traffic; degradation temporarily shuts off a service when it misbehaves or hurts core-path performance, to be switched back on after the peak or once the problem is fixed; and some scenarios cannot be handled by caching or degradation, such as scarce resources (flash sales, panic buying), write services (comments, order placement), and frequent complex queries (the last pages of a comment list), so a mechanism is needed to cap the concurrency/request volume in those scenarios, namely rate lim…
posted @ 2016-06-14 13:38 abin

May 13, 2016 #

Install the Command Line Client

If you prefer a command line client, you can install it on your Linux with the following commands.

Debian

sudo apt-get install python-pip
sudo pip install shadowsocks

Ubuntu

You can use the above commands to install the shadowsocks client on Ubuntu, but they will install it under the ~/.local/bin/ directory, which causes loads of trouble. So I suggest using su to become root first and then issuing the following two commands.

apt-get install python-pip
pip install shadowsocks

Fedora/Centos

sudo yum install python-setuptools   or   sudo dnf install python-setuptools
sudo easy_install pip
sudo pip install shadowsocks

OpenSUSE

sudo zypper install python-pip
sudo pip install shadowsocks

Archlinux

sudo pacman -S python-pip
sudo pip install shadowsocks

As you can see, the command for installing the shadowsocks client is the same as the command for installing the shadowsocks server, because the above command installs both the client and the server. You can verify this by looking at the installation script output:

Downloading/unpacking shadowsocks
Downloading shadowsocks-2.8.2.tar.gz
Running setup.py (path:/tmp/pip-build-PQIgUg/shadowsocks/setup.py) egg_info for package shadowsocks

Installing collected packages: shadowsocks
Running setup.py install for shadowsocks

Installing sslocal script to /usr/local/bin
Installing ssserver script to /usr/local/bin
Successfully installed shadowsocks
Cleaning up...

sslocal is the client software and ssserver is the server software. On some Linux distros such as Ubuntu, the shadowsocks client sslocal is installed under /usr/local/bin. On others, such as Arch, sslocal is installed under /usr/bin/. You can use the whereis command to find the exact location.

user@debian:~$ whereis sslocal
sslocal: /usr/local/bin/sslocal

Create a Configuration File

We will create a configuration file under /etc/:

sudo vi /etc/shadowsocks.json

Put the following text in the file. Replace server-ip with your actual IP and set a password.

{
"server":"server-ip",
"server_port":8000,
"local_address": "127.0.0.1",
"local_port":1080,
"password":"your-password",
"timeout":600,
"method":"aes-256-cfb"
}

Save and close the file. Next, start the client from the command line:

sslocal -c /etc/shadowsocks.json

To run it in the background:

sudo sslocal -c /etc/shadowsocks.json -d start

Auto Start the Client on System Boot

Edit /etc/rc.local file

sudo vi /etc/rc.local

Put the following line above the exit 0 line:

sudo sslocal -c /etc/shadowsocks.json -d start

Save and close the file. Next time you start your computer, shadowsocks client will automatically start and connect to your shadowsocks server.

Check if It Works

After you rebooted your computer, enter the following command in terminal:

sudo systemctl status rc-local.service

If your sslocal command works, then you will get this output:

● rc-local.service - /etc/rc.local Compatibility
Loaded: loaded (/etc/systemd/system/rc-local.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2015-11-27 03:19:25 CST; 2min 39s ago
Process: 881 ExecStart=/etc/rc.local start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/rc-local.service
├─ 887 watch -n 60 su matrix -c ibam
└─1112 /usr/bin/python /usr/local/bin/sslocal -c /etc/shadowsocks....

As you can see from the last line, the sslocal command created a process whose pid is 1112 on my machine. This means the shadowsocks client is running smoothly. And of course you can tell your browser to connect through your shadowsocks client to see if everything goes well.

If for some reason your /etc/rc.local script won’t run, then check the following post to find the solution.

How to enable /etc/rc.local with Systemd
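Alternatively, a dedicated systemd unit avoids rc.local altogether. A minimal sketch, assuming the /usr/local/bin install location shown above:

[Unit]
Description=Shadowsocks local client
After=network.target

[Service]
ExecStart=/usr/local/bin/sslocal -c /etc/shadowsocks.json
Restart=on-failure

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/shadowsocks-local.service (the unit name is an assumption), then run sudo systemctl enable shadowsocks-local to start it on boot.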




posted @ 2016-05-13 22:56 abin

April 27, 2016 #

Enough talk; straight to the code. I used to just call utilities others had written; now that I have some time I've put together my own. It covers:
1. HttpClient + HTTP + connection pool
2. HttpClient + HTTPS (one-way, without certificate validation) + connection pool

The HTTPS connector configuration in %TOMCAT_HOME%/conf/server.xml:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" 
     maxThreads="150" scheme="https" secure="true" 
     clientAuth="false" keystoreFile="D:/tomcat.keystore" 
     keystorePass="heikaim" sslProtocol="TLS"  executor="tomcatThreadPool"/> 
Here clientAuth="false" means client-certificate verification is disabled; the connection simply runs over plain HTTPS.
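For reference, the keystore named by keystoreFile can be generated with the JDK's keytool; a sketch, where the alias is arbitrary, the password matches keystorePass above, and keytool will prompt for the certificate's name fields:

keytool -genkey -alias tomcat -keyalg RSA -keystore D:/tomcat.keystore -storepass heikaim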



package com.abin.lee.util;

import org.apache.commons.collections4.MapUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.http.*;
import org.apache.http.client.HttpRequestRetryHandler;
import org.apache.http.client.config.CookieSpecs;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.protocol.HttpClientContext;
import org.apache.http.config.Registry;
import org.apache.http.config.RegistryBuilder;
import org.apache.http.conn.ConnectTimeoutException;
import org.apache.http.conn.socket.ConnectionSocketFactory;
import org.apache.http.conn.socket.PlainConnectionSocketFactory;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.message.BasicHeader;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.protocol.HttpContext;
import org.apache.http.util.EntityUtils;

import javax.net.ssl.*;
import java.io.IOException;
import java.io.InterruptedIOException;
import java.net.UnknownHostException;
import java.nio.charset.Charset;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import java.util.*;

/**
* Created with IntelliJ IDEA.
* User: abin
* Date: 16-4-18
* Time: 10:24 AM
* To change this template use File | Settings | File Templates.
*/
public class HttpClientUtil {
private static CloseableHttpClient httpsClient = null;
private static CloseableHttpClient httpClient = null;

static {
httpClient = getHttpClient();
httpsClient = getHttpsClient();
}

public static CloseableHttpClient getHttpClient() {
try {
httpClient = HttpClients.custom()
.setConnectionManager(PoolManager.getHttpPoolInstance())
.setConnectionManagerShared(true)
.setDefaultRequestConfig(requestConfig())
.setRetryHandler(retryHandler())
.build();
} catch (Exception e) {
e.printStackTrace();
}
return httpClient;
}


public static CloseableHttpClient getHttpsClient() {
try {
//Secure Protocol implementation.
SSLContext ctx = SSLContext.getInstance("SSL");
//Implementation of a trust manager for X509 certificates
TrustManager x509TrustManager = new X509TrustManager() {
public void checkClientTrusted(X509Certificate[] xcs,
String string) throws CertificateException {
}
public void checkServerTrusted(X509Certificate[] xcs,
String string) throws CertificateException {
}
public X509Certificate[] getAcceptedIssuers() {
return null;
}
};
ctx.init(null, new TrustManager[]{x509TrustManager}, null);
//First set the global standard cookie policy
// RequestConfig requestConfig = RequestConfig.custom().setCookieSpec(CookieSpecs.STANDARD_STRICT).build();
ConnectionSocketFactory connectionSocketFactory = new SSLConnectionSocketFactory(ctx, hostnameVerifier);
Registry<ConnectionSocketFactory> socketFactoryRegistry = RegistryBuilder.<ConnectionSocketFactory>create()
.register("http", PlainConnectionSocketFactory.INSTANCE)
.register("https", connectionSocketFactory).build();
// set up the connection pool
httpsClient = HttpClients.custom()
.setConnectionManager(PoolsManager.getHttpsPoolInstance(socketFactoryRegistry))
.setConnectionManagerShared(true)
.setDefaultRequestConfig(requestConfig())
.setRetryHandler(retryHandler())
.build();
} catch (Exception e) {
e.printStackTrace();
}
return httpsClient;
}

// Configure the request timeout settings
// and set the global standard cookie policy first
public static RequestConfig requestConfig(){
RequestConfig requestConfig = RequestConfig.custom()
.setCookieSpec(CookieSpecs.STANDARD_STRICT)
.setConnectionRequestTimeout(20000)
.setConnectTimeout(20000)
.setSocketTimeout(20000)
.build();
return requestConfig;
}

public static HttpRequestRetryHandler retryHandler(){
//request retry handling
HttpRequestRetryHandler httpRequestRetryHandler = new HttpRequestRetryHandler() {
public boolean retryRequest(IOException exception,int executionCount, HttpContext context) {
if (executionCount >= 5) {// give up after five retries
return false;
}
if (exception instanceof NoHttpResponseException) {// the server dropped the connection, so retry
return true;
}
if (exception instanceof SSLHandshakeException) {// do not retry SSL handshake exceptions
return false;
}
if (exception instanceof InterruptedIOException) {// timeout
return false;
}
if (exception instanceof UnknownHostException) {// target host unreachable
return false;
}
if (exception instanceof ConnectTimeoutException) {// connect timed out
return false;
}
if (exception instanceof SSLException) {// SSL handshake failure
return false;
}

HttpClientContext clientContext = HttpClientContext.adapt(context);
HttpRequest request = clientContext.getRequest();
// if the request is idempotent (no enclosed entity), try again
if (!(request instanceof HttpEntityEnclosingRequest)) {
return true;
}
return false;
}
};
return httpRequestRetryHandler;
}



//Create a HostnameVerifier
//used to work around javax.net.ssl.SSLException: hostname in certificate didn't match: <123.125.97.66> != <123.125.97.241>
static HostnameVerifier hostnameVerifier = new NoopHostnameVerifier(){
@Override
public boolean verify(String s, SSLSession sslSession) {
return super.verify(s, sslSession);
}
};


public static class PoolManager {
public static PoolingHttpClientConnectionManager clientConnectionManager = null;
private static int maxTotal = 200;
private static int defaultMaxPerRoute = 100;

private PoolManager(){
clientConnectionManager.setMaxTotal(maxTotal);
clientConnectionManager.setDefaultMaxPerRoute(defaultMaxPerRoute);
}

private static class PoolManagerHolder{
public static PoolManager instance = new PoolManager();
}

public static PoolManager getInstance() {
if(null == clientConnectionManager)
clientConnectionManager = new PoolingHttpClientConnectionManager();
return PoolManagerHolder.instance;
}

public static PoolingHttpClientConnectionManager getHttpPoolInstance() {
PoolManager.getInstance();
// System.out.println("getAvailable=" + clientConnectionManager.getTotalStats().getAvailable());
// System.out.println("getLeased=" + clientConnectionManager.getTotalStats().getLeased());
// System.out.println("getMax=" + clientConnectionManager.getTotalStats().getMax());
// System.out.println("getPending="+clientConnectionManager.getTotalStats().getPending());
return PoolManager.clientConnectionManager;
}


}

public static class PoolsManager {
public static PoolingHttpClientConnectionManager clientConnectionManager = null;
private static int maxTotal = 200;
private static int defaultMaxPerRoute = 100;

private PoolsManager(){
clientConnectionManager.setMaxTotal(maxTotal);
clientConnectionManager.setDefaultMaxPerRoute(defaultMaxPerRoute);
}

private static class PoolsManagerHolder{
public static PoolsManager instance = new PoolsManager();
}

public static PoolsManager getInstance(Registry<ConnectionSocketFactory> socketFactoryRegistry) {
if(null == clientConnectionManager)
clientConnectionManager = new PoolingHttpClientConnectionManager(socketFactoryRegistry);
return PoolsManagerHolder.instance;
}

public static PoolingHttpClientConnectionManager getHttpsPoolInstance(Registry<ConnectionSocketFactory> socketFactoryRegistry) {
PoolsManager.getInstance(socketFactoryRegistry);
// System.out.println("getAvailable=" + clientConnectionManager.getTotalStats().getAvailable());
// System.out.println("getLeased=" + clientConnectionManager.getTotalStats().getLeased());
// System.out.println("getMax=" + clientConnectionManager.getTotalStats().getMax());
// System.out.println("getPending="+clientConnectionManager.getTotalStats().getPending());
return PoolsManager.clientConnectionManager;
}

}

public static String httpPost(Map<String, String> request, String httpUrl){
String result = "";
CloseableHttpClient httpClient = getHttpClient();
try {
if(MapUtils.isEmpty(request))
throw new Exception("Request parameters must not be empty");
HttpPost httpPost = new HttpPost(httpUrl);
List<NameValuePair> nvps = new ArrayList<NameValuePair>();
for(Iterator<Map.Entry<String, String>> iterator=request.entrySet().iterator(); iterator.hasNext();){
Map.Entry<String, String> entry = iterator.next();
nvps.add(new BasicNameValuePair(entry.getKey(), entry.getValue()));
}
httpPost.setEntity(new UrlEncodedFormEntity(nvps, Consts.UTF_8));
System.out.println("Executing request: " + httpPost.getRequestLine());
CloseableHttpResponse response = httpClient.execute(httpPost);
result = EntityUtils.toString(response.getEntity());
System.out.println("Executing response: "+ result);
} catch (Exception e) {
throw new RuntimeException(e);
} finally {
try {
httpClient.close();
} catch (IOException e) {
e.printStackTrace();
}
}
return result;
}

public static String httpPost(String json, String httpUrl, Map<String, String> headers){
String result = "";
CloseableHttpClient httpClient = getHttpClient();
try {
if(StringUtils.isBlank(json))
throw new Exception("Request JSON must not be empty");
HttpPost httpPost = new HttpPost(httpUrl);
for(Iterator<Map.Entry<String, String>> iterator=headers.entrySet().iterator();iterator.hasNext();){
Map.Entry<String, String> entry = iterator.next();
Header header = new BasicHeader(entry.getKey(), entry.getValue());
httpPost.setHeader(header);
}
httpPost.setEntity(new StringEntity(json, Charset.forName("UTF-8")));
System.out.println("Executing request: " + httpPost.getRequestLine());
CloseableHttpResponse response = httpClient.execute(httpPost);
result = EntityUtils.toString(response.getEntity());
System.out.println("Executing response: "+ result);
} catch (Exception e) {
throw new RuntimeException(e);
} finally {
try {
httpClient.close();
} catch (IOException e) {
e.printStackTrace();
}
}
return result;
}

public static String httpGet(String httpUrl, Map<String, String> headers) {
String result = "";
CloseableHttpClient httpClient = getHttpClient();
try {
HttpGet httpGet = new HttpGet(httpUrl);
System.out.println("Executing request: " + httpGet.getRequestLine());
for(Iterator<Map.Entry<String, String>> iterator=headers.entrySet().iterator();iterator.hasNext();){
Map.Entry<String, String> entry = iterator.next();
Header header = new BasicHeader(entry.getKey(), entry.getValue());
httpGet.setHeader(header);
}
CloseableHttpResponse response = httpClient.execute(httpGet);
result = EntityUtils.toString(response.getEntity());
System.out.println("Executing response: "+ result);
} catch (Exception e) {
throw new RuntimeException(e);
} finally {
try {
httpClient.close();
} catch (IOException e) {
e.printStackTrace();
}
}
return result;
}


public static String httpGet(String httpUrl) {
String result = "";
CloseableHttpClient httpClient = getHttpClient();
try {
HttpGet httpGet = new HttpGet(httpUrl);
System.out.println("Executing request: " + httpGet.getRequestLine());
CloseableHttpResponse response = httpClient.execute(httpGet);
result = EntityUtils.toString(response.getEntity());
System.out.println("Executing response: "+ result);
} catch (Exception e) {
throw new RuntimeException(e);
} finally {
try {
httpClient.close();
} catch (IOException e) {
e.printStackTrace();
}
}
return result;
}

}





Maven dependencies:
  <!--httpclient-->
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.5.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpcore</artifactId>
            <version>4.4.4</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpmime</artifactId>
            <version>4.5.2</version>
        </dependency>

        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-collections4</artifactId>
            <version>4.1</version>
        </dependency>
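A minimal usage sketch of the class above; the URL and parameter values are placeholders, and the demo is assumed to sit in the same package as HttpClientUtil (com.abin.lee.util):

import java.util.HashMap;
import java.util.Map;

public class HttpClientUtilDemo {
    public static void main(String[] args) {
        // Form-encoded POST through the pooled HTTP client
        Map<String, String> params = new HashMap<String, String>();
        params.put("username", "abin");
        String postResult = HttpClientUtil.httpPost(params, "http://localhost:8080/api/echo");
        System.out.println(postResult);

        // Plain GET through the same pool
        String getResult = HttpClientUtil.httpGet("http://localhost:8080/api/ping");
        System.out.println(getResult);
    }
}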
posted @ 2016-04-27 19:04 abin

November 3, 2015 #

1. Twemproxy overview

      When we have a large number of Redis or Memcached instances, we can usually only achieve clustered storage through client-side data-distribution algorithms such as consistent hashing. Although Redis Cluster had been announced around Redis 2.6, it was not yet mature enough for production. Before an official Redis Cluster solution arrived, we implemented clustered storage by way of a proxy.

       Twitter runs one of the largest Redis clusters in the world, serving users' timeline data, and its Open Source group released Twemproxy.

      Twemproxy, also called nutcracker, is a Redis and Memcached proxy server open-sourced by Twitter. Redis is a highly efficient cache server with great practical value, but once you run many instances you want some way to manage them centrally, avoiding the sprawl of every application and client managing its own connections, and gaining a degree of control.

      Twemproxy is a fast single-threaded proxy supporting the Memcached ASCII protocol and the newer Redis protocol.

      It is written entirely in C and licensed under the Apache 2.0 License. The project works on Linux but cannot be compiled on OSX, because it depends on the epoll API.

      By introducing a proxy layer, Twemproxy manages and distributes requests across the many Redis or Memcached instances behind it, so the application only talks to Twemproxy and does not care how many actual Redis or Memcached stores sit behind it.

2. Twemproxy features:

    • Automatic removal of failed nodes

      • the time before reconnecting to a node is configurable
      • the number of failed attempts before removing a node is configurable
      • this mode is suitable for cache storage
    • HashTag support

      • with a HashTag you can hash two KEYs to the same instance yourself
    • Fewer direct connections to redis

      • keeps long-lived connections to redis
      • the number of connections between the proxy and each backend redis is configurable
    • Automatic sharding across multiple backend redis instances

      • multiple hash algorithms: consistent hashing with different strategies and hash functions
      • backend instance weights are configurable
    • No single point of failure

      • multiple proxy layers can be deployed in parallel; the client picks an available one
    • Redis pipelining support

           streams and batches requests, cutting round-trip overhead

    • Status monitoring

      • a monitoring IP and port can be set; querying them returns the status as a JSON string
      • the monitoring refresh interval is configurable
    • High throughput

      • connection reuse, memory reuse
      • combines multiple requests into redis pipelining and issues them to redis together

     Alternatively, one can modify Redis's source code, extracting the front half of Redis to serve as an intermediate proxy layer. Either way, concurrency ultimately rests on Linux's epoll event mechanism; nutcracker itself also uses epoll, and it performs very well in benchmarks.

3. Twemproxy problems and shortcomings


Due to its design, Twemproxy has some shortcomings, such as:
  • no support for operations over multiple values, e.g. intersection/union/difference of sets (MGET and DEL are the exceptions)
  • no support for Redis transactions
  • error reporting is still incomplete
  • no support for SELECT

4. Installation and configuration

Detailed installation steps can be found on GitHub: https://github.com/twitter/twemproxy
The main commands to install Twemproxy are:
apt-get install automake  
apt-get install libtool  
git clone git://github.com/twitter/twemproxy.git  
cd twemproxy  
autoreconf -fvi  
./configure --enable-debug=log  
make  
src/nutcracker -h

With the commands above, installation is done. Next comes the actual configuration; below is a typical one:
    redis1:  
      listen: 127.0.0.1:6379 # the port Twemproxy listens on
      redis: true # whether this is a proxy for Redis
      hash: fnv1a_64 # the hash function to use
      distribution: ketama # the key distribution algorithm
      auto_eject_hosts: true # whether to temporarily eject nodes that stop responding
      timeout: 400 # timeout in milliseconds
      server_retry_timeout: 2000 # retry interval in milliseconds
      server_failure_limit: 1 # how many failures before a node is ejected
      servers: # all the Redis nodes (IP:port:weight)
       - 127.0.0.1:6380:1  
       - 127.0.0.1:6381:1  
       - 127.0.0.1:6382:1  
      
    redis2:  
      listen: 0.0.0.0:10000  
      redis: true  
      hash: fnv1a_64  
      distribution: ketama  
      auto_eject_hosts: false  
      timeout: 400  
      servers:  
       - 127.0.0.1:6379:1  
       - 127.0.0.1:6380:1  
       - 127.0.0.1:6381:1  
       - 127.0.0.1:6382:1 

You can run multiple Twemproxy instances at the same time, all accepting reads and writes, so your application completely avoids a single point of failure.
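A quick smoke test of the redis1 pool above; this is a sketch and assumes the config was saved as conf/nutcracker.yml inside the twemproxy source tree:

src/nutcracker -c conf/nutcracker.yml -d   # -d runs the proxy as a daemon
redis-cli -p 6379 set foo bar              # write through the proxy listener
redis-cli -p 6379 get foo                  # read it back from whichever backend foo hashed to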


http://blog.csdn.net/hguisu/article/details/9174459/
posted @ 2015-11-03 19:30 abin

November 1, 2015 #

Linux is a powerhouse when it comes to networking, and provides a full-featured and high performance network stack. When combined with web front-ends such as HAProxy, lighttpd, Nginx, Apache or your favorite application server, Linux is a killer platform for hosting web applications. Keeping these applications up and operational can sometimes be a challenge, especially in this age of horizontally scaled infrastructure and commodity hardware. But don't fret, since there are a number of technologies that can assist with making your applications and network infrastructure fault tolerant.

One of these technologies, keepalived, provides interface failover and the ability to perform application-layer health checks. When these capabilities are combined with the Linux Virtual Server (LVS) project, a fault in an application will be detected by keepalived, and the virtual interfaces that are accessed by clients can be migrated to another available node. This article will provide an introduction to keepalived, and will show how to configure interface failover between two or more nodes. Additionally, the article will show how to debug problems with keepalived and VRRP.

What Is Keepalived?


The keepalived project provides a keepalive facility for Linux servers. This keepalive facility consists of a VRRP implementation to manage virtual routers (aka virtual interfaces), and a health check facility to determine if a service (web server, samba server, etc.) is up and operational. If a service fails a configurable number of health checks, keepalived will fail a virtual router over to a secondary node. While useful in its own right, keepalived really shines when combined with the Linux Virtual Server project. This article will focus on keepalived, and a future article will show how to integrate the two to create a fault tolerant load-balancer.

Installing KeepAlived From Source Code


Before we dive into configuring keepalived, we need to install it. Keepalived is distributed as source code, and is available in several package repositories. To install from source code, you can execute wget or curl to retrieve the source, and then run "configure", "make" and "make install" to compile and install the software:

$ wget http://www.keepalived.org/software/keepalived-1.1.17.tar.gz
$ tar xfvz keepalived-1.1.17.tar.gz
$ cd keepalived-1.1.17
$ ./configure --prefix=/usr/local
$ make && make install

In the example above, the keepalived daemon will be compiled and installed as /usr/local/sbin/keepalived.

Configuring KeepAlived


The keepalived daemon is configured through a text configuration file, typically named keepalived.conf. This file contains one or more configuration stanzas, which control notification settings, the virtual interfaces to manage, and the health checks to use to test the services that rely on the virtual interfaces. Here is a sample annotated configuration that defines two virtual IP addresses to manage, and the individuals to contact when a state transition or fault occurs:

# Define global configuration directives
global_defs {
    # Send an e-mail to each of the following
    # addresses when a failure occurs
    notification_email {
        matty@prefetch.net
        operations@prefetch.net
    }
    # The address to use in the From: header
    notification_email_from root@VRRP-director1.prefetch.net

    # The SMTP server to route mail through
    smtp_server mail.prefetch.net

    # How long to wait for the mail server to respond
    smtp_connect_timeout 30

    # A descriptive name describing the router
    router_id VRRP-director1
}

# Create a VRRP instance
VRRP_instance VRRP_ROUTER1 {

    # The initial state to transition to. This option isn't
    # really all that valuable, since an election will occur
    # and the host with the highest priority will become
    # the master. The priority is controlled with the priority
    # configuration directive.
    state MASTER

    # The interface keepalived will manage
    interface br0

    # The virtual router id number to assign the routers to
    virtual_router_id 100

    # The priority to assign to this device. This controls
    # who will become the MASTER and BACKUP for a given
    # VRRP instance.
    priority 100

    # How many seconds to wait until a gratuitous arp is sent
    garp_master_delay 2

    # How often to send out VRRP advertisements
    advert_int 1

    # Execute a notification script when a host transitions to
    # MASTER or BACKUP, or when a fault occurs. The arguments
    # passed to the script are:
    #  $1 - "GROUP"|"INSTANCE"
    #  $2 = name of group or instance
    #  $3 = target state of transition
    # Sample: VRRP-notification.sh VRRP_ROUTER1 BACKUP 100
    notify "/usr/local/bin/VRRP-notification.sh"

    # Send an SMTP alert during a state transition
    smtp_alert

    # Authenticate the remote endpoints via a simple
    # username/password combination
    authentication {
        auth_type PASS
        auth_pass 192837465
    }

    # The virtual IP addresses to float between nodes. The
    # label statement can be used to bring an interface
    # online to represent the virtual IP.
    virtual_ipaddress {
        192.168.1.100 label br0:100
        192.168.1.101 label br0:101
    }
}

The configuration file listed above is self-explanatory, so I won't go over each directive in detail. I will point out a couple of items:

  • Each host is referred to as a director in the documentation, and each director can be responsible for one or more VRRP instances
  • Each director will need its own copy of the configuration file, and the router_id, priority, etc. should be adjusted to reflect the nodes name and priority relative to other nodes
  • To force a specific node to master a virtual address, make sure the director's priority is higher than the other virtual routers
  • If you have multiple VRRP instances that need to failover together, you will need to add each instance to a VRRP_sync_group (see the sketch after this list)
  • The notification script can be used to generate custom syslog messages, or to invoke some custom logic (e.g., restart an app) when a state transition or fault occurs
  • The keepalived package comes with numerous configuration examples, which show how to configure many aspects of the server
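A minimal sync-group sketch (VRRP_ROUTER1 is the instance from the sample configuration; VRRP_ROUTER2 is an assumed second instance):

vrrp_sync_group VG1 {
    group {
        VRRP_ROUTER1
        VRRP_ROUTER2
    }
}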

Starting Keepalived


Keepalived can be executed from an RC script, or started from the command line. The following example will start keepalived using the configuration file /usr/local/etc/keepalived.conf:

$ keepalived -f /usr/local/etc/keepalived.conf 

If you need to debug keepalived issues, you can run the daemon with the "--dont-fork", "--log-console" and "--log-detail" options:

$ keepalived -f /usr/local/etc/keepalived.conf --dont-fork --log-console --log-detail 

These options will stop keepalived from fork'ing, and will provide additional logging data. Using these options is especially useful when you are testing out new configuration directives, or debugging an issue with an existing configuration file.

Locating The Router That is Managing A Virtual IP


To see which director is currently the master for a given virtual interface, you can check the output from the ip utility:

VRRP-director1$ ip addr list br0
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:24:8c:4e:07:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.6/24 brd 192.168.1.255 scope global br0
    inet 192.168.1.100/32 scope global br0:100
    inet 192.168.1.101/32 scope global br0:101
    inet6 fe80::224:8cff:fe4e:7f6/64 scope link
       valid_lft forever preferred_lft forever

VRRP-director2$ ip addr list br0
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:24:8c:4e:07:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.7/24 brd 192.168.1.255 scope global br0
    inet6 fe80::224:8cff:fe4e:7f6/64 scope link
       valid_lft forever preferred_lft forever

In the output above, we can see that the virtual interfaces 192.168.1.100 and 192.168.1.101 are currently active on VRRP-director1.

Troubleshooting Keepalived And VRRP


The keepalived daemon will log to syslog by default. Log entries will range from entries that show when the keepalive daemon started, to entries that show state transitions. Here are a few sample entries that show keepalived starting up, and the node transitioning a VRRP instance to the MASTER state:

Jul  3 16:29:56 disarm Keepalived: Starting Keepalived v1.1.17 (07/03,2009)
Jul  3 16:29:56 disarm Keepalived: Starting VRRP child process, pid=1889
Jul  3 16:29:56 disarm Keepalived_VRRP: Using MII-BMSR NIC polling thread...
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering Kernel netlink reflector
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering Kernel netlink command channel
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering gratutious ARP shared channel
Jul  3 16:29:56 disarm Keepalived_VRRP: Opening file '/usr/local/etc/keepalived.conf'.
Jul  3 16:29:56 disarm Keepalived_VRRP: Configuration is using : 62990 Bytes
Jul  3 16:29:57 disarm Keepalived_VRRP: VRRP_Instance(VRRP_ROUTER1) Transition to MASTER STATE
Jul  3 16:29:58 disarm Keepalived_VRRP: VRRP_Instance(VRRP_ROUTER1) Entering MASTER STATE
Jul  3 16:29:58 disarm Keepalived_VRRP: Netlink: skipping nl_cmd msg...

If you are unable to determine the source of a problem with the system logs, you can use tcpdump to display the VRRP advertisements that are sent on the local network. Advertisements are sent to a reserved VRRP multicast address (224.0.0.18), so the following filter can be used to display all VRRP traffic that is visible on the interface passed to the "-i" option:

$ tcpdump -vvv -n -i br0 host 224.0.0.18
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 96 bytes

10:18:23.621512 IP (tos 0x0, ttl 255, id 102, offset 0, flags [none], proto VRRP (112), length 40) \
                192.168.1.6 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, \
                intvl 1s, length 20, addrs: 192.168.1.100 auth "19283746"
10:18:25.621977 IP (tos 0x0, ttl 255, id 103, offset 0, flags [none], proto VRRP (112), length 40) \
                192.168.1.6 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, \
                intvl 1s, length 20, addrs: 192.168.1.100 auth "19283746"
                        .........

The output contains several pieces of data that can be useful for debugging problems:

authtype - the type of authentication in use (authentication configuration directive)
vrid - the virtual router id (virtual_router_id configuration directive)
prio - the priority of the device (priority configuration directive)
intvl - how often to send out advertisements (advert_int configuration directive)
auth - the authentication token sent (auth_pass configuration directive)

Conclusion


In this article I described how to set up a host to use the keepalived daemon, and provided a sample configuration file that can be used to failover virtual interfaces between servers. Keepalived has a slew of options not covered here; I refer you to the keepalived source code and documentation for additional details.

posted @ 2015-11-01 21:06 abin

October 12, 2015 #

In a Keepalived cluster there are, strictly speaking, no permanent master and backup nodes. You can set the "state" option to "MASTER" in the Keepalived configuration file, but that does not mean the node will hold the Master role forever. What controls a node's role is the "priority" value in the configuration file, and not it alone: the "weight" value set in a vrrp_script block can also change a node's role. Both options take integer values, and "weight" may be negative; a node's role in the cluster is decided by the relative size of these two values.

In a one-master, multi-backup Keepalived cluster, the node with the largest "priority" becomes the Master and the others are Backups. When the Master fails, the Backup nodes hold an "election": from each node's "priority" and "weight" values a new Master is computed and takes over the cluster's services.


In a vrrp_script block, if no "weight" is set, the cluster's priority ordering is decided solely by the "priority" values in the Keepalived configuration files. When you need finer control over cluster priorities, set a "weight" value in the vrrp_script block. The following example makes this concrete.


Suppose a Keepalived cluster of two nodes A and B. In node A's keepalived.conf, "priority" is set to 100; in node B's, it is 80. Both nodes monitor the mysql service through a "vrrp_script" block, and both set "weight" to 10. Then the following happens.


After Keepalived starts on both nodes, node A normally becomes the cluster's Master and B automatically becomes a Backup. Now stop the mysql service on A and watch the logs: there is no entry showing B taking over from A. B stays in the Backup state and A remains Master. In this situation the whole HA cluster is pointless.


Let's analyze why, which comes down to Keepalived's master/backup election strategy. Below is a summary of the role-election algorithm when the vrrp_script block is used. Since "weight" can be positive or negative, there are two cases to consider.


1. When "weight" is positive

If the script given in vrrp_script succeeds, the Master's effective weight is the sum of its "weight" and "priority" values; if the script fails, the Master's effective weight stays at its "priority" value. The switchover rules are therefore:

When the Master's "vrrp_script" check fails, a failover occurs if the Master's "priority" is less than the Backup's "weight" plus "priority".

When the Master's "vrrp_script" check succeeds, the Master stays Master, with no failover, if the Master's "weight" plus "priority" exceeds the Backup's "weight" plus "priority".


2. When "weight" is negative

If the script given in "vrrp_script" succeeds, the Master's effective weight remains its "priority" value; when the script fails, it becomes "priority" minus the absolute value of "weight". The switchover rules are therefore:

When the Master's "vrrp_script" check fails, a failover occurs if the Master's "priority" minus the absolute "weight" is less than the Backup's "priority".

When the Master's "vrrp_script" check succeeds, the Master stays Master, with no failover, if its "priority" exceeds the Backup's "priority".


Having covered the election strategy, return to the example. Nodes A and B both set "weight" to 10, so the first case applies. After the mysql service stops on A, A's script check fails, so A's effective weight stays at its "priority" of 100, while B's becomes "weight" plus "priority", i.e. 90 (10+80). A's weight is therefore still greater than B's, and no failover occurs.


There is a simple rule for choosing "weight": its absolute value must exceed the difference between the Master's and the Backup's "priority" values. For nodes A and B above, any "weight" greater than 20 keeps the cluster running and failing over correctly. Clearly the "weight" value must be chosen with great care; a poor choice will break role election and leave the cluster paralyzed.
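A sketch of a fixed configuration for node A following the rule above; the health-check script path and interface name are assumptions, and weight is set to 30 (greater than the 20-point priority gap):

vrrp_script chk_mysql {
    script "/usr/local/bin/check_mysql.sh"   # hypothetical MySQL health-check script
    interval 2                               # run every 2 seconds
    weight 30                                # |weight| > priority difference (100 - 80 = 20)
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0                           # assumed interface name
    virtual_router_id 51
    priority 100                             # set to 80 on node B
    track_script {
        chk_mysql
    }
}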

posted @ 2015-10-12 00:50 abin

October 11, 2015 #

If you are reading this article, then like most developers you are probably interested in GIT. If you have not yet had a chance to try it, now is the time.

GIT is not just a version control system; it is also a content management system (CMS), a work management system, and more. If you come from an SVN background, you will need to adjust your thinking to adapt to some of the concepts and features GIT offers. The main goal of this article is therefore to help you understand GIT by covering what it can do and how it differs from SVN at a deeper level.

OK, let's begin…

1. GIT is distributed, SVN is not:

This is the core difference between GIT and non-distributed version control systems such as SVN and CVS. If you can grasp this concept, you are already halfway there. One clarification: GIT is not the first or the only distributed version control system; others, such as Bitkeeper and Mercurial, also operate in distributed mode. But GIT does it better and offers more powerful features.

Like SVN, GIT can have a central repository or server. But GIT is more often used in distributed mode: after checking code out from the central repository/server, each developer clones a repository of his own onto his machine. So if you are stuck somewhere without a network connection, on a plane, in a basement, in an elevator, you can still commit files, view revision history, create branches, and so on. To some people this may not seem like much, but when you suddenly find yourself without a network it solves a big headache.

Likewise, this distributed mode of operation is a huge blessing for open-source development. You no longer have to prepare patch packages and send them out by email; you just create a branch and send the project team a pull request. This keeps your code up to date, and nothing is lost in transit. GitHub.com is an excellent example of this.

There are rumors that future versions of subversion will also be based on a distributed model. But at least for now there is no sign of it.

2. GIT stores content as metadata, SVN as files:

All version control systems hide files' meta-information in folders like .svn or .cvs. If you compare the size of a .git directory with .svn, you will find a large difference, because the .git directory is a full clone of the repository on your machine, holding everything the central repository has: tags, branches, revision history, and so on.

3. GIT branches are not like SVN branches:

Branches are nothing special in SVN, just another directory in the repository. To find out whether a branch has been merged, you have to run a command such as svn propget svn:mergeinfo by hand (thanks to Ben for pointing out this feature). So it is common for branches to be forgotten.

However, working with GIT branches is remarkably simple and fun. You can switch quickly between several branches in the same working directory. You can easily spot unmerged branches, and you can merge them simply and quickly.
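For instance, the whole branch round-trip happens inside one working directory (the branch name is illustrative):

git checkout -b feature-x     # create and switch to a new branch
git commit -am "work on feature x"
git checkout master           # switch back in the same directory
git branch --no-merged        # list branches not yet merged
git merge feature-x           # merge the branch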

4. GIT has no global revision number, SVN does:

So far this is the biggest feature GIT lacks compared with SVN. As you know, an SVN revision number is in effect a snapshot of the source at a point in time; I consider it the single biggest advance from CVS to SVN. Since GIT and SVN differ conceptually, I do not know what corresponds to it in GIT. If you have any clues, please share them in the comments.

Update: some readers pointed out that GIT's SHA-1 can uniquely identify a code snapshot. It is not a full substitute for SVN's easy-to-read numeric revisions, but it serves the same purpose.

5. GIT's content integrity is better than SVN's:

GIT stores content using the SHA-1 hash algorithm. This ensures the integrity of the code contents and reduces damage to the repository in the face of disk failures and network problems. There is a good discussion of GIT content integrity at http://stackoverflow.com/questions/964331/git-file-integrity
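You can see the content addressing from the command line: git hash-object prints the SHA-1 under which GIT would store a blob, and git fsck re-verifies every stored object against its hash:

$ echo 'hello' | git hash-object --stdin
ce013625030ba8dba906f756967f9e9ca394464a
$ git fsck --full   # re-hashes objects and reports any corruption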

Are these the only five differences between GIT and SVN? Of course not. These are just the most basic and most interesting ones that came to mind. If you find others more interesting, please share them.




posted @ 2015-10-11 22:41 abin

In MySQL, the MyISAM engine has no notion of transactions; it is mostly used for query-heavy, transaction-light workloads such as data warehousing, and it is comparatively fast.
In MySQL, the InnoDB engine supports transactions and is mostly used for real-time small-to-medium transaction processing, such as web site backends.

Oracle has no concept of storage engines; instead it distinguishes OLTP and OLAP modes, which differ little beyond parameter settings.
Oracle supports transactions in either mode; Oracle is a database system that never allows dirty reads.



Today's data processing falls roughly into two categories: online transaction processing (OLTP) and online analytical processing (OLAP). OLTP is the main application of traditional relational databases: basic, day-to-day transaction processing, such as bank transactions. OLAP is the main application of data warehouse systems: it supports complex analytical operations with an emphasis on decision support, and presents query results in an intuitive, easy-to-understand form.
OLTP:
Also known as transaction-oriented processing. Its basic characteristic is that raw customer data can be sent immediately to the computing center for processing, with results produced in a very short time.
The biggest advantage is that input data can be processed instantly with timely answers; such systems are also called real-time systems. A key performance metric of an OLTP system is response time: the time between the user submitting data at a terminal and the computer answering the request. OLTP is carried out by the database engine.
OLTP databases are designed so that transactional applications write only the data they need, in order to process each individual transaction as quickly as possible.
OLAP:
With the development and application of database technology, data volumes have grown from megabytes and gigabytes in the 1980s to terabytes and petabytes today, while user queries have grown ever more complex: no longer querying or manipulating a few records in one table, but analyzing and summarizing tens of millions of records across many tables, a requirement relational database systems alone can no longer fully satisfy. Many software vendors have built front-end products to compensate for these shortcomings of relational database management systems, attempting to unify scattered common application logic and answer complex queries from non-specialists quickly.
Online analytical processing (OLAP) systems are the most important application of data warehouses. They are designed specifically for complex analytical operations in support of decision makers and senior managers: they let analysts run large, complex queries quickly and flexibly, and present the results in an intuitive, easy-to-understand form, so that decision makers can accurately grasp the state of the enterprise, understand its customers' needs, and formulate sound plans.
posted @ 2015-10-11 22:05 abin

September 10, 2015 #

Method 1: start directly
Install:
tar zxvf redis-2.8.9.tar.gz
cd redis-2.8.9
# compile with make
make
# As root you can run `make install` to copy the binaries into /usr/local/bin, so the programs can be run by name.
make install
Start:
# the trailing `&` runs redis as a background process
./redis-server &
Verify:
# check that the background process exists
ps -ef |grep redis
# check that port 6379 is listening
netstat -lntp | grep 6379
# use the `redis-cli` client to check that the connection works
./redis-cli
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> set key "hello world"
OK
127.0.0.1:6379> get key
"hello world"

Stop:
# via the client
redis-cli shutdown
# you can also kill the process directly (note: kill -9 sends SIGKILL; plain kill sends SIGTERM, which Redis handles cleanly)
kill -9 PID


Method 2: start with a specified configuration file

Configuration file
You can point the redis server at a configuration file on startup; redis.conf lives in the Redis root directory.
# change daemonize to yes to run as a background process by default (remember forcing background mode with `&` earlier?)
daemonize no
# change the default listening port if needed
port 6379
# change where the log file is generated
logfile "/home/futeng/logs/redis.log"
# set where the persistence files are stored
dir /home/futeng/data/redisData

Specify the configuration file at startup:
redis-server ./redis.conf
# if you changed the port, pass it to the `redis-cli` client when connecting, for example:
redis-cli -p 6380
Other start/stop steps are the same as the direct method. The configuration file is a very important tool that becomes ever more useful as your usage deepens; it is recommended to use one from the start.



Method 3:
use the Redis init script to start on boot
Startup script
In production it is recommended to start the redis service via the init script. The script redis_init_script lives in Redis's /utils/ directory.

# Skimming the script, note that redis conventionally names the config file and related files after the listening port; we follow that convention below.
# port the redis server listens on
REDISPORT=6379
# location of the server binary; after `make install` it defaults to /usr/local/bin/redis-server; adjust the path if you did not run make install (same below)
EXEC=/usr/local/bin/redis-server
# location of the client binary
CLIEXEC=/usr/local/bin/redis-cli
# location of the Redis PID file
PIDFILE=/var/run/redis_${REDISPORT}.pid
# location of the configuration file; adjust as needed
CONF="/etc/redis/${REDISPORT}.conf"

Set up the environment
1. As the startup script expects, copy the edited configuration file to the target directory, named after the port. Requires root.
mkdir /etc/redis
cp redis.conf /etc/redis/6379.conf
2. Copy the startup script into /etc/init.d; in this example the script is named redisd (the trailing d conventionally marks a self-starting background service).
cp redis_init_script /etc/init.d/redisd

3. Enable start on boot
Running chkconfig redisd on directly fails with the error: service redisd does not support chkconfig
Following the referenced article, add the following two comment lines at the top of the startup script to set its run levels:
#!/bin/sh
# chkconfig:   2345 90 10
# description:  Redis is a persistent key-value database
#
Now enabling it succeeds.


# enable start on boot
chkconfig redisd on
# start the service
service redisd start
# stop the service
service redisd stop


http://www.tuicool.com/articles/aQbQ3u







posted @ 2015-09-10 21:02 abin