Data Warehouse or SOA
SOA、DataStage、Unix、Java EE、Oracle、Data Migration、Data Integration

July 10, 2007

Sql_Trace File Identifier and Sample Block (sample table scan)

Sql_Trace File Identifier

After setting sql_trace to true, the trace file is dumped into the udump directory. But that directory may already hold many trace files, and other sessions may also have sql_trace set to true and be producing trace files of their own. How do you quickly find the trace file your session produced? Set a trace file identifier with the following statement:

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'my_trace_id';


Trace files produced by your session will then carry the identifier you set as a suffix.

 

SQL> alter session set tracefile_identifier='test_identifier';

Session altered.

SQL> alter session set sql_trace=true;

Session altered.

SQL> select * from dual;

D
-
X

SQL> alter session set sql_trace=false;

Session altered.

The trace file generated under udump is then named: test_ora_4168_test_identifier.trc
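From the shell, one way to locate it (the path below assumes the standard pre-11g udump layout; check the user_dump_dest parameter if yours differs):

# udump usually lives under $ORACLE_BASE/admin/$ORACLE_SID/udump;
# 'show parameter user_dump_dest' in SQL*Plus gives the real path.
ls -lt $ORACLE_BASE/admin/$ORACLE_SID/udump/*test_identifier*.trc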

Sample Block

A sample table scan reads only part of a table's data. If all we want is a subset of a table, rownum is also convenient. But consider this case:
table A joins table B, and I only want part of A's rows to participate in the join. SAMPLE is much handier here; with rownum you need a subquery.

--Using SAMPLE:
select A.id, A.name from A sample block (20), B
where A.id = B.id;
--Using ROWNUM:
select A.id, A.name from (select a.id, a.name from a where rownum < 100) A, B
where A.id = B.id;


A sample table scan comes in two forms: sampling a percentage of the table's rows, or sampling a percentage of the table's blocks. Note that both are percentages. The syntax:

SELECT * FROM employees SAMPLE (1);--returns roughly 1% of the rows
SELECT * FROM employees SAMPLE BLOCK (1);--samples roughly 1% of the blocks


SAMPLE can also be used on views.


posted @ 2009-08-25 16:01 羽翼渐丰 Views (398) | Comments (0)
 
Disable foreign key constraints before an ETL load
In his ETL Toolkit book, Kimball keeps stressing that for tables with foreign key constraints, you can disable the constraints before the load and re-enable them afterwards to speed loading up. (Another reason is that data entering the warehouse is already clean, so you might even consider doing without foreign key constraints entirely: rows that would violate them should have been eliminated in the Transformation step. That, of course, is a design question.) PS: two other ways to speed up loads are using SQL*Loader and separating inserts from updates. At the time I didn't see how to actually implement this; today, reading Oracle Concepts, the following passage finally made it clearer:
Flexibility for Data Loads and Identification of Integrity Violations
You can disable integrity constraints temporarily so that large amounts of data can be loaded without the overhead of constraint checking. When the data load is complete, you can easily enable the integrity constraints, and you can automatically report any new rows that violate integrity constraints to a separate exceptions table.
Oracle handles this rather completely: rows that violate integrity constraints can even be recorded automatically in a separate exceptions table.
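A minimal ksh sketch of the disable/load/enable cycle, assuming a child table sales with foreign key constraint fk_sales_cust and an EXCEPTIONS table created beforehand by running $ORACLE_HOME/rdbms/admin/utlexcpt.sql (the table, constraint, and credentials here are hypothetical):

#!/bin/ksh
# 1. Drop the constraint-checking overhead before the bulk load.
sqlplus -s dw_user/dw_pass <<'EOF'
ALTER TABLE sales DISABLE CONSTRAINT fk_sales_cust;
EOF

# 2. Load, e.g. with SQL*Loader (control file not shown).
sqlldr dw_user/dw_pass control=sales.ctl direct=true

# 3. Re-enable; the ROWIDs of violating rows land in EXCEPTIONS.
#    If the ALTER fails, the SELECT shows exactly which rows broke the constraint.
sqlplus -s dw_user/dw_pass <<'EOF'
ALTER TABLE sales ENABLE VALIDATE CONSTRAINT fk_sales_cust
  EXCEPTIONS INTO exceptions;
SELECT * FROM sales WHERE rowid IN (SELECT row_id FROM exceptions);
EOF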
posted @ 2008-01-15 16:12 羽翼渐丰 阅读(1467) | 评论 (4) | 编辑 收藏
 
Three ways to delete duplicate rows

Found this today while googling the analytic function row_number(); it's good stuff:

Table demo was made by repeatedly copying dba_objects: about 880,000 rows, of which 27,323 are distinct; no indexes.
Method 1: delete from demo a where a.rowid <> (select max(rowid) from demo b where
b.object_id=a.object_id);
Elapsed: several hours or more
Method 2: delete from demo where rowid in
(select rid from
(select rowid rid,row_number() over(partition by object_id order by rowid) rn
from demo)
where rn <> 1 );
Elapsed: about 30 seconds
Method 3: create table demo2 as
select object_id,owner... from
(select demo.*,row_number() over(partition by object_id order by rowid) rn from demo)
where rn = 1;
truncate table demo; insert into demo select * from demo2; drop table demo2;
Total elapsed: about 10 seconds; suits large data volumes and generates far less undo


I learned the analytic function row_number() and picked up something about object_id and rowid too. There is so much to learn in Oracle; will it ever end? The methods above are not hard to follow, but I haven't fully understood them yet; I should try them out when I get the chance.

posted @ 2008-01-08 10:04 羽翼渐丰 Views (426) | Comments (0)
 
Stripping a sequential file's header and footer during load
A method I saw on a forum today; noting it down first:
Method 1:
The sequential file stage has a property to ignore the first line. For the footer, if it uses a format different from the body, you can use a transformer stage with the @INROWNUM system variable to drop it.
Method 2:
In the sequential file stage's filter option, use sed -e '$d' -e '1d' to strip the header and footer (demonstrated below).
Method 3:
Write a before-job subroutine that preprocesses the file however you need; though I suspect that, too, comes down to calling sed -e '$d' -e '1d' through DSExecute. BASIC also has functions specifically for processing sequential files.
posted @ 2007-08-13 19:37 羽翼渐丰 Views (660) | Comments (0)
 
Using the orchadmin command in DataStage (reposted)

Orchadmin is a command line utility provided by DataStage for working with data sets.
The general calling format is: $orchadmin <command> [options] [descriptor file]

1. Before using orchadmin, you should make sure that either the working directory or $APT_ORCHHOME/etc contains the file “config.apt”, OR that the environment variable $APT_CONFIG_FILE is defined for your session.
Orchadmin commands

The various commands available with orchadmin are
1. CHECK: $orchadmin check

Validates the configuration file contents: accessibility of all nodes defined in the configuration file, scratch disk definitions, and so on. Throws an error when the config file is not found or not defined properly.

2. COPY: $orchadmin copy
Makes a complete copy of the source data set under a new destination descriptor file name. Please note that:

a. You cannot use the UNIX cp command, as it just copies the descriptor file to a new name; the data is not copied.

b. The new data set will be laid out according to the config file currently in use, not the old config file that was in use with the source.

3. DELETE: $orchadmin <delete | rm> [-f -x] descriptorfiles…
The unix rm utility cannot be used to delete data sets; the orchadmin delete or rm command should be used to delete one or more persistent data sets.
-f makes a force delete: if some nodes are not accessible, -f deletes the data set partitions on the accessible nodes and leaves the partitions on inaccessible nodes as orphans.
-x forces the current config file to be used while deleting, rather than the one stored in the data set.

4. DESCRIBE: $orchadmin describe [options] descriptorfile.ds
This is the single most important command.
Without any option it lists the number of partitions, number of segments, valid segments, and the preserve-partitioning flag of the persistent data set.
-c: Print the configuration file written into the data set, if any.
-p: List the partition-level information.
-f: List the file-level information in each partition.
-e: List the segment-level information.
-s: List the meta-data schema of the data set.
-v: List all segments, valid or otherwise.
-l: Long listing, equivalent to -f -p -s -v -e.

5. DUMP: $orchadmin dump [options] descriptorfile.ds
The dump command is used to dump (extract) the records from a data set.
Without any options it lists all the records, from the first record of the first partition through the last record of the last partition.
-delim '<string>': Use the given string as the field delimiter instead of a space.
-field <name>: List only the given field instead of all fields.
-name: List every value preceded by its field name and a colon.
-n numrecs: List only the given number of records per partition.
-p period(N): List every Nth record from each partition, starting from the first record.
-skip N: Skip the first N records of each partition.
-x: Use the current system configuration file rather than the one stored in the data set.

6. TRUNCATE: $orchadmin truncate [options] descriptorfile.ds
Without options, deletes all the data (i.e. the segments) from the data set.
-f: Force truncate: truncate the accessible segments and leave the inaccessible ones.
-x: Use the current system config file rather than the one stored in the data set.
-n N: Leave the first N segments in each partition and truncate the remaining.

7. HELP: $orchadmin -help OR $orchadmin <command> -help
Help manual about the usage of orchadmin or of individual orchadmin commands.
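A few example invocations under the setup above (the data set path is made up):

# Validate the config file pointed to by $APT_CONFIG_FILE
orchadmin check

# Partition- and schema-level details of a persistent data set
orchadmin describe -p -s /data/ds/customer.ds

# First 10 records per partition, pipe-delimited
orchadmin dump -n 10 -delim '|' /data/ds/customer.ds

# Remove the data set properly (unix rm would leave the segment files behind)
orchadmin rm /data/ds/customer.ds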

posted @ 2007-08-08 15:37 羽翼渐丰 Views (2261) | Comments (2)
 
trim can drop records
When a record comes in with an empty value in some field and the transformer trims that field, the trim fails on the empty value, DataStage treats it as an error, and the record gets dropped. That does not match the business logic: an empty non-key field is no reason to drop the whole record. A guard expression solves it: if LK_1.EMAIL <> '' then trim(LK_1.EMAIL) else LK_1.EMAIL
posted @ 2007-08-02 21:01 羽翼渐丰 Views (539) | Comments (0)
 
Places in DataStage to run routines and command lines for control
1 before/after job and before/after stage subroutines can run shell commands, command lines, and some DataStage API routines such as DSJobReport.
2 A job sequence has a stage in which you can write logic that controls job scheduling.
3 Every job's properties include a job control tab.
4 The command line can run and import jobs, via dsjob; the DS Manager documentation also describes a dedicated command line for importing jobs.
5 You can write shell scripts that wrap the dsjob command and control jobs, passing in parameters and so on (see the sketch below).
These concepts are still fuzzy to me; recording them here for now.
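A minimal sketch of item 5, wrapping dsjob in ksh (server, credentials, project, and job names are placeholders; a full production version appears in the "A shell script that calls dsjob" post below):

#!/bin/ksh
# Run a job with one parameter, wait for completion, then query its status.
DSBIN=`cat /.dshome`/bin

${DSBIN}/dsjob -server dshost -user dsadm -password secret \
    -run -wait -param FileSetDate=2007-08-02 dwproject LoadCustomers

${DSBIN}/dsjob -server dshost -user dsadm -password secret \
    -jobinfo dwproject LoadCustomers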
posted @ 2007-08-02 19:29 羽翼渐丰 Views (1799) | Comments (0)
 
After trimming a field that is all null, zero rows come out
Hit another new problem today: many records go into the transformer, zero records come out. A peek showed that one field of the incoming data was null on every row, and the transformer trimmed that field; removing the trim fixed it.
posted @ 2007-08-02 15:21 羽翼渐丰 Views (676) | Comments (0)
 
Job keeps running and no data reaches the database
Ran into a problem today: the job just kept running. The counts on the links showed every step was correct, including the final link into the database, yet no data appeared in the database. In the Director log, after all data had been loaded out of the two source files, the log simply went dead.
This job used to work; the trouble started yesterday after reloading the metadata of one of the source files. So at first I assumed the freshly loaded source metadata was wrong, hunted for the cause there, and suspected that some mapping had shifted position after the metadata change. That ate a lot of time. And since it had worked before, I then suspected the server.

Both were errors of habitual thinking, compounded by panic, and a lot of time was wasted. It is so often like this: once something breaks, we stop thinking calmly.

The problem was actually simple:
Reasoning it through normally, if the data cannot be written to the database, it must violate some constraint the database puts on it, such as a primary key or NOT NULL. In this case, the data picked up many spaces as it flowed from stage to stage, so the rows to be inserted ended up far longer than the column lengths defined in the database. The constant rejects dragged the speed down, and the job just sat there running. Setting APT_STRING_PADDER to 0x0 (pad with null instead of space) fixed it.
PS: attaching a reject file when inserting into the database helps troubleshooting: you can see what data gets rejected and work out why it was rejected.
We can also conclude that if many rows are rejected at the final insert and no reject file is attached to receive them, the job will sit there essentially stalled.
posted @ 2007-08-01 17:33 羽翼渐丰 Views (823) | Comments (0)
 
Three ways to reset a DataStage job

When a job dies halfway through and needs to be reset, DS Director offers three main options:
1 Job -> Reset
2 Job -> Clear Status File
3 Job -> Cleanup Resources
I don't know the details yet; to be studied when there is time.

posted @ 2007-08-01 10:17 羽翼渐丰 Views (1259) | Comments (0)
 
DataStage shortcut keys
DataStage Designer shortcuts
1 Close a job: <Ctrl+F4>
2 <Ctrl+F6>: switch between jobs when two or more are open.
DataStage Director shortcuts
posted @ 2007-07-24 17:00 羽翼渐丰 Views (770) | Comments (0)
 
How to install TortoiseSVN plugin for Eclipse
1 First you should already have Tortoise installed.
2 Download the plugin here:http://www.tabaquismo.freehosting.net/ignacio/eclipse/tortoise-svn/subversion.html
3 Unzip the plugin to the plugins directory in your Eclipse as you usually do.
4 Now open the Preference page of your Eclipse and you can see Tortoise if everything goes well.
5 Click Tortoise and you have three fields to configure; the following is what I configured:
Path:C:\Program Files\TortoiseSVN\bin\TortoiseProc.exe
SVN Server:128.64.181.66
Shared Root Folder: /CARTCCP/CARTCCP/proj_/, your project's parent dir.

Now you can File--Import--Existing Eclipse projects into Workspace, choose your project and import it.

Till now, you can right-click on any file in your project and see the Tortoise shortcut menu, where you can do update, commit and so on.
posted @ 2007-07-18 15:15 羽翼渐丰 Views (1528) | Comments (0)
 
A shell script that calls dsjob
#!/bin/ksh
#######
################################################################################
#######
####### FILE: MasterControl.ksh
#######
####### DESCRIPTION: Starts a DataStage MasterControl type job passing
#######              all runtime parameter values
#######
#######
####### Date       Version   Developer      Description
####### ---------- --------- -------------- ------------------------------------
####### 2002-05-15 1.0       Ken Bland      Initial release
####### 2002-06-27 2.2       Ken Bland      FileSetDate/ProcessDate modifications
####### 2002-07-10 2.2       Steve Boyce    Added ProcessDate as 14th parameter
####### 2002-08-16 2.3       Steve Boyce    Now calls stored procedure
#######                                     GET_NEW_BATCH_NBR in datastage schema
#######                                     instead of deriving it and inserting
#######                                     here.
#######                                     Uses comSQLPlus.ksh and comPLSQL.ksh
#######                                     instead of SQLPlusStub.ksh.ksh
#######

PROG=`basename ${0}`
EXIT_STATUS=0

NOW=`date`
echo "${NOW} ${PROG} Initialization..."
echo

#######
####### CONFIGURATION ##########################################################
#######

if [ ${#} -ne 14 ]; then
   echo "${NOW} ${PROG} : Invalid parameter list."
   echo "${NOW} ${PROG} : The script needs 14 parameters:"
   echo "${NOW} ${PROG} :    JobName"
   echo "${NOW} ${PROG} :    ParameterFile"
   echo "${NOW} ${PROG} :    FileSetDate (YYYY-MM-DD)"
   echo "${NOW} ${PROG} :    BatchNumber"
   echo "${NOW} ${PROG} :    JobHierarchyFile"
   echo "${NOW} ${PROG} :    SourceSystemList"
   echo "${NOW} ${PROG} :    SubjectAreaList"
   echo "${NOW} ${PROG} :    ClearWorkArea"
   echo "${NOW} ${PROG} :    StartingMilestone"
   echo "${NOW} ${PROG} :    EndingMilestone"
   echo "${NOW} ${PROG} :    DebugMode"
   echo "${NOW} ${PROG} :    JobLinkStatisticChecksFile"
   echo "${NOW} ${PROG} :    ResurrectLogFile"
   echo "${NOW} ${PROG} :    ProcessDate (NULL|YYYY-MM-DD H24:MI:SS)"
   exit 99
fi

JobName="${1}"
ParameterFile="${2}"
FileSetDate="${3}"
BatchNumber="${4}"
JobHierarchyFile="${5}"
SourceSystemList="${6}"
SubjectAreaList="${7}"
ClearWorkArea="${8}"
StartingMilestone="${9}"
EndingMilestone="${10}"
DebugMode="${11}"
JobLinkStatisticChecksFile="${12}"
ResurrectLogFile="${13}"
ProcessDate="${14}"

echo "${NOW} ${PROG} JobName ${JobName}"
echo "${NOW} ${PROG} ParameterFile ${ParameterFile}"
echo "${NOW} ${PROG} FileSetDate ${FileSetDate}"
echo "${NOW} ${PROG} BatchNumber ${BatchNumber}"
echo "${NOW} ${PROG} JobHierarchyFile ${JobHierarchyFile}"
echo "${NOW} ${PROG} SourceSystemList ${SourceSystemList}"
echo "${NOW} ${PROG} SubjectAreaList ${SubjectAreaList}"
echo "${NOW} ${PROG} ClearWorkArea ${ClearWorkArea}"
echo "${NOW} ${PROG} StartingMilestone ${StartingMilestone}"
echo "${NOW} ${PROG} EndingMilestone ${EndingMilestone}"
echo "${NOW} ${PROG} DebugMode ${DebugMode}"
echo "${NOW} ${PROG} JobLinkStatisticChecksFile ${JobLinkStatisticChecksFile}"
echo "${NOW} ${PROG} ResurrectLogFile ${ResurrectLogFile}"
echo "${NOW} ${PROG} ProcessDate ${ProcessDate}"
echo

# Below looks in the parameters.ini file to determine each directory path.
UserID=`whoami`
BinFileDirectory=`cat /.dshome`/bin
LogFileDirectory=`grep -w LogFileDirectory ${ParameterFile}|cut -d "=" -f2`
TempFileDirectory=`grep -w TempFileDirectory ${ParameterFile}|cut -d "=" -f2`
CommonScriptFileDirectory=`grep -w CommonScriptFileDirectory ${ParameterFile}|cut -d "=" -f2`
CommonLogFileDirectory=`grep -w CommonLogFileDirectory ${ParameterFile}|cut -d "=" -f2`
LogFileName=${CommonLogFileDirectory}/${PROG}_${JobName}.log
TEMPBATCHNBRLOG=${TempFileDirectory}/${PROG}_${JobName}_start.log
DATASTAGEPROJECT=`grep -w DATASTAGEPROJECT ${ParameterFile}|cut -d "=" -f2`
DSSERVER=`grep -w DSSERVER ${ParameterFile}|cut -d "=" -f2`
DSUSERID=`grep -w DSUSERID ${ParameterFile}|cut -d "=" -f2`
DSPASSWORD=`grep -w DSPASSWORD ${ParameterFile}|cut -d "=" -f2`

NOW=`date`
echo "${NOW} ${PROG} UserID ${UserID}"
echo "${NOW} ${PROG} BinFileDirectory ${BinFileDirectory}"
echo "${NOW} ${PROG} LogFileDirectory ${LogFileDirectory}"
echo "${NOW} ${PROG} TempFileDirectory ${TempFileDirectory}"
echo "${NOW} ${PROG} CommonScriptFileDirectory ${CommonScriptFileDirectory}"
echo "${NOW} ${PROG} CommonLogFileDirectory ${CommonLogFileDirectory}"
echo "${NOW} ${PROG} LogFileName ${LogFileName}"
echo "${NOW} ${PROG} TEMPBATCHNBRLOG ${TEMPBATCHNBRLOG}"
echo "${NOW} ${PROG} DATASTAGEPROJECT ${DATASTAGEPROJECT}"
echo "${NOW} ${PROG} DSSERVER ${DSSERVER}"
echo "${NOW} ${PROG} DSUSERID ${DSUSERID}"
echo "${NOW} ${PROG} DSPASSWORD *Protected*"
echo

#######
####### PARAMETER BUILD Without batch number ##################################
#######

if [ "${ProcessDate}" = "NULL" ]; then
   StartTimestamp=`date '+%Y-%m-%d %H:%M:%S'`
else
   StartTimestamp="${ProcessDate}"
fi
ParamList=" -param ParameterFile=${ParameterFile}"
ParamList="${ParamList} -param ProcessDate=\"${StartTimestamp}\""
ParamList="${ParamList} -param FileSetDate=${FileSetDate}"
ParamList="${ParamList} -param JobHierarchyFile=${JobHierarchyFile}"
ParamList="${ParamList} -param SourceSystemList=${SourceSystemList}"
ParamList="${ParamList} -param SubjectAreaList=${SubjectAreaList}"
ParamList="${ParamList} -param ClearWorkArea=${ClearWorkArea}"
ParamList="${ParamList} -param StartingMilestone=${StartingMilestone}"
ParamList="${ParamList} -param EndingMilestone=${EndingMilestone}"
ParamList="${ParamList} -param DebugMode=${DebugMode}"
ParamList="${ParamList} -param JobLinkStatisticChecksFile=${JobLinkStatisticChecksFile}"
ParamList="${ParamList} -param ResurrectLogFile=${ResurrectLogFile}"

#######
####### Get Batch Number and create ETL_BATCH_AUDIT record #####################
#######

echo "${NOW} ${PROG} About to get new BATCH_NBR and insert it into ETL_BATCH_AUDIT..."
${CommonScriptFileDirectory}/comPLSQL.ksh ${ParameterFile} "IRDSN" "IRUserID" \
                                           GET_NEW_BATCH_NBR \
                                              "${JobName}" \
                                              "${StartTimestamp}" \
                                              "${UserID}" \
                                              "${SourceSystemList}" \
                                              "${SubjectAreaList}" \
                                              "${ParamList}" \
                                              "${FileSetDate}" > ${TEMPBATCHNBRLOG}
SQL_EXIT_STATUS=$?
cat ${TEMPBATCHNBRLOG}
if [ "${SQL_EXIT_STATUS}" != 0 ]; then
   NOW=`date`
   echo "${NOW} ${PROG} Failure to connect/insert into ETL_Batch_Audit table!"
   exit ${SQL_EXIT_STATUS}
fi

#######
####### Get BATCH_NBR from batch number log file ##############################
#######
BatchNumber=`grep -w BATCH_NBR ${TEMPBATCHNBRLOG}|cut -d "=" -f2`
if [ -z "${BatchNumber}" ]; then
   NOW=`date`
   echo "${NOW} ${PROG} Failure to retrieve BATCH_NBR from ${TEMPBATCHNBRLOG}"
   exit ${SQL_EXIT_STATUS}
fi

#######
####### Add batch number to list of parameters #################################
#######
ParamList="${ParamList} -param BatchNumber=${BatchNumber}"

NOW=`date`
echo
echo ${NOW} ${PROG} Parameter list: ${ParamList}
echo

#######
####### DataStage EXECUTION ####################################################
#######

NOW=`date`
echo "${NOW} ${PROG} Executing DataStage dsjob program..."

echo ${BinFileDirectory}/dsjob -server ${DSSERVER} -user ${DSUSERID} -password ${DSPASSWORD} -run -wait ${ParamList} ${DATASTAGEPROJECT} ${JobName} 2>&1 > ${LogFileName}
echo
echo "${BinFileDirectory}/dsjob -server ${DSSERVER} -user ${DSUSERID} -password ${DSPASSWORD} -run -wait ${ParamList} ${DATASTAGEPROJECT} ${JobName} 2>&1 > ${LogFileName}"
echo

eval ${BinFileDirectory}/dsjob -server ${DSSERVER} -user ${DSUSERID} -password ${DSPASSWORD} -run -wait ${ParamList} ${DATASTAGEPROJECT} ${JobName} 2>&1 >> ${LogFileName}

jobwaiting=`grep "Waiting for job..." ${LogFileName}`
if [ "${jobwaiting}" != "Waiting for job..." ]; then
   NOW=`date`
   echo ${NOW} ${PROG} "DataStage failed to start the job"
   failedstart=1
else
   NOW=`date`
   echo ${NOW} ${PROG} "DataStage successfully started the job"
   failedstart=0
fi
NOW=`date`
echo ${NOW} ${PROG} "Retrieving job information"

${BinFileDirectory}/dsjob -server ${DSSERVER} -user ${DSUSERID} -password ${DSPASSWORD} -jobinfo ${DATASTAGEPROJECT} ${JobName} >> ${LogFileName}

#######
####### CHECK STATUS ###########################################################
#######

ERROR=`grep "Job Status" ${LogFileName}`
ERROR=${ERROR##*\(}
ERROR=${ERROR%%\)*}

if [ "${failedstart}" != 0 ]; then
   NOW=`date`
   echo ${NOW} ${PROG} "The job failed to start"
   AuditStatus="FAILURE"
   Comments="MasterControl aborted"
   EXIT_STATUS=1
else
   if [ "${ERROR}" = 1 -o "${ERROR}" = 2 ]; then
      NOW=`date`
      echo ${NOW} ${PROG} "The job completed successfully"
      AuditStatus="SUCCESS"
      Comments=""
      EXIT_STATUS=0
   else
      NOW=`date`
      echo ${NOW} ${PROG} "The job aborted"
      AuditStatus="FAILURE"
      Comments="MasterControl aborted"
      EXIT_STATUS=1
   fi
fi

FailedJobCount=`grep -i FAILED ${LogFileDirectory}/${JobName}.log|wc -l|cut -b1-9`
FailedJobCount=`expr ${FailedJobCount} + 0`
echo ${NOW} ${PROG} The number of failed jobs is [${FailedJobCount}]

if [ "${FailedJobCount}" != 0 ]; then
   NOW=`date`
   echo ${NOW} ${PROG} "The job had failed processes"
   AuditStatus="FAILURE"
   Comments="MasterControl had ${FailedJobCount} failed processes"
   EXIT_STATUS=1
fi
StoppedJobStreamCount=`grep "JOB STREAM STOPPED" ${LogFileDirectory}/${JobName}.his|wc -l|cut -b1-9`
StoppedJobStreamCount=`expr ${StoppedJobStreamCount} + 0`
if [ "${StoppedJobStreamCount}" != 0 ]; then
   NOW=`date`
   echo ${NOW} ${PROG} "The job stream was STOPped or KILLed"
   AuditStatus="FAILURE"
   Comments="MasterControl job stream was STOPped or KILLed"
   EXIT_STATUS=1
fi

#######
####### AUDIT ##################################################################
#######

echo
echo "${NOW} ${PROG} About to update ETL_BATCH_AUDIT with status information..."
EndTimestamp=`date '+%Y-%m-%d %H:%M:%S'`

SQLstring="UPDATE ETL_BATCH_AUDIT A \
              SET A.END_TIMESTAMP = TO_DATE('${EndTimestamp}','YYYY-MM-DD HH24:MI:SS'), \
                         A.STATUS = '${AuditStatus}', \
                       A.COMMENTS = '${Comments}', \
                A.RUNTIMESETTINGS = '${ParamList}' \
            WHERE (A.BATCH_NBR = ${BatchNumber});"

NOW=`date`
echo ${NOW} ${PROG} Audit SQL ${SQLstring}
SQLScriptFileName=${TempFileDirectory}/${PROG}_${JobName}_end.sql
echo ${SQLstring} > ${SQLScriptFileName}

${CommonScriptFileDirectory}/comSQLPlus.ksh ${ParameterFile} IRDSN IRUserID ${SQLScriptFileName}
SQL_EXIT_STATUS=$?
if [ "${SQL_EXIT_STATUS}" != 0 ]; then
   NOW=`date`
   echo ${NOW} ${PROG} Failure to connect/update into ETL_Batch_Audit table!
   exit ${SQL_EXIT_STATUS}
fi

#######
####### EXIT ###################################################################
#######

NOW=`date`
echo ${NOW} ${PROG} Complete, exiting with status [${EXIT_STATUS}]
exit ${EXIT_STATUS}


Good stuff; saving it here to study later.
posted @ 2007-07-18 10:32 羽翼渐丰 Views (2629) | Comments (0)
 
Cannot delete a job in DataStage

A job in DataStage cannot be deleted, and trying to compile it produces the following error:
Error calling subroutine: DSR_EXECJOB (Action=1); check DataStage is set up correctly in project ETUDES
(Subroutine failed to complete successfully (30107))

+

(40503) A call to an OLE server has failed, or a runtime error occured within the ole itself.

Solution:
1 First run the command: DELETE FROM DS_JOBS WHERE NAME = 'JobName'
2 Then run DS.CHECKER. Note that running DS.CHECKER requires exclusive access to the project;
the following command shows who is still connected to the server: netstat -a | grep dsrpc
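For context, a sketch of where those commands run, assuming you issue them from the engine shell on the server (the Administrator client's Command window is the usual alternative; ETUDES is the project named in the error above, and the shell binary name varies by release, so treat this as an assumption to verify):

cd `cat /.dshome`        # DataStage engine directory
. ./dsenv                # engine environment
bin/dssh                 # engine shell; uvsh on some releases

LOGTO ETUDES
DELETE FROM DS_JOBS WHERE NAME = 'JobName';
DS.CHECKER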

posted @ 2007-07-14 14:34 羽翼渐丰 Views (3750) | Comments (1)
 
DataStage BASIC study notes

I. Introduction to BASIC

1 A BASIC program generally has the following format:
[ label ] statement [ ; statement … ] <Return>
2 About subroutines
a If a subroutine is embedded in the program, call it with GOSUB; if it is external, use CALL.
b Use RETURN to go back to the main program; if the subroutine ends before a RETURN, it returns automatically.
c If the subroutine contains ABORT, STOP, or CHAIN, control never returns to the main program.
3 Comments
Lines beginning with REM, *, !, or $* are comments.
4 Steps to create a BASIC program
Use the CREATE.FILE command to create a type 1 or type 19 DataStage file to hold the BASIC source. (Run CREATE.FILE under the DSEngine directory; it prompts you for the file name and other parameters.)

Use the DataStage Editor or another editor to write the BASIC source; the source becomes one record of the file you just created.

Use the BASIC command to compile your program, which creates a file holding the object code. The object file's name does not matter, because a program is always referenced through its source file name.

Run the program with the RUN command; debug it with the RAID command.


II. Basics
Data types, variables, and operators
Internally all data is stored as strings and is converted to a concrete type at run time.
There are three main data types: string, numeric, and unknown (null).

Strings
In BASIC source code, a string constant is a sequence of ASCII characters, so it cannot contain the 0 (null) or 10 (linefeed) characters; use CHAR(0) and CHAR(10) instead.
Numeric
Range from 10^-307 to 10^307, with at most 15 decimal digits.
Unknown
Null is represented internally by the single byte CHAR(128); it is a string and can be referenced through the system variable @NULL.STR. In BASIC programs the null value itself is referenced with the system variable @NULL.
Concatenating a string with null yields null.

Arrays
Dimensioned arrays
In a vector (one-dimensional array), the first element has index 1.
A vector can be viewed as a two-dimensional array (matrix) whose second dimension is 1.
Element zero of a vector or matrix is written vector.name(0) or matrix.name(0,0); it can be used in the MATPARSE, MATREAD, and MATWRITE statements to hold data that does not fit the dimensioned fields.

Dynamic arrays
The top level of a dynamic array is the field, delimited by field marks (F). Fields are made up of values delimited by value marks (V), and each value may in turn contain subvalues delimited by subvalue marks (S).

String comparison
Strings are compared character by character on ASCII value. If both strings can be converted to numbers, the comparison is always done on the numeric values. Comparing any value with null yields unknown. The empty string is not simply equal to 0, and a space is treated as less than 0.

String matching
X matches any character, A matches any letter, N matches any digit; a leading number gives the repeat count. A leading 0 means the following character may repeat any number of times.

0 is false; the empty string is false; null is neither true nor false.

Locks, transactions, and isolation levels
Locks
Locks come in two granularities: fine-grained (record locks) and coarse-grained (file locks).
DataStage supports the following locks:
shared record lock
update record lock
shared file lock
intent file lock (between shared and exclusive)
exclusive file lock

Transactions
Transactions can be nested.
If a child transaction starts before the parent commits or rolls back, the child becomes the active transaction while the parent continues to exist but is inactive. When the child commits or rolls back, the parent becomes active again.
Until the top-level transaction commits, all database operations of the child transactions are buffered. A read first checks the buffered data, and only reads the database file if nothing is found there.
posted @ 2007-07-12 14:46 羽翼渐丰 Views (1654) | Comments (0)
 
Calling a DataStage job from Java
Saw this on a forum; noting it down for future use:
// Build the dsjob command line, run it, and block until the job finishes.
Process p = Runtime.getRuntime().exec(
        MProperties.DS_EXE_PATH + "/dsjob -run -wait"
        + " -param lowDate=" + lowDate + " -param highDate=" + highDatePlusOne
        + " -param dsOutDir=" + MProperties.DS_OUT_DIR + " -param IDB_user=" + MProperties.IDB_USER
        + " -param IDB_pass=" + MProperties.IDB_PASS + " " + MProperties.DS_PROJECT + " " + MProperties.DS_JOB);
p.waitFor();
return p.exitValue() == 0;  // dsjob exits 0 on success

And if you have RTI, there is a better way:
The best way for this to be accomplished is by using the RTI (Real Time Integration) capabilities that we added in 7.x. This allows you to fully expose a DataStage job as a service, to be then invoked via several different mechanisms, depending on your needs....

.....the job can be exposed as a classic web service, using SOAP over HTTP...we generate and publish the WSDL for this within RTI...

....the job can be exposed via our EJB binding and called appropriately

....the job can be exposed for contact via JMS, and either passed a text message or SOAP envelope....

Not only can you simply pass job parameters via this architecture, you can actually have the entire transformation job itself (columns on the ultimate input and output links) passed and returned to your calling client.

Additionally, the entire job control API, which you may already be familiar with in its API or command line implementations, has been exposed as a set of methods available via Web Services.
In short: wrap the job as a web service, an EJB, or a JMS destination and invoke it that way; you can pass job parameters, and even have the entire transformation job itself (the columns on the ultimate input and output links) passed to and returned from the calling client.
posted @ 2007-07-10 20:13 羽翼渐丰 Views (1564) | Comments (0)
 
 