AMIS Oracle and Java Blog (technology.amis.nl)

Overview of WebLogic 12c RESTful Management Services

Inspired by a presentation given by Shukie Ganguly at the free Oracle Virtual Technology Summit in July (see here), "New APIs and Tools for Application Development in WebLogic 12c", I decided to take a look at an interesting new feature in WebLogic Server 12c: the RESTful Management Services. You can see here how to enable them. In this post I will provide an overview of my short study on the topic.

RESTful Management Services consist of two sets of resources: tenant-monitoring resources and 'wls' resources. The first is more flexible in its response format (JSON, XML, HTML) and more suitable for monitoring. With the latter you can, for example, update datasource properties and create entire servers; it does, however, only support JSON as a return format. The 'wls' resources also provide links, so you can automagically traverse the resource tree, which is very useful. I have provided a Python script to do just that at the end of this post.

Monitoring

In the past I have created various tools to remotely monitor WebLogic Server 11g. See for example http://javaoraclesoa.blogspot.nl/2012/09/monitoring-datasources-on-weblogic.html for some code to monitor datasources, http://javaoraclesoa.blogspot.nl/2012/11/soa-suite-cluster-deployments-and.html for the state of the SOA Infrastructure, and http://javaoraclesoa.blogspot.nl/2013/03/monitoring-oracle-soa-suite-11g.html for BPEL.

With the 12c RESTful Management Services this becomes a lot easier and does not require any custom code, which is of course a major improvement!

You can have the RESTful Management Services return HTML, JSON or XML by setting the Accept HTTP header (application/json or application/xml; HTML is the default). See here.

What can you monitor?

Available resources under http(s)://host:port/management/tenant-monitoring are (WLS 12.1.1):

  • servers
  • clusters
  • applications
  • datasources

You can also drill down to the level of an individual resource, for example datasources/datasourcename.
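
As a small illustration of the Accept header and resource paths described above, the following Python 2 snippet requests the JSON representation of a single datasource (host, port, credentials and the datasource name are just examples; adjust them to your own environment):

import base64
import urllib2

url = "http://localhost:7001/management/tenant-monitoring/datasources/MyDataSource"
request = urllib2.Request(url)
# ask for JSON instead of the default HTML
request.add_header("Accept", "application/json")
request.add_header("Authorization", "Basic " + base64.b64encode("weblogic:Welcome01"))
print urllib2.urlopen(request).read()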

SOA Suite

The tenant-monitoring resources of the RESTful Management Services are not specific to SOA Suite. They do not allow you to obtain much information about the inner workings of applications such as the SOA infrastructure application or the BPEL process manager, so my SOA infrastructure monitoring tool and BPEL process state monitoring tool could still be useful. You can potentially replace this functionality with, for example, Jolokia. See below.

Monitoring a lot of resources

Because the Management Services allow monitoring of many resources, they would be ideal to use in a monitoring tool like Nagios. Mark Otting beat me to this, however: http://www.qualogy.com/monitoring-weblogic-12c-with-nagios-and-rest/.

The RESTful Management Services provide a specific, limited set of resources which you can monitor. There is also an alternative to the RESTful Management Services for monitoring WebLogic Server (and other application servers): Jolokia. See here. One of the nice things about Jolokia is that it allows you to access MBeans directly, so you are not limited to a fixed set of available resources. Directly accessing MBeans is very powerful (and potentially dangerous!). It could, for example, allow you to obtain the SOA infrastructure state and list deployed composites.

Management

The RESTful Management Services do not only provide monitoring capabilities but also editable resources:
http://docs.oracle.com/middleware/1213/wls/WLRMR/resources.htm#WLRMR471. These resources can be accessed via a URL like http(s)://host:port/management/wls/{version}/path, for example http://localhost:7001/management/wls/latest/. The resources only reply with JSON (Accept: application/json) and provide links entries so you can see the parent and children of a resource. With the POST, PUT and DELETE HTTP verbs you can update, create or remove resources, and with GET and OPTIONS you can obtain information.

Deploying without dependencies (just curl)

An interesting use case is command-line deployment without dependencies. This is an example given in the Oracle documentation (see here). You could, for example, use a curl command (or any other command-line HTTP client) to deploy an EAR file without needing Java libraries or WLST/Ant/Maven scripts. There is also a blog post about this here.

Walking the resource tree

In contrast to the tenant-monitoring resources, the management resources allow traversing the resource tree. The response to an HTTP GET request contains a links element, which contains parent and child entries. When an HTTP GET is not allowed or the links element does not exist, you cannot go any further down the tree. In order to display the available resources on your WebLogic Server I wrote a small Python script.

 import json  
 import httplib  
 import base64  
 import string  
 from urlparse import urlparse  
   
 WLS_HOST = "localhost"  
 WLS_PORT = "7101"  
 WLS_USERNAME = "weblogic"  
 WLS_PASSWORD = "Welcome01"  
   
 def do_http_request(host,port,url,verb,accept,username,password,body):  
   # from http://mozgovipc.blogspot.nl/2012/06/python-http-basic-authentication-with.html  
   # base64 encode the username and password  
   auth = string.strip(base64.encodestring(username + ':' + password))  
   service = httplib.HTTP(host,port)  
     
   # write your headers  
   service.putrequest(verb, url)  
   service.putheader("Host", host)  
   service.putheader("User-Agent", "Python http auth")  
   service.putheader("Content-type", "text/html; charset=\"UTF-8\"")  
   # write the Authorization header like: 'Basic base64encode(username + ':' + password)  
   service.putheader("Authorization", "Basic %s" % auth)  
   service.putheader("Accept",accept)   
   service.endheaders()  
   service.send(body)  
   # get the response  
   statuscode, statusmessage, header = service.getreply()  
   #print "Headers: ", header  
   res = service.getfile().read()  
   #print 'Content: ', res  
   return statuscode,statusmessage,header,res  
   
 def do_wls_http_get(url,verb):  
   return do_http_request(WLS_HOST,WLS_PORT,url,verb,"application/json",WLS_USERNAME,WLS_PASSWORD,"")  
   
 def get_links(body):  
   uris = []  
   json_obj = {}  
   json_obj = json.loads(body)  
   if json_obj.has_key("links"):  
     for link in sorted(json_obj["links"]):  
       if (link["rel"] != "parent"):  
         uri = link["uri"]  
         uriparsed = urlparse(uri)  
         uris.append(uriparsed.path)  
   return uris     
        
 def get_links_recursive(body):
   # recursively print every reachable child resource
   links = get_links(body)
   for link in links:
     statuscode,statusmessage,header,res = do_wls_http_get(link,"GET")
     if statuscode==200:
       print link
       get_links_recursive(res)
       
 statuscode,statusmessage,header,res= do_wls_http_get("/management/wls/latest/","GET")  
 if statuscode != 200:  
   print "HTTP statuscode: "+str(statuscode)  
   print "Have you enabled RESTful Management Services?"  
 else:  
   get_links_recursive(res)

The output of this script on a WebLogic 12.1.3 server contains information on all datasources, application deployments, servers and jobs. You can use it, for example, to compare two environments for the presence of resources. The script is easily expanded to also include the configuration of individual resources. This way you can easily compare environments and see if you have missed a specific configuration setting. Of course, only resources which can be accessed by the RESTful Management Services are displayed: the absence of, for example, a datasource or application deployment can easily be detected, but the absence of a credential store or JMS queue will not be detected this way. The links are processed in sorted order to help with comparing. You can also use this script to compare WebLogic Server versions and see which new resources Oracle has added since the last release.
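
If you want to compare two environments, a quick way (under the assumption that you have saved the output of the script for each server to a text file) is a small diff script like the sketch below; the file names are of course just examples:

with open("server_dev.txt") as f:
  dev = set(line.strip() for line in f if line.strip())
with open("server_test.txt") as f:
  test = set(line.strip() for line in f if line.strip())

print "Resources only in dev:"
for path in sorted(dev - test):
  print "  " + path
print "Resources only in test:"
for path in sorted(test - dev):
  print "  " + path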

References

Deploying applications remotely with WebLogic REST Management Interface
http://buttso.blogspot.nl/2015/04/deploying-applications-remotely-with.html

Virtual Technology Summit
http://www.oracle.com/technetwork/community/developer-day/index.html

Enable RESTful Management Services
http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/domainconfig/EnableRESTfulManagementServices.html

Jolokia
https://jolokia.org/

Monitoring WebLogic 12c with Nagios and REST
http://www.qualogy.com/monitoring-weblogic-12c-with-nagios-and-rest/

Using REST Resource Methods to Manage WebLogic Server
http://docs.oracle.com/middleware/1213/wls/WLRMR/resources.htm#WLRMR471

RESTful Management Interface Reference for Oracle WebLogic Server
http://docs.oracle.com/middleware/1213/wls/WLRMR/management_wls_version_deployments_application.htm#weblogic_management_rest_wls_resources_deployment_applicationsresource_deployapplication_286308891

Use DB Vault to protect password strength policy

Suppose your organization wants to enforce a security policy on database password strength. The DBAs have implemented a password strength verification function in PL/SQL, such as the Oracle-supplied ora12c_strong_verify_function, in the DEFAULT profile of the database. At first there seems to be no way to get around it:

Database account u4 is created:

 

[Screenshot: creating database account U4]

 

U4 logs in and tries to keep the new password simple:

 

[Screenshot: U4 cannot set a simple password]

 

That password verification function got in the way. U4 searches for ways around this block and stumbles upon the blog post by Steve Karam titled Password Verification Security Loophole, in which Steve demonstrates that it is possible to enter a weak password when creating a user or altering a password, even when a database password verify PL/SQL function is enforced. The way to accomplish this is to use the special IDENTIFIED BY VALUES clause when running the ALTER USER command:

 

[Screenshot: setting a weak password with ALTER USER ... IDENTIFIED BY VALUES]

 

The reason for this behaviour of the Oracle database is that the IDENTIFIED BY VALUES clause is followed by a hash-encoded password string which cannot (easily) be decoded to the original plaintext password. The password strength rules only apply to the original plaintext password value. The only way to crack the hash would be to feed the hash algorithm candidate passwords and see if the hashed value matches the encoded password string that is known. In the case of the ALTER USER command that would be unfeasible, because where would the Oracle database have to stop trying? The number of candidate passwords is limitless.

Until Oracle decides to disable this feature that allows a pre-cooked-at-home encoded password string to be used, there seems to be no way to stop users from using the IDENTIFIED BY VALUES clause when they have the privilege to use the ALTER USER command. Or is there?

In fact there is a way, by doing some anti-featuring of our own. It is possible with one of my favorite EE options, Database Vault (a separately licensed product for Oracle Database Enterprise Edition), because it allows us to create our own rules on commands such as ALTER USER, on top of the system privileges we would normally need to use the command. With the Database Vault rules below enabled, we see the following when someone tries to use the IDENTIFIED BY VALUES clause:

[Screenshot: the IDENTIFIED BY VALUES attempt is blocked by Database Vault]

As you can see, the IDENTIFIED BY VALUES clause can no longer be used.
The Database Vault setup script I used is given below and should be run by a database account with at least the DV_ADMIN role enabled. Note that the individual DV rules are first combined into a DV rule set, and this rule set is then used as the command rule for ALTER USER, CREATE USER and CHANGE PASSWORD. Rules in a rule set are evaluated using either ALL TRUE or ANY TRUE logic. In my case I needed a mix, so I created one DV rule with two checks combined using ANY TRUE, and a second DV rule that checks the SQL text. These two DV rules were then put in the DV rule set using ALL TRUE evaluation logic. The 'Is user allowed or modifying own password' rule is in fact a copy of an Oracle-supplied rule: it checks whether the user has the DV_ACCTMGR role OR whether the user is trying to change his/her own password.

-- create DV RULES
BEGIN
  DVSYS.DBMS_MACADM.CREATE_RULE (
    rule_name   => 'Contains no identified by values clause',
    rule_expr   => 'UPPER(DVSYS.DV_SQL_TEXT) not like ''%IDENTIFIED BY VALUES%''');

  DVSYS.DBMS_MACADM.CREATE_RULE (
    rule_name   => 'Is user allowed or modifying own password',
    rule_expr   => 'DVSYS.DBMS_MACADM.IS_ALTER_USER_ALLOW_VARCHAR(''''''||dvsys.dv_login_user||'''''') = ''Y'' OR DVSYS.dv_login_user = dvsys.dv_dict_obj_name');
END;
/

-- CREATE DV RULESET
BEGIN
  DVSYS.DBMS_MACADM.CREATE_RULE_SET (
    rule_set_name     => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    description       => 'rule set for (Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    enabled           => 'Y',
    eval_options      => '1',
    audit_options     => '3',
    fail_options      => '1',
    fail_message      => 'IDENTIFIED BY VALUES clause not allowed',
    fail_code         => '-20600',
    handler_options   => '0',
    handler           => NULL);
END;
/

-- ADD RULES TO RULESET
BEGIN
  DVSYS.DBMS_MACADM.ADD_RULE_TO_RULE_SET (
    rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    rule_name       => 'Contains no identified by values clause',
    rule_order      => '1',
    enabled         => 'Y');
  DVSYS.DBMS_MACADM.ADD_RULE_TO_RULE_SET (
    rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    rule_name       => 'Is user allowed or modifying own password',
    rule_order      => '1',
    enabled         => 'Y');
END;
/

-- UPDATE COMMAND RULE
BEGIN
  DVSYS.DBMS_MACADM.UPDATE_COMMAND_RULE (
    command         => 'CREATE USER',
    rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    object_owner    => DBMS_ASSERT.ENQUOTE_NAME ('%', FALSE),
    object_name     => '%',
    enabled         => 'Y');
END;
/

BEGIN
  DVSYS.DBMS_MACADM.UPDATE_COMMAND_RULE (
    command         => 'ALTER USER',
    rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    object_owner    => DBMS_ASSERT.ENQUOTE_NAME ('%', FALSE),
    object_name     => '%',
    enabled         => 'Y');
END;
/

BEGIN
  DVSYS.DBMS_MACADM.UPDATE_COMMAND_RULE (
    command         => 'CHANGE PASSWORD',
    rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    object_owner    => DBMS_ASSERT.ENQUOTE_NAME ('%', FALSE),
    object_name     => '%',
    enabled         => 'Y');
END;
/

 

 

NOTES:

  • the password command in SQL*Plus also seems to use the IDENTIFIED BY VALUES clause, so this DV setup disables that command too

[Screenshot: the SQL*Plus password command is also blocked]

  • to find out the hash-encoded string to be used in the IDENTIFIED BY VALUES clause, one can simply create a user in a home-grown database (preferably of the same version as the victim database) and afterwards retrieve the spare4 column value from the SYS.USER$ table for that user. Note that the username itself is used in the Oracle algorithm that calculates the hash value, so the hash value only works for a user with the same name.

Vacation is just like working… (Vakantie is net werken…)

The transformation of the knowledge worker into a holidaymaker is visible everywhere at the moment. I find it fascinating to see how people show similar patterns in different situations. What do we see, and what can we learn from these kinds of lateral connections? For me: energy, flow and fun. Vacation is just like working.

For the holidaymaker, the suit makes way for bermuda shorts, the polished shoes for flip-flops and the tie for a cap and sunglasses. The management team of the vacation also has a different composition: one with partner and children, with a diversity of interests and of ways of looking at the world.

The last days before the vacation are always fascinating to me. Pulling out all the stops to get the important things done, to hand things over, and to be able to leave with a good feeling. Once that is arranged, the vacation can really start. The last bits of preparation and then "really letting go". It is usually a step into another world, a journey with unexpected events, full of anticipation. The arrival at the destination(s), finding the way, sometimes literally, virtually always figuratively.

I see success, energy, flow and fun as the core factors of every movement you want to set in motion. Success determines the direction: when is it good, or accomplished? Energy provides the strength to actually get something done. Flow provides propulsion: not a one-off burst, but a permanent driving force. And fun is visible and tangible, the sign that we really want to work towards that success, not merely because circumstances force us to.

If you approach things positively, everything becomes a beautiful journey of discovery. By choosing the right, optimistic mindset during the vacation, I prevent irritations that could spoil my holiday feeling. Keeping an eye on the goal, i.e. the success, comes first: enjoying the vacation, being together, resting, clearing the mind, discovering new surroundings and getting to know people. That gives the energy to deal with setbacks, whether it is traffic jams along the way, a destination that turns out different from what the brochure suggested, surroundings that are often noisier than hoped for, or the often small irritations within your own family, which turns out to be a bit less harmonious than those fantastic families in all kinds of beautiful television series.

A positive view of things provides resilience and a sense of perspective, and just that little bit of distance needed to cope with the circumstances and to free up energy to overcome obstacles. And also to keep that up and achieve the result: smiling faces, lots of fun and new experiences. Results also in the form of lessons, which I then take back with me to work, so I can work with energy, flow and fun on new business successes.

I wish you a wonderful vacation!

Managing identity information from multiple sources with Oracle Identity Manager, Part 2

Consolidating identity information in Oracle Identity Manager

In part 1 of this article we saw several options for managing identities in an environment where multiple sources of identity information are used. In this part, you'll find more information on how to set up Oracle Identity Manager in a scenario like the one described in the Swift&Safe Inc. use case.

First of all, the identities based on the input from CUST1 will be placed in dedicated organizations in Oracle Identity Manager, so these identities and their authorizations can be managed separately. As for HR1 and HR2, these systems use their own internal identifiers for user records. These identifiers must be provided to Oracle Identity Manager as user attributes in the feeds from HR1 and HR2. In Oracle Identity Manager a UDF (User Defined Field) must be created for the identifier attribute (for instance, the personnel number) from HR1, and a separate UDF for HR2. During reconciliation, Oracle Identity Manager can match users in HR1 and HR2 by comparing the unique identifiers in the feeds with the UDFs in Oracle Identity Manager. You can add an attribute to the Oracle Identity Manager user by creating a sandbox and opening the user definition in the Identity System Administration interface. Depending on the version of Oracle Identity Manager you can find it in the 'Form Designer' under the Configuration section or 'User' under System Entities.

Figure 1: Opening the user field definition.

Figure 2: Adding a user field.

Next, additional UDFs can be created to store source-specific information from the HR1 or HR2 system, for example about the person's manager and department, and any other information that needs to be present in Oracle Identity Manager. The additional UDFs can then be used in request, approval and review procedures, and as attributes of accounts that are provisioned to target systems.

After this has been set up, measures must be implemented to prevent the creation of multiple identities for individual persons. The best way to do this is by adding an event handler in the orchestration that deals with all creations (no matter the source). The logic in the event handler can also be implemented in the connectors, however from an operational standpoint it’s easier to implement the logic once, in a central location. The event handler will add a check in the workflow by taking a number of attributes (first name, last name, birth date, etc.) and trying to find a match in Oracle Identity Manager on some or all of the attributes, skipping identities in the CUST1 organizations. If there is a match, the create event will terminate and a notification is sent to someone in your organization who can verify that the create event indeed concerns someone who already has an identity in Oracle Identity Manager. Once verified, the unique identifier of the second HR registration must be added to the existing identity, so the next time the source is reconciled the user is linked to the existing identity based on the unique identifier of that source.

Figure 3: Assigning an event handler to an action using the Design Console.
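
To make the matching step more concrete, the sketch below shows the decision logic described above in plain Python, for brevity only; a real implementation would be a Java event handler using the OIM APIs, and the attribute names and return values here are purely illustrative:

MATCH_ATTRIBUTES = ["first_name", "last_name", "birth_date"]

def find_existing_identity(new_record, existing_identities):
  # return an existing identity that matches the new HR record, if any
  for identity in existing_identities:
    # customer identities live in the CUST1 organizations and are skipped
    if identity["organization"].startswith("CUST1"):
      continue
    if all(identity.get(attr) == new_record.get(attr) for attr in MATCH_ATTRIBUTES):
      return identity
  return None

def handle_create_event(new_record, existing_identities):
  match = find_existing_identity(new_record, existing_identities)
  if match is None:
    # no possible duplicate: let the orchestration create the identity as usual
    return "CREATE_IDENTITY"
  # possible duplicate: terminate the create event and notify someone to verify;
  # once verified, the unique identifier of the second HR source is added to the
  # existing identity (its UDF) so the next reconciliation run links to it
  return "TERMINATE_AND_NOTIFY"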

The reason people should be involved when there is a possible match is to make sure that it is in fact the same person. If you have enough information in the feeds from HR1 and HR2, and are able to apply sufficient logic in the event handler, you can consider triggering automated actions instead of requiring user input. And if the person has different managers in the HR sources, those managers need to be informed of the situation. Since the manager plays an important role in Oracle Identity Manager, and identities have only one 'manager' field, it can happen that tasks for a manager get routed to the wrong manager. If this happens often, it may be wise to adjust approval and certification workflows to look for manager information in the source-specific UDFs of the user instead of the regular Oracle Identity Manager 'manager' field, or to configure workflows not to use the manager but to select an approver or certifier based on the organization or other attributes. You can also modify the user creation and update process to choose manager information from the HR1 or HR2 feed to fill the regular Oracle Identity Manager 'manager' field.

An event handler should also be added to the orchestration that is involved when someone leaves the company. A check must be done to see if the identity is linked to multiple sources. If so, the identity should not be removed or disabled; only the link to the trusted source that was reconciled must be removed.

Useful links

Configuring User Defined Fields (UDF): http://docs.oracle.com/cd/E27559_01/admin.1112/e27149/customattr.htm#OMADM4803

Developing Event Handlers: https://docs.oracle.com/cd/E52734_01/oim/OMDEV/oper.htm#OMDEV3085

Managing Notification Service: http://docs.oracle.com/cd/E27559_01/admin.1112/e27149/notification.htm#OMADM873

Managing Connector Lifecycle: http://docs.oracle.com/cd/E27559_01/admin.1112/e27149/conn_mgmt.htm#OMADM4295

Developer’s Guide for Oracle Identity Manager: http://docs.oracle.com/cd/E27559_01/dev.1112/e27150/toc.htm

Oracle Identity Manager Identity Connectors Documentation: https://docs.oracle.com/cd/E22999_01/index.htm

Oracle Identity Manager – Development: https://docs.oracle.com/cd/E52734_01/oim/oim-develop.htm

 

Synchronizing databases through BPEL services

Introduction

This blog post is about how to synchronize two databases through BPEL, focusing on transaction, rollback and fault handling.

During a project, I encountered a situation where we wanted to migrate from an old database to a new one. However, in order to gradually move external systems from the old database to the new one, it was required that both databases be kept in sync for a limited amount of time. Apart from the obvious database tools, for example Oracle GoldenGate, this can also be done through the service layer, and that is what this article is about. I will explain how I did it, with a strong focus on fault handling, since that is the most complicated part of the deal. In this case, since keeping things in sync is what we are aiming for, a rollback needs to be performed on one database when the other fails to process the update.

One of the requirements is that it should be easy to throw the synchronization code away, as it has no place in our future plans. Another requirement is that the service layer should return faults in a decent manner.

Preparation

In order to enable out-of-the-box rollback functionality, make sure that the data sources connecting to both databases are XA enabled. As there is plenty of information about this subject, I will not get into detail about it in this blog.

Now we will be developing two services:

  • SalesOrderBusinessService: a BPEL process that receives messages from a BPM process and forwards them to our integration service
  • UpdateSalesOrderIntegrationService: a BPEL process that receives messages from SalesOrderBusinessService and updates two databases through adapters

We need to make sure that both services have a fault specified in their wsdl operation in order to return the recoverable fault.


<wsdl:message name="UpdateSalesOrderRequestMessage">
  <wsdl:part name="UpdateSalesOrderRequest" element="cdm:UpdateSalesOrderEBM"/>
</wsdl:message>

<wsdl:message name="UpdateSalesOrderResponseMessage">
  <wsdl:part name="UpdateSalesOrderResponse" element="hdr:ServiceResult"/>
</wsdl:message>

<wsdl:message name="UpdateSalesOrderFaultMessage">
  <wsdl:part name="UpdateSalesOrderFault" element="hdr:ErrorMessages"/>
</wsdl:message>

<wsdl:portType name="SalesOrderBusinessService_ptt">
  <wsdl:operation name="updateSalesOrder">
    <wsdl:input message="tns:UpdateSalesOrderRequestMessage"/>
    <wsdl:output message="tns:UpdateSalesOrderResponseMessage"/>
    <wsdl:fault name="TechnicalFault" message="tns:UpdateSalesOrderFaultMessage"/>
  </wsdl:operation>
</wsdl:portType>

Development

Once the data sources and wsdl definitions are in place, we can start developing our BPEL services. Let’s start with UpdateSalesOrderIntegrationService. It will be a SOA composite, containing a BPEL process, a web service and two database adapters. In the end it should look like this:

[Screenshot: the UpdateSalesOrderIntegrationService composite]

 

While we can create the database adapters with default settings, we have to make one adjustment to the BPEL process: the transaction setting has to be changed from "required" to "requiresNew". See the picture below:

[Screenshot: creating the BPEL process with the transaction set to requiresNew]

The UpdateSalesOrder BPEL process will first update the new database and, if that update is successful, the old database too. This can easily be achieved when the database procedure returns, for example, OK or NOK (with a business reason) to let us know about the processing result. If the update in the old database is not successful, however, we need to throw a fault to roll back the update in the first database. This is out-of-the-box functionality, but we need to be aware that the rollback will only take place when the entire transaction fails. This means that we cannot catch any faults in this BPEL process, because then it would be considered a successful transaction. This is also why we set the transaction property to "requiresNew": in SalesOrderBusinessService we do want to catch faults, but if UpdateSalesOrderIntegrationService were part of the same transaction, the transaction would still be considered successful and we would not get our rollback. In the end, the BPEL process should look something like this, between the "receive" and "reply" activities:

 

[Screenshot: the UpdateSalesOrder BPEL process between the receive and reply activities]

The throw activity looks as follows; we can assign either error information from the database procedure or our own information to the faultVariable:

[Screenshot: the throw activity]

The next step is to create SalesOrderBusinessService. The composite should look like this and we can keep the transaction property for the BPEL process at “required”:

[Screenshot: the SalesOrderBusinessService composite]

Our BPEL process will look like this:

[Screenshot: the SalesOrderBusinessService BPEL process]

As you can see, the main flow is very basic and we do not need to do anything out of the ordinary here. The interesting part is the Catch, where the TechnicalFault coming from the integration service is handled. In this case, we can simply assign the fault message to the fault message of the business service and return the fault to the requestor. The requestor can then, for example, re-send the message once the problem in the old database has been resolved. If there is a business problem (NOK) in the new database, it should be handled as a business problem and no SOAP fault will be returned. Should there be any other technical fault, such as a database being down, the CatchAll will handle it as usual.

[Screenshot: the Catch branch handling the TechnicalFault]

That's it, we're done. Once the old database can be shut down, it will be fairly easy to remove the code: just throw away the first "CheckResult" component and the database adapter from UpdateSalesOrderIntegrationService, as well as the Catch activity in the business service.

Keep in mind the most important parts of the deal:

  1. XA data sources are required
  2. Integration Service should have its transaction property at “requiresNew”
  3. Integration Service cannot have any fault handling
  4. Business Service should handle specific faults from the Integration Service
  5. Make sure that the temporary code can be easily removed

Managing identity information from multiple sources with Oracle Identity Manager, Part 1

When you are implementing Oracle Identity Manager to manage the identities within your organization, you may have to use multiple sources of identity information. For instance, there might be different departments with their own HR systems, and there might be separate sources for customers or business partners. In this article I'll discuss four options for managing multiple sources and preventing issues such as duplicate identities. I will also present a use case to explain how to configure Oracle Identity Manager to use multiple sources.

Use Case

Swift&Safe Inc. is an organization that specializes in logistics. Swift&Safe Inc. uses Oracle Identity Manager to provide employees and customers with appropriate access to company resources. They use two different HR systems: one for employees who work at the office (mostly administrative staff, but also some who work in the logistics processes), and one for employees on the road. Let's call these systems HR1 and HR2; both will be used by Oracle Identity Manager for importing identities. In addition, the company uses a third system (CUST1) to register customers, and Oracle Identity Manager will also import identities from this system.

Figure 1: Swift&Safe Inc.

In this case people can have several positions within the company concurrently and can therefore exist in both HR1 and HR2, but Swift&Safe Inc. only allows one identity in Oracle Identity Manager per employee, so that any entitlements a particular person has can be checked against segregation of duties (SoD) policies. In addition, an employee can also be a customer and in that case needs a separate customer identity because access to customer facing resources is managed separately.

Swift&Safe Inc. is working on connecting Oracle Identity Manager to these three sources so employees and customers will have the correct identities in Oracle Identity Manager.

How does it work?

First I’ll tell you a couple of things you need to know about how the importing of identity information works in Oracle Identity Manager. After that, we can look into possible implementation options.

Source systems are integrated with Oracle Identity Manager by means of connectors. A connector is installed for every source and holds information about the format of the data in the source system (metadata), and a mapping table specifying which attributes of entries in the source correspond to which attributes of an identity in Oracle Identity Manager. The metadata and mapping table tell Oracle Identity Manager how to interpret the flow of data coming from a source, so Oracle Identity Manager can build identities with the provided information.

Figure 2: Example of attribute mapping.

Oracle Identity Manager uses its reconciliation engine to handle the process of importing information. Reconciliation can be done in trusted mode and target mode. In trusted mode the imported identity information is used to create, update and delete identities in Oracle Identity Manager. In target mode, the imported data is regarded as information about accounts that are present in the source system. These accounts are assigned to identities in Oracle Identity Manager.

The reconciliation engine first uses the information of an entry in the source to try to match the entry to an identity in Oracle Identity Manager, based on matching rules. Depending on the result of this matching process, an action is then assigned to handle the imported entry, based on action rules. The matching and action rules are defined at connector level, so they are specific to each source. The entry and assigned action (for example "create identity") are stored in an event that is placed in the event queue. Items in this queue are then processed in so-called orchestrations, which are workflows that take care of the job at hand.

Figure 3: Action rules define an action for each type of matching result.

Implementation options

  1. Integrating HR sources

[Diagram: integrating the HR sources]

One way to prevent issues is to make sure only one system is authoritative for the lifecycle of identities. A trusted reconciliation is set up with this source. Additional target reconciliations can be set up with any number of sources to augment the Oracle Identity Manager identities with additional attributes that are not present in the trusted source. In the case of Swift&Safe Inc. this option requires the consolidation of identity information at the HR system level, because information from all three systems must be present in the trusted source defined in Oracle Identity Manager.

  2. Using a staging area

[Diagram: using a staging area]

This option involves setting up a system that acts as a staging area between the HR sources and Oracle Identity Manager. This may be a database or directory where information from multiple sources is combined (and maybe scrubbed, enriched or anonymized) in order to create a single trusted source for Oracle Identity Manager. In some situations this may be a good option because of the complexity of the data, the amount of change in the metadata, the skill set of the support team or the responsibility for data sanitation. But it may not be technically possible, or it may be too costly, to maintain an extra system.

  3. Allowing multiple identities per person

[Diagram: allowing multiple identities per person]

Technically you can use multiple trusted sources in Oracle Identity Manager, and these sources will each be authoritative for the lifecycle of 'their' identities. In this case multiple identities will be created for a person if this person is registered in more than one trusted source, and this results in multiple accounts on target systems. This can be useful for keeping accounts related to different job functions separated. However, having multiple accounts on the same company resources can also be confusing to end users while performing their daily duties and when they review information in request or review processes. Or maybe only one identity will be created and the creation of subsequent identities for the same person will fail, depending on the configuration of Oracle Identity Manager, for instance regarding the uniqueness of attributes.

  4. Consolidating identity information in Oracle Identity Manager

[Diagram: consolidating identity information in Oracle Identity Manager]

This is the option that Swift&Safe Inc. will be implementing: using Oracle Identity Manager itself to consolidate the identity information. They will use the capabilities of Oracle Identity Manager to combine identity information and centrally manage accounts and access rights. In part 2 of this article we'll take a look at the basic configuration that is needed to achieve this.

Conclusion

There are several options for managing identities in an environment where multiple sources for identity information are used. Which one fits best in your organization depends on several factors such as technical feasibility, costs, maintainability and reliability, and data quality responsibility. Swift&Safe Inc. decided on option 4 because they need to keep their HR systems separated and do not want the burden of maintaining an extra system needed for a staging area. Oracle Identity Manager provides them with an excellent option by providing a central platform with configurable connectors, reconciliation options and workflows which allows them to accommodate the flow of identity information. In part 2 of this article you’ll find more information on how to set up Oracle Identity Manager in this scenario.

 

 

Subversion revision of a deployed BPM/SOA composite?

So there you are: a production error was reported … in your code (of all places) … but no one knows what release the code came from?


Wouldn’t it be great if it was easy to link deployed composites to their Subversion location and revision?

This article shows an approach based on 'Subversion keyword expansion', illustrated with the following steps:

  1. Add properties to the composite.xml file
  2. Set Subversion keywords for composite.xml
  3. Query the deployed composite with wlst
  4. Solve the limitations in Subversion keyword expansion

Let’s get started:

Step 1: add properties to the composite.xml

In composite.xml, add the below lines after the properties for productVersion and compositeID that JDeveloper already added:


<property name="productVersion" type="xs:string" many="false">12.1.3.0.0</property>
<property name="compositeID" type="xs:string" many="false">38e4c940-31e8-46ca-90d7-1e56639f6880</property>
<property name="subVersionURL" type="xs:string" many="false">$URL$</property>
<property name="subVersionRevision" type="xs:string" many="false">$Rev$</property>

Step 2: set Subversion keywords for composite.xml

On composite.xml, add a Subversion svn:keywords property containing the keywords 'Revision' and 'URL'.

This can be done using TortoiseSVN:

– check out your project from Subversion

– right-click composite.xml, go to TortoiseSVN -> Properties

[Screenshot: TortoiseSVN properties dialog]

– Click on ‘New’ and then ‘Keywords’:

[Screenshot: adding a new keywords property]

– Select the keywords 'Revision' and 'URL':

[Screenshot: selecting the Revision and URL keywords]

– with result:

[Screenshot: the resulting svn:keywords property on composite.xml]

 

… and you’re done.

The same can be achieved using the command line:

   svn propset svn:keywords "Revision URL" composite.xml

 

After this is done, Subversion will expand the svn keywords $URL$ and $Rev$ whenever the file is checked out.

Now, commit the composite.xml into Subversion and then check it out again. Examine the properties, which should now look like this:


<property name="productVersion" type="xs:string" many="false">12.1.3.0.0</property>
<property name="compositeID" type="xs:string" many="false">38e4c940-31e8-46ca-90d7-1e56639f6880</property>
<property name="subVersionURL" type="xs:string" many="false">$URL: svn://192.168.178.50/LGO/sandbox/HelloKeywordApplication/HelloKeyword/SOA/composite.xml $</property>
<property name="subVersionRevision" type="xs:string" many="false">$Rev: 25 $</property>

 

Now, re-deploy the composite with the new composite.xml

Step 3: query the deployed composite with wlst

After checking out the above code from Subversion and deploying it, the properties can be queried using the following wlst script:


# function that returns mbean(s) of all composites
# borrowed from Edwin Biemond and changed

def findMBeans(prefix):
  # get a listing of everything in the current directory
  mydirs = ls(returnMap='true');

  # we're going to use a regular expression for our test
  pattern = java.util.regex.Pattern.compile(str(prefix) + str('.*name=*') + str('.*$'));

  # loop through the listing
  beanList = [];
  for mydir in mydirs:
    x = java.lang.String(mydir);
    matcher = pattern.matcher(x);
    # if we find a match, add it to the found list
    while matcher.find():
      beanList.append(x);

  return beanList;

print 'starting the script ....'
username = 'weblogic'
password = 'welcome01'
url='t3://localhost:7001'

connect(username,password,url)

custom();
cd('oracle.soa.config');

#Note the , at the end of the string, so components are not returned...
composites = findMBeans('oracle.soa.config:partition=default,j2eeType=SCAComposite,');

for composite in composites:

  cd( composite );

  properties = mbs.getAttribute(ObjectName(composite), 'Properties');

  print 'Composite : ' + mbs.getAttribute(ObjectName(composite), 'Name');

  for property in properties:
    print '- property name/value : ' + property.get('name') + ' / ' + property.get('value');

  print '----------';
  print

  cd('..');

disconnect();

 

The output of the script is as follows (beginning omitted):

Composite : HelloKeyword [1.0]
- property name/value : productVersion / 12.1.3.0.0
----------
- property name/value : subVersionRevision / $Rev: 25 $
----------
- property name/value : subVersionURL / $URL: svn://192.168.178.50/LGO/sandbox/HelloKeywordApplication/HelloKeyword/SOA/composite.xml $
----------
- property name/value : compositeID / 38e4c940-31e8-46ca-90d7-1e56639f6880
----------

Step 4: Solve the limitations in Subversion keyword expansion

Note that the revision number that is displayed is the revision number of the composite.xml file. THIS IS NOT THE CHECKED OUT REVISION NUMBER, but it is THE REVISION NUMBER OF WHEN THE FILE COMPOSITE.XML WAS LAST CHANGED.

The two measures below will make your composite really traceable:

  1. Composites that are released will be first tagged in Subversion
  2. A property ReleaseLabel will be added and release labels will only be used once

So, add a property like below in the composite.xml:


<property name="ReleaseLabel" type="xs:string" many="false">@ReleaseLabelNotSet@</property>

This property can then be set by the script that checks out a release from Subversion (e.g. by an ant search/replace…)

Note that this property is NOT a Subversion keyword, so giving this property a value is something that has to be done explicitly by the script that is used for building a release; a minimal sketch of such a step is shown below.
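
As an illustration, a release-build step that stamps the label could look like the following Python sketch (the ant search/replace mentioned above would achieve the same; the file path and default label are just examples):

import sys

composite_file = "HelloKeyword/SOA/composite.xml"
release_label = sys.argv[1] if len(sys.argv) > 1 else "REL_2015_07_31"

# read composite.xml, replace the placeholder and write the file back
f = open(composite_file)
content = f.read()
f.close()

f = open(composite_file, "w")
f.write(content.replace("@ReleaseLabelNotSet@", release_label))
f.close()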

An additional benefit is that, with the default value @ReleaseLabelNotSet@, it is immediately clear when a composite that was not officially released has been deployed.

Querying of this property works with the same wlst script.

 

Note: the wlst script and properties have been tested with SOA Suite 11.1.1.6, 11.1.1.7 and 12.1.3.

Sonatype Nexus: Delete artifacts based on a selection

Sonatype Nexus provides several mechanisms to remove artifacts from the repository. You can schedule a job to keep only a specified number of the latest releases (see here). You can also specifically remove a single artifact or an entire group using the API (see here). Suppose, though, that you want to make a selection: I only want to delete artifacts from before a certain date with a specified groupId. In this article I provide a Python 2.7 script which allows you to do just that.

The script has been created for my specific sample situation; yours might differ. For example, I have only used the Releases repository and no snapshot versions. First check whether the artifacts selected based on your criteria are the ones you expect before actually performing the deletion. If they differ, it is easy to alter the script to suit your particular needs.

You can download the NetBeans 8.0.2 project containing the code of the script here. I’ve used the NetBeans Python plugin you can find here. Also I have not used any third party Python libraries so a default installation should suffice.

Script to delete artifacts

Configuration

The script starts with some configuration: first the connection information for Nexus, followed by the artifact selection criteria. Only the group is required; all other criteria can be left empty (None). If a criterion is empty, any test related to it passes. Thus, for example, setting ARTIFACTVERSIONMIN to None means all earlier versions can become part of the selection.

import datetime  # needed for the lastModified criteria below

#how to access Nexus. used to build the URL in get_nexus_artifact_version_listing and get_nexus_artifact_names
NEXUSHOST = "localhost"
NEXUSPORT = "8081"
NEXUSREPOSITORY = "releases"
NEXUSBASEURL = "/nexus/service/local/repositories/"
NEXUSUSERNAME = 'admin'
NEXUSPASSWORD = 'admin123'

#what to delete
ARTIFACTGROUP = "nl.amis.smeetsm.application" #required
ARTIFACTNAME = None #"testproject" #can be an artifact name or None. None first searches for artifacts in the group
ARTIFACTVERSIONMIN = "1.1" #can be None or a version like 1.1
ARTIFACTVERSIONMAX = "1.2" #can be None or a version like 1.2
ARTIFACTMAXLASTMODIFIED = datetime.datetime.strptime("2014-10-29 12:00:00","%Y-%m-%d %H:%M:%S") #can be None or datetime in format like 2014-10-29 12:00:00
ARTIFACTMINLASTMODIFIED = datetime.datetime.strptime("2014-10-28 12:00:00","%Y-%m-%d %H:%M:%S") #can be None or datetime in format like 2014-10-28 12:00:00

What does the script do?

The script uses the Nexus API (see for example my previous post). If the artifact name is specified, it is used directly; otherwise the API is used to query for the artifacts which are part of the specified group. Once the artifacts are determined, their versions are examined.

For example, suppose the group nl.amis.smeetsm.application and the artifact name testproject are specified. This translates to a URL like:

http://localhost:8081/nexus/service/local/repositories/releases/content/nl/amis/smeetsm/application/testproject/

When I go to this URL in a browser, an XML document is returned containing the directory content, which among other things includes the artifact versions and several properties of those versions, such as the lastModified date. This is what I can then use in the selection.

If an artifact version is determined to be part of the provided selection, it is removed. An interesting aspect of actually removing the artifact using the Nexus API is the use of HTTP Basic Authentication from Python; see the references for the sample I used as inspiration.
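
As a minimal sketch of that DELETE call with HTTP Basic Authentication in Python 2 (host, port, credentials and the artifact path are examples taken from the test situation below):

import base64
import httplib

auth = base64.b64encode("admin:admin123")
conn = httplib.HTTPConnection("localhost", 8081)
conn.request("DELETE",
             "/nexus/service/local/repositories/releases/content/nl/amis/smeetsm/application/testproject/1.0",
             headers={"Authorization": "Basic " + auth})
response = conn.getresponse()
# a 204 No Content response means the artifact version was removed
print response.status, response.reason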

Seeing it work

My test situation looks as follows: testproject is my artifact name and I have four versions: 1.0, 1.2, 1.3 and 1.4. Version 1.0 is the oldest, with a lastModified date of 2014-10-28, and it is the one I want to remove.

[Screenshot: the artifact versions present in the Nexus releases repository]

I have used the following selection (delete testproject releases from before 2014-10-29 12:00:00):

ARTIFACTGROUP = "nl.amis.smeetsm.application"
ARTIFACTNAME = "testproject"
ARTIFACTVERSIONMIN = None
ARTIFACTVERSIONMAX = None
ARTIFACTMAXLASTMODIFIED = datetime.datetime.strptime("2014-10-29 12:00:00","%Y-%m-%d %H:%M:%S")
ARTIFACTMINLASTMODIFIED = None

The output of the script is as follows:

Processing artifact: testproject
URL to determine artifact versions: http://localhost:8081/nexus/service/local/repositories/releases/content/nl/amis/smeetsm/application/testproject/
Item datetime: 2015-07-11 14:43:32.0 UTC
Item version: 1.3
Item datetime: 2015-07-11 14:43:57.0 UTC
Item version: 1.4
Item datetime: 2014-10-28 18:20:49.0 UTC
Item version: 1.0
Artifact to be removed nl.amis.smeetsm.application: testproject: 1.0
Sending HTTP DELETE request to http://localhost:8081/nexus/service/local/repositories/releases/content/nl/amis/smeetsm/application/testproject/1.0
Response: 204 No Content
Item datetime: 2014-11-03 13:36:43.0 UTC
Item version: 1.2

As you can see, all versions are evaluated and only one is selected and removed. The HTTP 204 response indicates the action was successful.

References

NetBeans Python plugin
http://plugins.netbeans.org/plugin/56795/python4netbeans802

Can I delete releases from Nexus after they have been published?
https://support.sonatype.com/entries/20871791-Can-I-delete-releases-from-Nexus-after-they-have-been-published-

curl : safely delete artifacts from Nexus
https://parkerwy.wordpress.com/2011/07/10/curl-safely-delete-artifacts-from-nexus/

Python: HTTP Basic authentication with httplib
http://mozgovipc.blogspot.nl/2012/06/python-http-basic-authentication-with.html

Retrieve Artifacts from Nexus Using the REST API or Apache Ivy
http://www.sonatype.org/nexus/2015/02/18/retrieve-artifacts-from-nexus-using-the-rest-api-or-apache-ivy/

Continuous delivery culture. Why do we do the things we do the way we do them? https://technology.amis.nl/2015/07/11/continuous-delivery-culture/ https://technology.amis.nl/2015/07/11/continuous-delivery-culture/#comments Sat, 11 Jul 2015 09:50:52 +0000 https://technology.amis.nl/?p=36412 Usually at first there is a problem to be solved. A solution is conjured and implemented. After a while, the solution is re-used and re-used again. It changes depending on the person implementing it and his/hers background, ideas, motives, likes and dislikes. People start implementing the solution because other people do it or someone orders [...]

Usually, at first there is a problem to be solved. A solution is conjured up and implemented. After a while, the solution is re-used and re-used again. It changes depending on the person implementing it and his or her background, ideas, motives, likes and dislikes. People start implementing the solution because other people do it or because someone orders them to do it. The solution becomes part of a culture. This can happen to such an extent that the solution causes an increasing number of side effects: new problems which require new solutions.

In software development, solutions are often methods and/or pieces of software which change rapidly. This is especially true in the area of continuous delivery, which is relatively young and still very much in development. Continuous delivery tools and methods are meant to increase software quality and to make software development, testing and deployment easier. Are your continuous delivery efforts actually increasing your software quality and decreasing your time to market, or have they lost their momentum and become a bother?

Sometimes it is a good idea to look at the tools you are using or are planning to use and think about what they contribute. Is using them intuitive and do they avoid errors and misunderstandings? Do you spend more time on merging changes and solving deployment issues than on actually creating new functionality? Maybe then it is time to think about how you can improve things.

In this article I will look at the current usage of version control and artifact repositories, without going down to the level of specific products. Next I will describe some common challenges which often arise and give some suggestions on how you can deal with them. The purpose is to encourage the reader not to take continuous delivery culture for granted, but to think about the why before and during the what.

Version Control

A purpose of software version control is to track changes in software versions. Who made which change in which version of the software? In version control you can trace back what is in a certain version of the software. A release (which contains code from a specific version) can be installed on an environment, and thus, indirectly, version control allows tracing back which code is installed (which comes in handy when something goes wrong).

When using version control, you should ask yourself: can I still, without a doubt, identify a (complete) version of the software? Do I still know who made which change in which version? If someone says a certain version is installed in a certain environment, can I without a doubt identify the code which was installed from my version control system?

Branching and merging; dangerous if not done right

Most software development projects I have seen have implemented a branching and merging strategy. People want to work on their own independent code base and not be bothered by changes other people make (and the other way around): develop their software in their own isolated sandbox. The idea is that when a change is completed (and conforms to certain agreements, such as quality and testing), it is merged back to the originating branch, and after the merge has been completed the branch usually serves no further purpose.

Projects and code modularity

Sometimes you see the following happen, which can be quite costly and annoying. Project A and Project B partially share the same code (common components) and have their own separate, non-overlapping code. One of the projects creates a version control branch to have a stable base to work with, an independent life-cycle, and not be bothered by development done by the other project. Both projects go their own way, both also editing the common components (which are now living in two places). At a certain moment they realize they need to come back together again (for example due to environment constraints, such as a single acceptance environment, or because Project A has something useful which Project B also wants). The branches have to be merged again. This can be a problem: are all the changes Project A and Project B have made to the common components compatible with each other? After merging is complete (which could take a while), a full regression test has to be performed for both projects if you want to ensure the merged code still works as expected for both of them. In my experience, this can be painful, especially if automated regression testing is not in place.

Lots of copies

Branching and keeping the branch alive for a prolonged time is against the continuous delivery principle of integrating early and often. The problem started with the creation of the branch and the separate development between the different projects. A branch is essentially an efficient copy of the code. Having multiple copies of the same code is not the way we were taught to develop: Don't Repeat Yourself (DRY), Duplication Is Evil (DIE), Once and Only Once (OAOO), Single Point of Truth (SPoT), Single Source of Truth (SSOT).

Remember agent Smith from The Matrix? Copies are not good!

Increase development time

When developing new features, the so-called 'feature branch' is often used. This can be a nice way to isolate development of a specific piece of software. However, at a certain moment the feature has to be merged with the other code, which in the meantime might have changed a lot. Essentially, the feature has to be rebuilt on another branch. This is especially the case when the technology used is not easy to merge. In some cases this can dramatically increase the development time of a feature.

Danger of regression

When bug fixes are created and there are feature branches and several release branches, is it still clear where a certain fix should go? Is your branching strategy making things easy for yourself, or are you introducing extra complexity and more work? If you do not apply the fix to the branch used for the release and to future releases, the fix might get lost somewhere and the bug might resurface at a later time.

A similar issue arises with release branches on which different teams develop. Team A works on release 1.0, which is in production. Team B works on release 2.0, which is still in development. Are all the fixes Team A makes (when relevant) also applied to release 2.0? Is this checked and enforced?

Solutions

In order to counter such issues, there are several possible and quite obvious solutions. Try to keep the number of separate branches small to avoid misunderstandings and to reduce the merge effort. Merge back (part of) the changes made on the branch regularly (integrate early and often) and check whether they still function as expected. Do not forget to allow unique identification of a version of the software. Introduce a separate life-cycle for the shared components (think about project modularity) and the project-specific components. This way branching might not even be needed.

[branching diagram]

Artifact repository

An artifact repository is used for storing artifacts. An artifact has a certain version number, which can usually be traced back to a version control system. An artifact repository uniquely identifies an artifact of a specific version. Usually deployable units are stored. An artifact stored in a repository usually has a certain status. For example, it allows you to distinguish released artifacts from 'work-in-progress' or snapshot artifacts. An artifact repository is also often used as a means to transfer responsibility for an artifact from one group to another. For example, when development is done, the artifact is put in the repository for operations to deploy.

When working with an artifact repository, you should consider the following (among other things). If someone says an artifact with a specific version is deployed, can I still say I know exactly what was deployed from the artifact repository, even for example after a year? Once a version is created and released, is it immutable in the artifact repository? If I have deployed a certain artifact, can I at a later time repeat the procedure and get exactly the same result?

An artifact repository can be used to transfer an artifact from development to operations. Sometimes the artifact in the repository is not complete. For example, environment-dependent properties are added by operations. Also, some placeholders in the artifact are replaced, and several artifacts are combined and reordered to make deployment easier. Deployment tooling has changed or a property file has been added. Do I still know a year later exactly what is deployed, or have the deployment steps performed after the artifact was fetched from the repository modified the original artifact in such a way that it is not recognizable anymore?

Changes in deployment software

Suppose the deployment software has been enhanced with several cool new features. For example, the deployment now supports deploying to clustered environments, and new property files make deployment more flexible, for example by allowing you to specify which database the database code should be deployed to. Only now I can't deploy my old artifacts anymore, because the artifact structure and the added property files are different. You have a problem here.

Solutions

Carefully think about the granularity of your artifacts. Small granularity means it might be more difficult to keep track of dependencies, but you gain flexibility in your deployment software and better traceability from artifact to deployment. Large artifacts mean some extra actions might be required to allow deployment of your custom deployment unit (custom scripts), and you will get more artifact versions, since code changes usually lead to new versions and more code generally changes more often.

Carefully think about how you link your deployment to your artifact and how to deal with changes in the deployment software. You can add a dependency on the version of the deployment software to your artifacts, or make your deployment software backwards compatible. You can also accept that after you change your deployment software, you cannot deploy old artifacts anymore. This might not be a problem if the new-style artifacts are already installed in the production environment and the old-style artifacts will never be altered or deployed again. You can also create new versions of the different artifacts in the new structure, or update as you go.

Conclusion

Implementing continuous delivery can be a pain in the ass! It requires a lot of thought about responsibilities and implementation methods (not even talking about the implementation itself). It is easy to just do what everyone else does and what smart people say you should do, but it never hurts to think about what you are doing yourself and to understand what you are doing and why. It is also important to realize what the limitations of the methods and tools used are, in order to make sound judgments about them. Try to keep it easy to use and make sure it adds value.

KISS

XL Deploy: Simple Case of Custom Deployment https://technology.amis.nl/2015/07/05/xl-deploy-simple-case-of-custom-deployment/ https://technology.amis.nl/2015/07/05/xl-deploy-simple-case-of-custom-deployment/#comments Sun, 05 Jul 2015 19:24:38 +0000 https://technology.amis.nl/?p=36289 The last couple of months I have been assigned to a project dedicated to deployment automation including several levels and types of technologies: database, middleware, services and portals. The selected deployment tool is XL Deploy from XebiaLabs. One of the main challenges was to determine the proper order of the steps within the deployment plan [...]

The last couple of months I have been assigned to a project dedicated to deployment automation including several levels and types of technologies: database, middleware, services and portals. The selected deployment tool is XL Deploy from XebiaLabs.

One of the main challenges was to determine the proper order of the steps within the deployment plan considering the dependencies between the several artifacts.

1 Basic features for orchestration

Deployment with XL Deploy is a joint activity between its core process and its plugins. A diagram in the Customization Manual shows this cooperation (the puzzle pieces represent plugin jobs).

Each plugin defines its own artifact types derived from their base UDM type in hierarchy.
The core process defines default orders, related to the base types.
XL Deploy uses the following default orders:
• PRE_FLIGHT (0)
• STOP_ARTIFACTS (10)
• STOP_CONTAINERS (20)
• UNDEPLOY_ARTIFACTS (30)
• DESTROY_RESOURCES (40)
• CREATE_RESOURCES (60)
• DEPLOY_ARTIFACTS (70)
• START_CONTAINERS (80)
• START_ARTIFACTS (90)
• POST_FLIGHT (100)

Each plugin, whether distribution or community, defaults the create, modify and destroy order properties of its types to one of these values. However, these are just ranges: the order of individual artifact types may easily be refined without implementing a new plugin.

2 The Case

The largest set of artifacts consists of Oracle Service Bus resources and SOA services, including metadata service (MDS) and service component architecture (SCA).

SB resources are separated by their “nature” into project categories:
1. Common definitions, which are shared among multiple services
2. Service contract elements (WSDL, schema definitions)
3. Service implementations (business and proxy services)
Logically, 2 and 3 may have elements whose deployment depends on 1, and 3 may also depend on 2. Each category should have no internal dependencies.
SOA artifacts
1. Metadata services (MDS) with schema definitions, service contracts and mappings
2. Composite archives (SCA) of SOA or BPM processes
Based on service definitions, 2 strongly depends on 1 since the propagated entry point as well as all external service references need their entire schema and operation definition present.
In principle, group 1 may have internal dependencies. However, an MDS import succeeds even when a referenced schema is still missing, so these internal dependencies do not matter as long as all metadata is imported before the service implementations.

3 SOA and Service Bus plugin type definitions

For Service Bus, the XL Deploy plugin comes with one generic type: osb.Configuration. It contains a configuration archive with optional customization file(s). If we provide a set of such artifacts, the generated deployment plan will simply list them in lexicographical order, as they are all assigned the default deployment order value 60. This way we have no influence on the sequence. Furthermore, the order property is defined as hidden, so manual adjustment on the mapped artifact is excluded too.
For SOA Suite, the plugin defines two related types: soa.MdsSOADeployableSpec and soa.CompositeSOADeployableSpec. This would support the sequencing. Yet the defaulted order (70) and the hidden property still prevent us from making the adjustment.

According to the dependencies, SharedOsbResources should precede OsbResources, otherwise the latter would fail. For the same reason SchemaDefinitions_mds should precede SampleSCA. Let's take care of that.

4 Customization by Type Modification

Let's touch the SOA types first. The only criterion is that all MDS artifacts get imported before any composite deployment. This involves references from updated services to new contracts and from new services to updated contracts. Therefore we have to specify the order at the modification stage as well.
Our entry point is the XLD_dir/ext/synthetic.xml file. We modify the type definitions by simply putting distinct order values in place:

<type-modification type="soa.MdsSOADeployable">
	<property name="createOrder" kind="integer" default="75" hidden="true"/>
	<property name="modifyOrder" kind="integer" default="76" hidden="true"/>
</type-modification>

<type-modification type="soa.CompositeSOADeployable">
	<property name="createOrder" kind="integer" default="77" hidden="true"/>
	<property name="modifyOrder" kind="integer" default="78" hidden="true"/>
</type-modification>

Keep in mind we used the type definitions of the artifacts in the target container (environment), not the ones in the application.
For the change to take effect, we need to restart the XL Deploy server (command line or service, whichever variant is running).
Analyze the deployment of the same application again. This time, SchemaDefinitions_mds stands above SampleSCA. The dependency criterion is fulfilled.

5 Customization by Type Definition

For Service Bus the job is a bit more complex and a little 'invasive'. As we have only one type, we cannot distinguish the order values. We have to artificially cast SharedOsbResources into category 2 and OsbResources into category 3, and then modify their order properties.

The Service Bus plugin is distributed with the XL Deploy product. It is better to keep it opaque.
For our new types we will use the virtual plugin name 'custom'. We again use synthetic.xml for the CI definitions, but we do not define new deployment rules.
At first, we define three „clones” of osb.Configuration type:
1. custom.OsbCommonConfiguration: schema definitions for shared use
2. custom.OsbServiceCommonConfiguration: service definitions
3. custom.OsbServiceConfiguration: service implementations
In the XML it will look like:

<type type="custom.OsbCommonConfiguration" extends="osb.Configuration"/>
<type type="custom.OsbServiceCommonConfiguration" extends="osb.Configuration"/>
<type type="custom.OsbServiceConfiguration" extends="osb.Configuration"/>

As you see, the base type here is the application CI type, not the targeted type. In order to customize their order we link each of them to a deployed type, which extends the deployed variant of the original type:

<type type="custom.DeployedOsbCommonConfiguration" extends="osb.DeployedConfiguration" deployable-type="custom.OsbCommonConfiguration" container-type="osb.Domain">
	<property name="createOrder" kind="integer" default="60" hidden="true"/>
	<property name="modifyOrder" kind="integer" default="61" hidden="true"/>
</type>

<type type="custom.DeployedOsbServiceCommonConfiguration" extends="osb.DeployedConfiguration" deployable-type="custom.OsbServiceCommonConfiguration" container-type="osb.Domain">
	<property name="createOrder" kind="integer" default="62" hidden="true"/>
	<property name="modifyOrder" kind="integer" default="63" hidden="true"/>
</type>

<type type="custom.DeployedOsbServiceConfiguration" extends="osb.DeployedConfiguration" deployable-type="custom.OsbServiceConfiguration" container-type="osb.Domain">
	<property name="createOrder" kind="integer" default="64" hidden="true"/>
	<property name="modifyOrder" kind="integer" default="65" hidden="true"/>
</type>

The story behind the chosen numbers is:
1. The shared resources get absolute precedence to the rest at each scenario (new deployment or update).
2. New service implementations may refer to modified messages and operations (of different services).
3. Modified service implementations may refer to new messages and operations (of different services).
As before, restarting the server is necessary for the customization to take effect.

 

6 Applying the New Types

This time, due to the new types, we would not be able to achieve the goal with just a redeployment of the same package.
Thus, we create a new application, ProperOrder, using the same artifacts but associated with the freshly defined types. The corresponding part of the deployit-manifest.xml is:

<custom.OsbServiceConfiguration name="OsbResources" file="OsbResources/OsbResources">
	<tags>
		<value>osb</value>
	</tags>
	<scanPlaceholders>true</scanPlaceholders>
	<projectNames>
		<value>Specific</value>
	</projectNames>
</custom.OsbServiceConfiguration>
<custom.OsbServiceCommonConfiguration name="SharedOsbResources" file="SharedOsbResources/SharedOsbResources">
	<tags>
		<value>osb</value>
	</tags>
	<scanPlaceholders>true</scanPlaceholders>
	<projectNames>
		<value>Common</value>
	</projectNames>
</custom.OsbServiceCommonConfiguration>

Let's analyze the deployment of the new application. This time, SharedOsbResources stands above OsbResources. The dependency criterion is fulfilled.

Tips voor een effectieve Architectuur functie https://technology.amis.nl/2015/07/03/tips-voor-een-effectieve-architectuur-functie/ https://technology.amis.nl/2015/07/03/tips-voor-een-effectieve-architectuur-functie/#comments Fri, 03 Jul 2015 10:51:23 +0000 https://technology.amis.nl/?p=36296 Als architect zie ik dat iedere organisatie een andere manier heeft om ‘architectuur te bedrijven’. Ik krijg ook vaak de vraag hoe de architectuur bij andere organisaties is georganiseerd en wat er te verbeteren valt aan hun aanpak. Er is geen ‘silver bullet’, iedere organisatie heeft op architectuurvlak zijn eigen behoefte, cultuur, volwassenenheidsniveau, grootte en [...]

As an architect I see that every organization has its own way of 'practicing architecture'. I am also often asked how architecture is organized at other organizations and what could be improved about their approach.

There is no silver bullet: every organization has its own needs, culture, maturity level, size and complexity when it comes to architecture. But of course there are also a number of things you see everywhere, a kind of 'unwritten laws' that almost always work out positively.

I have collected a number of my positive experiences.

Choose a standard architecture framework and make it practical

Take for example the TOGAF model as a basis, and make sure that for each architecture phase it is described what you minimally need. The goal is to make sure that the organization (and thus every team/project) is sufficiently aware of the architectural direction and/or change and the resulting guardrails, so that people can commit to them. This commitment is needed because working under architecture is not only about technology and/or functionality, but also about working on things such as the mission of the company, meeting project deadlines and incurring costs.

Recognize the different architecture roles

Everyone knows the feeling that someone is sawing the legs off their chair. Unfortunately, for many architects this feeling is present more than average, and it is far from always intentional. An architecture role is often a 'misunderstood role', a role few people have a feel for. In addition, the level and variety of people an architect works with is very diverse: decision makers at board level, middle management, project management, purchasers, techies, and so on. To make it even harder, there are also different types of architects, each with their own view of their responsibility.

There are a number of architecture roles that I believe are definitely needed. It is essential that everyone in the organization knows these roles and understands why they are needed. Of course one person can fill several roles, or a role can be combined with another role. But it is very important that this person remains aware of the fact that these are different roles. Examples of combinations are: Software Architect with Senior Developer, and Solution Architect with Software Architect.

The architecture roles that are needed:

  • Enterprise Architect: focuses on the mission, vision, goals and strategy of the organization as a whole.
  • Business Architect: focuses on supporting the business operations from a functional perspective. Often this role is organized per business domain.
  • IT Architect: focuses on supporting the business operations from a technical perspective ('which products/building blocks does my IT landscape consist of?').
  • Solution Architect: focuses on translating the overall architecture into solutions, in more detail and per technology (in-depth architecture) or per project (the architecture for the project, plus its governance).
  • Software Architect: focuses on the detailed technical solution within an application.

Place responsibility with the right role

An architect can only be successful if the organization around him sticks to the roles and (delegated) responsibilities. Also organize the organization and its processes around the interaction between these roles, and make sure there are deliberate and fixed communication moments (regular meetings, etc.) between the roles.

A number of tips:

  • Make sure decision makers only decide and do not take over the work of the architects. I often see this happen because there are political, financial and commercial (e.g. vendor) influences that tempt decision makers to join the discussion on content, but without the right expertise.
  • Make sure architects are connected to business developments. Almost every company has some form of portfolio management, where the business vision is translated into concrete initiatives and projects. Based on priorities, a program or project roadmap is established, an ideal place to deliberately align with the architecture roadmap. This effect becomes stronger when the portfolio process is iterative, so that adjustments can be made regularly based on recent information.
  • Never let architects report to a project manager. Almost every difficult decision (time, scope, money) will then be judged in the interest of the project, and not in the general (architecture) interest. This undermines a clean discussion in which project and general (architecture) interests are weighed at the right level: not within the project, but across projects.
  • Make sure there is a technical architecture owner, and let that person formally cooperate with the functionally oriented owners (Product Owners/Managers). There is a good chance that technology offers solutions that fit the (future) functional needs well. With this form of cooperation you can make sure this is recognized in time. Functional and technical interests are then also weighed at the same level.
  • Establish the project guardrails up front, but not in too much detail. Give the (project) team the room to come up with the right solutions within those guardrails. The chance is fairly high that you do not have enough information up front to make the optimal decisions for the entire project. If you go into too much detail, you unnecessarily take away the team's opportunity to work out (widely supported) solutions. This approach also has a positive effect on the lead time needed to write a Project Solution Architecture document.
  • Let the Solution and Software Architect work as part of the development team, to fill in the details and, where needed, adjust the architecture guardrails and standards. This shares part of the ownership with the team, the people who do the actual implementation. In this way, governance of the solution takes shape in a 'natural way'.

Still no news from the security front… https://technology.amis.nl/2015/06/29/still-no-news-from-the-security-front/ https://technology.amis.nl/2015/06/29/still-no-news-from-the-security-front/#comments Mon, 29 Jun 2015 14:25:28 +0000 https://technology.amis.nl/?p=36268 This week I was doing research for one of our internal knowledge session when I stumbled across an interesting piece of history. I was tracing the history of computer security when I found an interview from Wired from the first people who implemented passwords as a security measure. They interviewed technicians like Fred Schneider and [...]

This week I was doing research for one of our internal knowledge sessions when I stumbled across an interesting piece of history. I was tracing the history of computer security when I found an interview by Wired with the first people who implemented passwords as a security measure. They talked to people like Fred Schneider and Fernando Corbató, who worked at MIT back in the 60's: http://www.wired.com/2012/01/computer-password/ The article centers on a system (CTSS) which was built in the early 60's, a time in which we were struggling to build computers even as powerful as some watches we produce today. And remember, that stuff sent us up to space and back. It was really good to read, as it seemed that nothing had really changed in all that time of technological innovation. There were several excerpts which I particularly liked in that respect, like this one:

The CTSS guys could have gone for knowledge-based authentication, where instead of a password, the computer asks you for something that other people probably don’t know — your mother’s maiden name, for example. But in the early days of computing, passwords were surely smaller and easier to store than the alternative, Schneider says. A knowledge-based system “would have required storing a fair bit of information about a person, and nobody wanted to devote many machine resources to this authentication stuff.”

“Nobody wanted to devote many machine resources to this authentication stuff”: talk about ringing a bell… As a community I believe we have not grown beyond this statement. I don't mean to say that we haven't built better authentication mechanisms and better security systems, but for the most part our attitude towards authentication has not changed. Most developers and architects still basically think: “Well fine, just slap a password on it and it will be OK” if there is no embedded authentication mechanism available. I have only seen a handful of applications which have expanded on this mechanism, and that is a real shame. The real kicker is that there are so many ways of solving this problem intelligently instead of following the 1960's solution. Just think about the integration possibilities with the existing security infrastructure, or about how you can best support soft and hard tokens. But that was not the only thing that got me; just read this (from the same article):

The irony is that the MIT researchers who pioneered the passwords didn’t really care much about security. CTSS may also have been the first system to experience a data breach.

This even made sense in some twisted way. The people who were charged with building this system were basically trying to build a shared computing system, not a computing vault of any kind. We can learn from this and move on, I suppose. So how about this: if you tack on security as some sort of secondary objective, don't expect it to be really good; expect to be breached. So if you want software to be secure, make sure it is designed to be secure.

Key take-aways from the Oracle PaaS Cloud announcements – Integrate, Accelerate, Lead https://technology.amis.nl/2015/06/24/key-take-aways-from-the-oracle-paas-cloud-announcements-integrate-accelerate-lead/ https://technology.amis.nl/2015/06/24/key-take-aways-from-the-oracle-paas-cloud-announcements-integrate-accelerate-lead/#comments Wed, 24 Jun 2015 05:09:58 +0000 https://technology.amis.nl/?p=36264 Monday June 22nd was the launch date for Oracle for 24 (and more) Cloud Services. June is traditionally an important month for Oracle when it comes to product launches and important announcements. This year is the same in that respect. The announcements came in a many-hour live webcast including a 45 minute presentation by Oracle [...]

Monday June 22nd was the launch date for 24 (and more) Oracle Cloud Services. June is traditionally an important month for Oracle when it comes to product launches and important announcements. This year is no different in that respect. The announcements came in a many-hour live webcast, including a 45 minute presentation by Oracle CTO Larry Ellison (see the videos from the Oracle Cloud Platform Launch). I have harvested some of the most relevant slides from this presentation that capture the essence of his announcements (or at least the things that stood out to me).

See some other relevant resources regarding these announcements:

[slide images]

“… All the major boxes are filled in. So you can move any application into the Oracle cloud. “

[slide image]

Launching new cloud services in each of these boxes:

[slide images]

Primary Competitors on PaaS:

[slide image]

A remarkable offering: Application Builder Cloud Service (ABCS): https://cloud.oracle.com/ApplicationBuilder

[slide image]

On PaaS – competing against Amazon. For example on Glacier – archived data service at very low prices:

[slide image]

And on ease of provisioning and management – for environments that include WebLogic or Oracle Database:

[slide images]

On SaaS: comparison against the competition – in breadth and depth of portfolio:

[slide image]

Oracle Cloud operational summary:

[slide image]

Security Features of Standard Edition (One) – Part 2 https://technology.amis.nl/2015/06/17/se_security_part_2/ https://technology.amis.nl/2015/06/17/se_security_part_2/#comments Wed, 17 Jun 2015 12:37:14 +0000 https://technology.amis.nl/?p=34304 or Some Musings on the Security Implications of Oracle Database Initialization Parameters Still following the steps of a database installation, this article will muse about some Initialization Parameters with security relevance. In order to make a Standard Edition database as secure as possible we could start by looking what is the same in SE and [...]

or

Some Musings on the Security Implications of Oracle Database Initialization Parameters

Still following the steps of a database installation, this article will muse about some Initialization Parameters with security relevance.
In order to make a Standard Edition database as secure as possible, we could start by looking at what is the same in SE and EE, which are more or less equal in their basic security functions (see the 11g security targets for EE and SE). And after having installed and secured the software (in Part 1 of this series) we are now ready to create our first database instance. One of the first steps in this process is – and I assume you don't use clicka-di-click-DBCA blindly – creating/adapting the initial init.ora file.

Of the hundreds of initialization parameters in 11g, quite a handful also influence the behavior in such a way that it counts as security relevant. These parameters are often barely noticed or rarely changed from their defaults.

Take for example the parameters OPEN_LINKS and OPEN_LINKS_PER_INSTANCE. When asking colleagues around me, most of them never ever change(d) these defaults (both: 4), and when asked whether the database instance actually uses database links or other remote, distributed connections (XA connections), I harvested looks which can only be interpreted as “ehm… why do you ask, should I have bothered?” Maybe… we should at least look at what these parameters are intended to do.

OPEN_LINKS determines the maximum number of concurrently open database links and/or external procedure connections of a single session; OPEN_LINKS_PER_INSTANCE does almost the same, but for the whole instance, and it includes migratable XA connections as well. First of all, it makes no sense to set OPEN_LINKS larger than OPEN_LINKS_PER_INSTANCE, which is pretty obvious. But why do they matter to security?
Especially OPEN_LINKS_PER_INSTANCE can cover connections which are relatively easy to hijack. So if a hacker gets access to a database server with open connections, (s)he can take over one of those connections and access the (target) database without the need to authenticate, because the connection was already authenticated when it was established. Each currently unused but open connection (for example of a pending distributed transaction) is a "hole" in the security shell of the targeted database. So allowing more connections than you will ever use is like pricking holes in the defenses of the targeted databases. If you know your instance will never use database links or allow XA connections, set these parameters to 0 and close the holes before someone else pokes them open. On the other hand, application developers should take care not to leave database links open unnecessarily.
(BTW: securing database links might be another blog in the future …)
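
As a minimal sketch (assuming an instance that really uses no database links and no XA connections at all), closing these holes comes down to something like the following; both parameters are static, so the change only takes effect after a restart:

SQL > ALTER SYSTEM SET OPEN_LINKS = 0 SCOPE=SPFILE;
SQL > ALTER SYSTEM SET OPEN_LINKS_PER_INSTANCE = 0 SCOPE=SPFILE;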
Another often overlooked parameter is SQL92_SECURITY, which by default (in 11.2) is set to FALSE, but should be TRUE. The effect of TRUE is that a user must also have the SELECT privilege on a table used in the WHERE clause of an UPDATE or DELETE statement in order to be able to execute that update or delete. This tightens the restrictions a little more to prevent unauthorized data changes.
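
A small illustration of what this means in practice (the parameter is static, and the user and table names below are made-up examples): with SQL92_SECURITY set to TRUE, a user who only holds the UPDATE privilege on a table can no longer filter on its columns.

SQL > ALTER SYSTEM SET SQL92_SECURITY = TRUE SCOPE=SPFILE;
-- after a restart, assume SCOTT has UPDATE but not SELECT on HR.EMPLOYEES:
SQL > UPDATE hr.employees SET salary = salary * 1.1 WHERE employee_id = 100;
-- this now fails with ORA-01031: insufficient privileges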

Ever heard of the "SEC_" parameters like SEC_PROTOCOL_ERROR_FURTHER_ACTION and SEC_PROTOCOL_ERROR_TRACE_ACTION? Both reign over the TTC protocol and its possible errors. The first one governs what should be done if such an error occurs or what has to happen when too many errors have occurred; the second sets the tracing options for these errors. TTC is the Oracle wire protocol, used by OCI and by the JDBC Thin driver, which makes direct connections to the database on top of Java sockets. Again, if something goes wrong with a connection it would be nice to know why. And if someone is trying to break in via a JDBC connection, the admin/DBA can be warned directly if the trace action is set to ALERT.
The default trace action is set to TRACE which is okay but it should never be changed to NONE because you could easily miss the many undetected bad packets which can indicate a Denial of Service (DoS) attack on your database clients.
SEC_PROTOCOL_ERROR_FURTHER_ACTION can be set to the values of CONTINUE (the default), DELAY or DROP. The actual actions taken are A) DEFAULT: do nothing and go on, normal operations just continue (except maybe logging it when SEC_PROTOCOL_ERROR_TRACE_ACTION is set to TRACE or LOG), B) DELAY the bad packets of a session and therefore all communication sessions to this instance are slowed down (which is to say until it gets unattractive for the attacker and/or normal user) or C) DROP the offending session after x bad attempts. Setting the last two is a bit tricky because they also must contain a value to indicate what the delay should be or after how many bad packets Oracle server should start dropping sessions.
When setting these options, don't forget the brackets as indicated in the documentation! The value must be written as shown below in order to be effectively changed:

SQL > ALTER SYSTEM SET SEC_PROTOCOL_ERROR_FURTHER_ACTION = “(DROP, 20)” SCOPE=BOTH;

In this example the database server would drop offending sessions after 20 bad TTC-packets and the client would show the error ORA-03134.
CONTINUE does not impact the good sessions, unlike DELAY, which delays the bad session as well as the waiting good sessions; such a slowdown is an indication that something is going on. So I tend to choose DROP, in conjunction with SEC_PROTOCOL_ERROR_TRACE_ACTION=TRACE or even ALERT. LOG only registers a short notice in the alert log, which often is not enough to debug what precisely happened.
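
For completeness, a hedged example of that companion setting (the possible values besides the default TRACE are NONE, LOG and ALERT):

SQL > ALTER SYSTEM SET SEC_PROTOCOL_ERROR_TRACE_ACTION = ALERT SCOPE=BOTH;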

Apropos DoS attacks… setting SEC_MAX_FAILED_LOGIN_ATTEMPTS (default: 10) to a value equal to or just a tiny bit higher than the highest value used in all of the profiles (where it is called FAILED_LOGIN_ATTEMPTS (default: 10)) is the overall emergency brake for failed logins into the instance, and can help to prevent or stop brute force attacks, or at least break them when someone is just trying to guess the password of a specific account. This parameter caps higher values of the profiles! Personally, I find 10 consecutive failed login attempts quite high. Batches and other automated processes logging in "know" their correct passwords, and users who manually log in and miss it more than 5 times (counted since a) the last password reset, b) the last successful login or c) the unlock command of a DBA) are simply clumsy and should be reminded to take more care typing their passwords.
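
A minimal sketch of this combination; the limits of 5 (profile) and 6 (instance-wide cap, a tiny bit higher) are just example values following the reasoning above, and the SEC_ parameter is static, so it needs a restart:

SQL > ALTER PROFILE DEFAULT LIMIT FAILED_LOGIN_ATTEMPTS 5;
SQL > ALTER SYSTEM SET SEC_MAX_FAILED_LOGIN_ATTEMPTS = 6 SCOPE=SPFILE;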

The next SEC_ parameter is SEC_CASE_SENSITIVE_LOGIN. Luckily it defaults to TRUE in 11g and so activates case sensitivity of passwords. When migrating from 10g to 11g, the formerly case-insensitive passwords of 10g are kept until the first password change in 11g. It should stay TRUE, and case-sensitive passwords should always be used if possible.
In 12c this parameter will be deprecated and there are other ways to force a case-insensitive login. Have a good look at the Database Upgrade Guide 12c and follow the link therein to the Database Security Guide.

The last of the SEC_ parameters is a static parameter, SEC_RETURN_SERVER_RELEASE_BANNER. This parameter works a little bit like the "ServerTokens" directive of an Apache web server, but it is only effective for unauthenticated clients, which makes it very difficult to test.
In Full mode (here: TRUE) Apache might hand out invitations to hackers with answers like:

Server: Apache/2.0.41 (Unix) PHP/4.2.2 MyMod/1.2

In Production mode (here: FALSE) an Apache server just answers with:

Server: Apache

When set to FALSE, an Oracle instance answers with only the main RDBMS version of 11.0.0.0 instead of the correct version number of the server (for example 11.2.0.4), which could be a fully patched or a just out-of-the-box install with all its bugs.
In order to change the value the database has to be restarted! So leave this one on FALSE.

Below is a list with other parameters which are (partly) relevant to security:

  • AUDIT_FILE_DEST: sets the path to the audit-files when AUDIT_TRAIL is set to “OS” or “OS, extended”. This path should be secured and monitored to prevent or at least be able to “see” tinkering with the audit-logs.
  • AUDIT_SYS_OPERATIONS: should be set to TRUE, always. It is not as comprehensive as the Fine Grained Auditing some Auditors might expect, but nevertheless “it might guard the Guards” a little bit.
  • AUDIT_TRAIL: choose at least “DB, extended”, but on systems where the dba’s are not system administrators maybe someone else should check the audit logs on the file system?
  • DIAGNOSTIC_DEST: don't let it clog up your Oracle Home and, again, don't let it be tampered with; it contains valuable (forensic?) information about the goings-on in your database
  • DISPATCHERS: here goes the same as for the OPEN_LINKS, if you don’t use it don’t set or set it to 0.
  • GLOBAL_NAMES: If set to TRUE, db_links have to use the service name resp. the global_name, which could form an extra hurdle for some hackers
  • LOG_ARCHIVE_%: protect this directory carefully, because firstly you might need it to restore your database and secondly remember: it contains your data (be it in a form you are not used to accessing it in) which you are trying to protect!
  • MAX_ENABLED_ROLES: set to 30 by default in 11gR1, ignored from 11gR2 onward (so there is no way to prevent users from gathering all the roles they can get), and deprecated in 12c.
  • O7_DICTIONARY_ACCESSIBILITY: since 11g the default is FALSE, keep it that way otherwise you allow access to data dictionary objects when an ANY privilege is granted.
  • OS_AUTHENT_PREFIX: don’t use ‘OPS$’ or ” which everybody would try first…
  • OS_ROLES: TRUE would leave it to the OS to manage roles, and the OS is easier to reach than the database…
  • REMOTE_LOGIN_PASSWORDFILE: do yourself a favor and never set it to NONE
  • REMOTE_OS_AUTHENT: will get deprecated in 12c
  • REMOTE_OS_ROLES: keep the default to FALSE and let the database manage the roles of remote users
  • RESOURCE_LIMIT: in an EE it would fully activate the Resource Manager when set to TRUE and therefore enforce the resource parameters of the profiles; in SE it only seems to activate the resource limits of the profiles. So, set it to TRUE anyway
  • SMTP_OUT_SERVER: if you don’t use it, don’t set it!
  • SPFILE: it specifies the path to the binary spfile and that is part of your configuration, which should be extra protected
  • UTL_FILE_DIR: just don't use it anymore, use DIRECTORY objects instead (see the sketch below this list). All OS paths entered here are available to all authenticated users for read AND write access via PL/SQL!
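
As a small illustration of that last point, here is a hedged sketch of the DIRECTORY-object alternative to UTL_FILE_DIR; the directory name, OS path and grantee are made-up examples:

SQL > CREATE DIRECTORY app_exports AS '/u01/app/exports';
SQL > GRANT READ, WRITE ON DIRECTORY app_exports TO app_user;
-- PL/SQL file access (UTL_FILE) is now limited to this one directory, and only for the grantees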

This list does not pretend to be complete. It only should fire up your imagination to study the init-parameters more. It really is quite interesting!
… and I think, in the next blog I might dive into the Possibilities and Limitations of Profiles and Roles…

De business case voor vervanging van maatwerksoftware https://technology.amis.nl/2015/06/16/de-business-case-voor-vervanging-van-maatwerksoftware/ https://technology.amis.nl/2015/06/16/de-business-case-voor-vervanging-van-maatwerksoftware/#comments Tue, 16 Jun 2015 07:44:23 +0000 https://technology.amis.nl/?p=36205 IT-projecten rond maatwerk software voldoen vaak niet aan de financiële verwachtingen. Niet zelden is een voorname oorzaak dat naast de functionele uitbreidingen waar de business case en het budget op gebaseerd zijn, ook een flinke technische inhaalslag moet worden gemaakt die niet expliciet in de begroting is opgenomen. Vervanging van maatwerksoftware drukt zwaar op de [...]

IT projects around custom software often do not meet the financial expectations. Not infrequently, a major cause is that in addition to the functional extensions on which the business case and budget are based, a considerable amount of technical catching-up also has to be done that was not explicitly included in the budget.

Replacing custom software weighs heavily on the budget, which makes postponing often easier than replacing. This surprises me, because it is well known that custom software has a finite lifespan. This holds for economic reasons such as performance and user friendliness, but certainly also for technical considerations such as supported hardware and operating system compatibility. Yet in budgeting terms it often seems to come as a surprise, which makes the business case for replacement hard to make.

Purchase and replacement of machines

When a factory purchases a large machine, it is customary to record the machine as a capital asset on the balance sheet and to depreciate it annually. The depreciation period is the shorter of the technical and the economic lifespan. In addition to the operational costs of regular maintenance, the investment costs are thereby also operationalized: the monthly costs, including depreciation, cover part of the cost of the machine.

If, during implementation, the machine has to be customized to fit the physical space and the business process, these costs are capitalized as well. And if no standard machine is available for the company's situation, a machine can be designed and built in-house, made exactly to measure. The construction costs of this machine are comparable to the purchase costs of a commercial off-the-shelf system and are likewise capitalized on the balance sheet and operationalized through depreciation. The depreciation can be regarded as repayment of a loan obtained, internally or externally, to make the purchase. Or it can be seen as a savings pot for the eventual replacement.

If nothing changes in the factory and its environment, the machine is replaced at the end of the depreciation period without any change in the cost pattern. If the new machine is cheaper or more expensive, or its lifespan is different, the monthly depreciation becomes higher or lower. In any case, no complicated exercise is needed to free up budget for the replacement. There is no surprise for which the organization is unprepared.

There may of course be changes that lead to considering an early replacement of the machine. New regulations (for example on working conditions or the environment), a changed market situation, a redesigned business process, newly developed services, rising costs of energy or maintenance supplies, or increasingly scarce knowledge can create a business case to replace the machine before the end of its expected lifespan. That business case is driven by concrete business needs and opportunities and can only partly be based on the value of the depreciation.

And now: custom software

The story above should apply to software as well. A project to build a new custom system is sometimes an explicit as-is replacement of an existing solution, and in many other cases it is at least partly intended to replace existing systems. Although these projects, just like the machines in the factory, should therefore be able to be financed from the depreciation, completely new budgets often have to be freed up for the IT project. As if it were a one-off, unexpected expense for which no funds are available.

Because the shift from CAPEX to OPEX based on cloud services is a current topic of conversation, it may seem strange and contrarian to argue for treating custom software components as a capital asset. The point is: custom software is a capital asset and usually represents a sizeable investment. And by recognizing it as such, you actually gain insight into the operational costs. The depreciation on the capital asset creates the budgetary room to replace the custom software.

I often see a business case for valuable new functionality being used as the occasion for overdue maintenance on, or even a complete replacement of, a custom application. The business case should be able to carry the costs of the new features, but in my view it should not supply the budget for the complete replacement. Unfortunately, the funds for replacement have often not been set aside. Here are a few guidelines that organizations with large stakes in custom software (or customized COTS software, and really also pure standard packages) could consider:

  • Treat software that forms a crucial part of the business operations and/or that required a substantial investment as a capital asset. This means capitalizing it on the balance sheet and applying depreciation schedules based on the expected lifespan.
  • Realize that the investment in custom software comprises not only the costs of software licenses for tools and generic components and the costs of development, but also the costs of building up knowledge of the technology used for administration and maintenance and of setting up a support organization.
  • Make the costs of using software systems transparent; with insight into the monthly costs you can charge the expenses on and determine the economic lifespan. It also creates insight that can be used when drawing up business cases for renewal and replacement; besides the depreciation on the initial investment and the costs of fixes and changes, these costs also include the costs of keeping the technical infrastructure up to date.
  • Take into account the costs of keeping the technical infrastructure, and the knowledge of the staff involved, up to date. As knowledge of aging technology becomes scarcer, these costs often rise.
  • Also take stock of risks and, where possible, express them in money. Outdated technology components pose risks in terms of security, compliance, support and available knowledge. These risks call for countermeasures that cost money.

From costs to investment

It is valuable to regularly make the comparison between custom software and the machine park as a thought exercise. When it comes to tangible machines it seems easier to think in terms of investment, depreciation, lifespan, business case and replacement than when we talk about something as intangible as software.

In short, capitalize your investments in software and the associated implementation costs on the balance sheet. Regularly revalue or recalibrate the systems and follow a depreciation schedule. That way you prevent maintenance and renewal efforts from being experienced as unexpected and unbudgeted costs. And you ensure that the business case for new functionality stays clean and the investment decision can be made in a well-considered way.

See also the article 'Activering van zelfontwikkelde software en websites in jaarrekeningen van Nederlandse ondernemingen' (http://www.compact.nl/artikelen/C-2003-2-Ginkel.htm) by Drs. R.M. van Ginkel RA ✝ and Drs. A.J. van de Munt RA on the fiscal and accounting considerations around capitalizing custom software.

https://technology.amis.nl/2015/06/16/de-business-case-voor-vervanging-van-maatwerksoftware/feed/ 0