AMIS Oracle and Java Blog – https://technology.amis.nl – Friends of Oracle and Java

Tips for an effective architecture function
https://technology.amis.nl/2015/07/03/tips-voor-een-effectieve-architectuur-functie/ – Fri, 03 Jul 2015 10:51:23 +0000

As an architect I see that every organization has its own way of ‘practicing architecture’. I am also often asked how architecture is organized at other organizations and what could be improved about their approach.

There is no silver bullet: every organization has its own needs, culture, maturity level, size and complexity when it comes to architecture. But of course there are also a number of things you see coming back everywhere, a kind of ‘unwritten laws’ that almost always work out positively.

I have collected a number of my positive experiences.

Choose a standard architecture framework and make it practical

Take, for example, the TOGAF model as a basis, and make sure that for each architecture phase it is described what you minimally need. The goal is to ensure that the organization (and thus every team/project) is sufficiently aware of the architectural direction and/or change and the resulting guardrails, so that people can ‘commit’ to them. This commitment is needed, because working under architecture is not only about technology and/or functionality, but also about working on things like the company’s mission, meeting project deadlines and incurring costs.

Recognizing the different architecture roles

Everyone knows the feeling of having their chair legs sawn through. Unfortunately, for many architects this feeling is present more than average. This is by no means always intentional. An architecture role is often a ‘misunderstood role’, a role few people have a feel for. In addition, the level and variety of people an architect works with is very diverse: decision makers at board level, middle management, project management, procurement, techies, etc. To make it even harder, there are also different types of architects, each with their own view of their responsibility.

There are a number of architecture roles that I consider definitely necessary. It is essential that everyone in the organization knows these roles and understands why they are needed. Of course one person can fill several roles, or a role can be combined with another role. But it is very important that this person remains aware of the fact that they are different roles. Examples of combinations are: Software Architect with Senior Developer, and Solution Architect with Software Architect.

The architecture roles that are needed:

  • Enterprise Architect: focuses on the mission, vision, goals and strategy of the organization as a whole.
  • Business Architect: focuses on supporting the business operations from a functional perspective. This role is often organized per business domain.
  • IT Architect: focuses on supporting the business operations from a technical perspective (‘which products/building blocks does my IT landscape consist of?’).
  • Solution Architect: focuses on translating the overall architecture into solutions, in more detail and per technology (in-depth architecture) or per project (the architecture for the project, plus directing its application).
  • Software Architect: focuses on the detailed technical solution within an application.

Assign responsibility to the right role

An architect can only be successful if the organization around him respects the roles and the (delegated) responsibilities. Also organize the organization and its processes around the interaction between these roles, and provide deliberate, recurring communication moments (regular meetings, etc.) between the roles.

A number of tips:

  • Make sure that decision makers only decide and do not take over the work of the architects. I often see this happen because there are political, financial and commercial (e.g. vendor) influences that tempt decision makers to join the discussion on content (but without the right expertise).
  • Make sure architects are connected to business developments. Almost every company has some form of portfolio management, where the business vision is converted into concrete initiatives and projects. Based on priorities, a program or project roadmap is established: an ideal place to deliberately align with the architecture roadmap. This effect becomes stronger when the portfolio process is iterative, so that course corrections can be made regularly on the basis of recent information.
  • Never let architects report to a project manager. Almost every difficult decision (time, scope, money) will then be judged in the interest of the project, and not in the general (architecture) interest. This undermines a clean discussion in which project and general (architecture) interests are weighed at the right level: not within the project, but above it.
  • Make sure there is a technical architecture owner, and have that person formally cooperate with the functionally oriented owners (Product Owners/Managers). There is a good chance that the technology side offers solutions that fit well with (future) functional wishes. With this form of cooperation you can make sure this is recognized in time. Functional and technical interests are then also weighed at the same level.
  • Establish the project guardrails up front, but not in too much detail. Give the (project) team the room to come up with the right solutions within those boundaries. The chance is fairly high that you do not have enough information up front to make the optimal decisions for the whole project. If you go into too much detail, you unnecessarily take away the team’s opportunity to work out (widely supported) solutions. This approach also has a positive effect on the lead time needed to write a Project Solution Architecture document.
  • Let the Solution and Software Architect work as part of the development team, to fill in the details and to adjust the architecture guardrails and standards where needed. That way, part of the ownership is shared with the team, the people who do the actual implementation. This shapes governance of the solution in a ‘natural way’.

Still no news from the security front…
https://technology.amis.nl/2015/06/29/still-no-news-from-the-security-front/ – Mon, 29 Jun 2015 14:25:28 +0000

This week I was doing research for one of our internal knowledge sessions when I stumbled across an interesting piece of history. I was tracing the history of computer security when I found a Wired interview with the first people who implemented passwords as a security measure. They interviewed technicians like Fred Schneider and Fernando Corbató, who worked at MIT back in the 60’s: http://www.wired.com/2012/01/computer-password/ The article centers on a system (CTSS) which was built in the early 60’s, a time in which we were struggling to build computers more powerful than some of the watches we produce today. And remember, that stuff sent us up to space and back. It was really good to read, as it seemed that nothing had really changed in all that time of technological innovation. There were several excerpts which I particularly liked in that respect, like this one:

The CTSS guys could have gone for knowledge-based authentication, where instead of a password, the computer asks you for something that other people probably don’t know — your mother’s maiden name, for example. But in the early days of computing, passwords were surely smaller and easier to store than the alternative, Schneider says. A knowledge-based system “would have required storing a fair bit of information about a person, and nobody wanted to devote many machine resources to this authentication stuff.”

“Nobody wanted to devote many machine resources to this authentication stuff”, talk about ringing a bell… As a community I believe we have not grown beyond this statement. I don’t mean to say that we haven’t built better authentication mechanisms and better security systems, but for the most part our attitude towards authentication has not changed. Most developers and architects still basically think: “Well fine, just slap a password on it and it will be OK” if there is no embedded authentication mechanism available. I have only seen a handful of applications which have expanded on this mechanism, and that is a real shame. The real kicker is that there are so many ways of solving this problem intelligently instead of following the 1960’s solution. Just think about the integration possibilities with the existing security infrastructure, or how you can best support soft and hard tokens. But that was not the only thing that got me; just read this (from the same article):

The irony is that the MIT researchers who pioneered the passwords didn’t really care much about security. CTSS may also have been the first system to experience a data breach.

This even made sense in some twisted way. The people who were charged with building this system were basically trying to build a shared computing system, not a computing vault of any kind. We can learn from this and move on, I suppose. So how about this: if you tag on security as some sort of secondary objective, don’t expect it to be really good; expect to be breached. So if you want software to be secure, make sure it is designed to be secure.

Key take-aways from the Oracle PaaS Cloud announcements – Integrate, Accelerate, Lead
https://technology.amis.nl/2015/06/24/key-take-aways-from-the-oracle-paas-cloud-announcements-integrate-accelerate-lead/ – Wed, 24 Jun 2015 05:09:58 +0000

Monday June 22nd was the launch date for 24 (and more) Oracle Cloud Services. June is traditionally an important month for Oracle when it comes to product launches and important announcements. This year is the same in that respect. The announcements came in a many-hour live webcast, including a 45 minute presentation by Oracle CTO Larry Ellison (see videos from Oracle Cloud Platform Launch). I have harvested some of the most relevant slides from this presentation – the ones that capture the essence of his announcements (or at least the things that stood out to me).

See some other relevant resources regarding these announcements.

[slides from the launch presentation]

“… All the major boxes are filled in. So you can move any application into the Oracle cloud.”

[slide]

Launching new cloud services in each of these boxes:

[slides]

Primary Competitors on PaaS:

[slide]

A remarkable offering: Application Builder Cloud Service (ABCS): https://cloud.oracle.com/ApplicationBuilder

[slide]

On PaaS – competing against Amazon. For example on Glacier – an archived-data service at very low prices:

[slide]

And on ease of provisioning and management – for environments that include WebLogic or Oracle Database:

[slides]

On SaaS: comparison against the competition – in breadth and depth of portfolio:

[slide]

Oracle Cloud operational summary:

[slide]

Security Features of Standard Edition (One) – Part 2
https://technology.amis.nl/2015/06/17/se_security_part_2/ – Wed, 17 Jun 2015 12:37:14 +0000

or

Some Musings on the Security Implications of Oracle Database Initialization Parameters

Still following the steps of a database installation, this article will muse about some Initialization Parameters with security relevance.
In order to make a Standard Edition database as secure as possible, we could start by looking at what is the same in SE and EE, which are more or less equal in their basic security functions (see the security targets of 11g for EE and SE). And after having installed and secured the software (in Part 1 of this series) we are now ready to create our first database instance. One of the first steps in this process is – and I assume you don’t use clicka-di-click-DBCA blindly – creating/adapting the initial init.ora file.

Of the hundreds of initialization parameters in 11g, quite a handful influence the behavior in such a way that they count as security relevant. These parameters are often barely noticed and rarely changed from their defaults.

Take for example the parameters OPEN_LINKS and OPEN_LINKS_PER_INSTANCE. When asking colleagues around me, most of them never ever change(d) these defaults (both: 4), and when asked whether the database instance actually uses database links or other remote, distributed connections (XA connections), I harvested looks which can only be interpreted as “ehm… why do you ask, should I have bothered?” Maybe… we should at least look at what these parameters are intended to do.

OPEN_LINKS determines the maximum number of concurrently open database links and/or connections for external procedure calls of a single session; OPEN_LINKS_PER_INSTANCE does almost the same, but for the whole instance, and it includes migratable XA connections as well. First of all, it makes no sense to set OPEN_LINKS larger than OPEN_LINKS_PER_INSTANCE, which is pretty obvious. But why do they matter to security?
Especially the OPEN_LINKS_PER_INSTANCE pool can consist of connections which are relatively easy to hijack. So, if a hacker gets access to a database server with open connections, (s)he can take over one of those connections and access the (target) database without the need to authenticate, because the connection was already authenticated when it was established. And each currently unused but open connection is a “hole” in the security shell of the targeted database (for example, a connection of a pending distributed transaction). So, allowing more connections than you will ever use is like pricking holes in the defenses of the targeted databases. If you know your instance will never use database links or allow XA connections, set these parameters to 0 and close the holes before someone else pokes them open. On the other hand, application developers should take care not to leave database links open unnecessarily.
(BTW: securing database links might be another blog in the future …)
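
To put this into practice, a minimal sketch could look as follows (assuming an 11g instance in which database links and XA connections are really not used; both parameters are static, so the change only takes effect after an instance restart):

-- Check the current settings first
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name IN ('open_links', 'open_links_per_instance');

-- Close the holes: no open database links, external procedure
-- connections or migratable XA connections will be allowed
ALTER SYSTEM SET OPEN_LINKS = 0 SCOPE=SPFILE;
ALTER SYSTEM SET OPEN_LINKS_PER_INSTANCE = 0 SCOPE=SPFILE;
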
Another often overlooked parameter is SQL92_SECURITY, which by default (in 11.2) is set to FALSE, but should be TRUE. The effect of TRUE is that a user must also have the SELECT privilege on a table/view used in the WHERE clause of an UPDATE or DELETE statement in order to be able to execute that update or delete. This tightens the restrictions a little more to prevent unauthorized data changes.
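
A one-line sketch for this one (again assuming 11g, where SQL92_SECURITY is a static parameter, so it is set in the spfile and picked up at the next restart):

ALTER SYSTEM SET SQL92_SECURITY = TRUE SCOPE=SPFILE;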

Ever heard of the “SEC_”-parameters, like SEC_PROTOCOL_ERROR_FURTHER_ACTION and SEC_PROTOCOL_ERROR_TRACE_ACTION? Both reign over the TTC protocol and its possible errors. The first one governs what should be done if such an error occurs, or what has to happen when too many errors have occurred; the second sets the tracing options for these errors. TTC is the Oracle wire protocol, used by OCI and by the JDBC Thin driver that allows direct connections to the database on top of Java sockets. Again, if something goes wrong with a connection it would be nice to know why. And if someone is trying to break in via a JDBC connection, the admin/DBA can be warned directly if the trace action is set to ALERT.
The default trace action is TRACE, which is okay, but it should never be changed to NONE, because then you could easily miss the many bad packets which can indicate a Denial of Service (DoS) attack on your database or its clients.
SEC_PROTOCOL_ERROR_FURTHER_ACTION can be set to the values CONTINUE (the default), DELAY or DROP. The actual actions taken are: A) CONTINUE: do nothing and go on, normal operations just continue (except maybe logging it when SEC_PROTOCOL_ERROR_TRACE_ACTION is set to TRACE or LOG); B) DELAY: delay the bad packets of a session, so that communication over this session is slowed down (which is to say, until it gets unattractive for the attacker and/or the normal user); or C) DROP: drop the offending session after x bad attempts. Setting the last two is a bit tricky because they must also contain a value indicating what the delay should be, or after how many bad packets the Oracle server should start dropping sessions.
When setting these options, don’t forget the brackets as indicated in the documentation! The value must be written as below in order to be effectively changed:

SQL> ALTER SYSTEM SET SEC_PROTOCOL_ERROR_FURTHER_ACTION = '(DROP, 20)' SCOPE=BOTH;

In this example the database server would drop offending sessions after 20 bad TTC-packets and the client would show the error ORA-03134.
CONTINUE does not impact the good sessions, unlike DELAY, which impacts other sessions by delaying the bad session as well as the waiting good sessions; such a slowdown is at least an indication that something is going on. So, I tend to choose DROP in conjunction with SEC_PROTOCOL_ERROR_TRACE_ACTION=TRACE or even ALERT. LOG only registers a short notice in the alert log, which often is not enough to debug what precisely happened.
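
A companion sketch for the trace action (assuming 11g; this parameter is dynamic, so no restart should be needed):

ALTER SYSTEM SET SEC_PROTOCOL_ERROR_TRACE_ACTION = 'ALERT' SCOPE=BOTH;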

Apropos DoS attacks… setting SEC_MAX_FAILED_LOGIN_ATTEMPTS (default: 10) to a value equal to, or just a tiny bit higher than, the highest value used in all of the profiles (where it is called FAILED_LOGIN_ATTEMPTS (default: 10)) is the overall emergency brake for failed logins into the instance, and it can help to prevent or stop brute force attacks, or at least break them when someone is just trying to guess the password of a specific account. This parameter caps higher values of the profiles! Personally, I find 10 consecutive failed login attempts quite high. Batches and other automated processes logging in “know” their correct passwords, and users who manually log in and miss it more than 5 times (counted since a) the last password reset, b) the last successful login or c) a DBA’s unlock command) are simply clumsy and should be reminded to take more care typing their passwords.
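
A minimal sketch of keeping the two limits aligned (assuming 11g and, for the example, a limit of 5; SEC_MAX_FAILED_LOGIN_ATTEMPTS is static, so it requires a restart):

-- Which FAILED_LOGIN_ATTEMPTS limits do the profiles actually use?
SELECT profile, limit
FROM   dba_profiles
WHERE  resource_name = 'FAILED_LOGIN_ATTEMPTS';

-- Tighten the profile limit ...
ALTER PROFILE DEFAULT LIMIT FAILED_LOGIN_ATTEMPTS 5;

-- ... and set the instance-wide emergency brake to match
ALTER SYSTEM SET SEC_MAX_FAILED_LOGIN_ATTEMPTS = 5 SCOPE=SPFILE;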

The next SEC_-parameter is SEC_CASE_SENSITIVE_LOGIN. Luckily it defaults to TRUE in 11g and so activates the case sensitivity of passwords. When migrating from 10g to 11g, the formerly case-insensitive passwords of 10g are kept until the first password change in 11g. It should stay TRUE, and case-sensitive passwords should always be used if possible.
In 12c this parameter will be deprecated, and there are other ways to force a case-insensitive login. Have a good look into the Database Upgrade Guide 12c and follow the link therein to the Database Security Guide.
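
In 11g you can check which users still carry only the old 10g password verifier (their logins should effectively remain case-insensitive until the password is changed); a minimal sketch:

-- Users whose password has not been changed since the 10g days
SELECT username, password_versions
FROM   dba_users
WHERE  password_versions NOT LIKE '%11G%';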

The last of the SEC_-parameters is the static parameter SEC_RETURN_SERVER_RELEASE_BANNER. This parameter works a little bit like the “ServerTokens” directive of an Apache web server, but it is only effective for unauthenticated clients, which makes it very difficult to test.
In Full mode (here: TRUE) Apache might extend invitations to hackers with answers like:

Server: Apache/2.0.41 (Unix) PHP/4.2.2 MyMod/1.2

In Production mode (here: FALSE) an Apache server just answers with:

Server: Apache

When set to FALSE, an Oracle instance answers only with the main RDBMS version of 11.0.0.0 instead of the server’s correct version number of, say, 11.2.0.4 – so the target could be a fully patched install or a just-out-of-the-box install with all its bugs.
In order to change the value, the database has to be restarted! So leave this one on FALSE.

Below is a list of other parameters which are (partly) relevant to security (a quick review query follows after the list):

  • AUDIT_FILE_DEST: sets the path to the audit files when AUDIT_TRAIL is set to “OS” or “OS, extended”. This path should be secured and monitored to prevent, or at least be able to “see”, tinkering with the audit logs.
  • AUDIT_SYS_OPERATIONS: should be set to TRUE, always. It is not as comprehensive as the Fine Grained Auditing some auditors might expect, but nevertheless “it might guard the Guards” a little bit.
  • AUDIT_TRAIL: choose at least “DB, extended”, but on systems where the DBAs are not the system administrators, maybe someone else should check the audit logs on the file system?
  • DIAGNOSTIC_DEST: don’t let it fill up your Oracle Home and, again, don’t let it be tampered with; it contains valuable (forensic?) information about the goings-on in your database.
  • DISPATCHERS: the same goes here as for OPEN_LINKS: if you don’t use it, don’t set it, or set it to 0.
  • GLOBAL_NAMES: if set to TRUE, database links have to use the service name or the global_name respectively, which could form an extra hurdle for some hackers.
  • LOG_ARCHIVE_%: protect this directory carefully, because firstly you might need it to restore your database, and secondly remember: it contains your data (albeit in a form you are not used to accessing) which you are trying to protect!
  • MAX_ENABLED_ROLES: a deprecated parameter which defaults to 30 in 11gR1; from 11gR2 onward it is ignored, so there is no way to prevent users from gathering all the roles they can get.
  • O7_DICTIONARY_ACCESSIBILITY: since 11g the default is FALSE; keep it that way, otherwise you allow access to data dictionary objects whenever an ANY privilege is granted.
  • OS_AUTHENT_PREFIX: don’t use ‘OPS$’ or ‘’ (the empty string), which everybody would try first…
  • OS_ROLES: TRUE would leave it to the OS to manage roles, and the OS is easier to reach than the database…
  • REMOTE_LOGIN_PASSWORDFILE: do yourself a favor and never set it to NONE.
  • REMOTE_OS_AUTHENT: keep the default of FALSE; it will be deprecated in 12c.
  • REMOTE_OS_ROLES: keep the default of FALSE and let the database manage the roles of remote users.
  • RESOURCE_LIMIT: in an EE it would fully activate the Resource Manager when set to TRUE and therefore enforce the resource parameters of the profiles; in SE it only seems to activate the resource limits of the profiles. So, set it to TRUE anyway.
  • SMTP_OUT_SERVER: if you don’t use it, don’t set it!
  • SPFILE: it specifies the path to the binary spfile, and that is part of your configuration, which should be extra protected.
  • UTL_FILE_DIR: just don’t use it anymore; use DIRECTORY objects instead. All OS paths entered here are available to all authenticated users for read AND write access via PL/SQL!
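
As a starting point for checking these settings on your own instance, a quick look at the current values is easily scripted; a minimal sketch (parameter names as used in 11g):

-- Review the security-relevant parameters discussed above in one go
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name IN ('audit_file_dest', 'audit_sys_operations', 'audit_trail',
                'dispatchers', 'global_names', 'o7_dictionary_accessibility',
                'os_authent_prefix', 'os_roles', 'remote_login_passwordfile',
                'remote_os_authent', 'remote_os_roles', 'resource_limit',
                'smtp_out_server', 'utl_file_dir')
ORDER  BY name;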

This list does not pretend to be complete. It should only fire up your imagination to study the init parameters some more. It really is quite interesting!
… and I think in the next blog I might dive into the possibilities and limitations of Profiles and Roles…

The business case for replacing custom software
https://technology.amis.nl/2015/06/16/de-business-case-voor-vervanging-van-maatwerksoftware/ – Tue, 16 Jun 2015 07:44:23 +0000

IT projects around custom software often fail to meet financial expectations. Not infrequently, a major cause is that in addition to the functional extensions on which the business case and the budget are based, a substantial technical catch-up effort has to be made that is not explicitly included in the budget.

Replacing custom software weighs heavily on the budget, which often makes postponing easier than replacing. This surprises me, because it is well known that custom software has a finite lifespan, isn’t it? This holds on economic grounds such as speed and user friendliness, but certainly also for technical considerations such as supported hardware and operating system compatibility. Yet in budgeting terms it often seems to come unexpectedly, which makes the business case for the replacement hard to make.

Purchase and replacement of machines

When a factory purchases a large machine, it is customary to include the machine as a capital asset on the balance sheet and to apply annual depreciation. The depreciation period is the shorter of the technical and the economic lifespan. In addition to the operational costs of regular maintenance, the investment costs are thereby also operationalized: the monthly costs, including depreciation, cover part of the cost of the machine.

If the machine has to be customized during implementation to fit the physical space and the business process, these costs are capitalized as well. And if no standard machine is available for the company’s situation, a machine can be designed and built in-house – made to measure. The construction costs of such a machine are comparable to the purchase costs of a ‘commercial off the shelf’ system and are likewise capitalized on the balance sheet and operationalized through depreciation. The depreciation can be regarded as repayment of a loan that was obtained – internally or externally – to make the purchase. Or it can be seen as a savings pot for eventually carrying out the replacement.

If nothing changes in the factory and its environment, the machine is replaced at the end of the depreciation period without any change in the cost pattern. If the new machine is cheaper or more expensive, or its lifespan differs, the monthly depreciation becomes higher or lower. In any case, no complicated exercise is needed to free up budgets for the replacement. There is no surprise for which the organization is unprepared.

There may of course be changes – leading to consideration of an interim replacement of the machine. New regulations (for example on health and safety or the environment), a changed market situation, a redesigned business process, newly developed services, rising costs of energy or maintenance supplies, or increasingly scarce knowledge can create a business case for replacing the machine before the end of its expected lifespan. Such a business case is driven by concrete business wishes and opportunities and can only partially be based on the value of the depreciation.

And now: custom software

The story above should apply to software as well. A project to build a new custom system is sometimes an explicit ‘as-is’ replacement of an existing solution, and in many other cases it is at least partially intended to replace existing systems. Although such projects, just like the factory machines, should be financeable from depreciation, completely new budgets often have to be freed up for the IT project. As if it were a one-off, unexpected expense for which no funds are available.

Since the shift from ‘CAPEX’ to ‘OPEX’ on the basis of cloud services is a hot topic, it may seem strange and contrarian to argue for treating custom software components as a capital asset. The point is: custom software is a capital asset and usually represents a sizeable investment. And precisely by recognizing it as such, insight into the operational costs emerges. Depreciation on the capital asset creates the budgetary room to replace the custom software.

I often see a business case for valuable new functionality being seized upon as the occasion for overdue maintenance on, or even a complete replacement of, a custom application. The business case should be able to carry the costs of the new functions, but in my view it should not supply the budget for the complete replacement. Unfortunately, the funds for replacement have often not been set aside. Here are a few guidelines that organizations with large stakes in custom software – or customized COTS software, and actually also pure standard packages – could consider:

  • Treat software that forms a crucial part of the business operations and/or that required a substantial investment as a capital asset. This means capitalizing it on the balance sheet and applying depreciation schedules based on the expected lifespan.
  • Realize that the investment in custom software does not only comprise the costs of software licenses for tools and generic components and the costs of development, but also the costs of building up knowledge of the technology used, for administration and maintenance, and of setting up a support organization.
  • Make the costs of using software systems transparent; with insight into the monthly costs you can charge on the expenses and determine the economic lifespan. It also creates insight that can be used when drawing up business cases for renewal and replacement; besides the depreciation on the initial investment and the costs of fixes and changes, these costs also include the costs of keeping the technical infrastructure up to standard.
  • Take into account the costs of keeping the technical infrastructure, and the knowledge of the staff involved, up to standard. As knowledge of aging technology becomes scarcer, these costs tend to rise.
  • Also inventory the risks and, where possible, express them in money as well. Outdated technology components pose risks in terms of security, compliance, support and available knowledge. These risks demand countermeasures, which cost money.

From costs to investment

It is valuable to regularly draw the comparison between custom software and the machine park as a thought exercise. With tangible machines it seems easier to think in terms of investment, depreciation, lifespan, business case and replacement than when we talk about something as intangible as software.

In short, capitalize your investments in software and the associated implementation costs on the balance sheet. Regularly revalue or recalibrate the systems and follow a depreciation schedule. This prevents maintenance and renewal efforts from being experienced as unexpected and unbudgeted costs. And it ensures that the business case for new functionality stays clean and that the investment decision can be taken in a well-considered way.

See also the (Dutch) article ‘Activering van zelfontwikkelde software en websites in jaarrekeningen van Nederlandse ondernemingen’ (http://www.compact.nl/artikelen/C-2003-2-Ginkel.htm) by Drs. R.M. van Ginkel RA ✝ and Drs. A.J. van de Munt RA on the fiscal and accounting considerations around capitalizing custom software.

Licensing ODA on NUP’s and with different metrics
https://technology.amis.nl/2015/06/05/licensing-oda-on-nups-and-with-different-metrics/ – Fri, 05 Jun 2015 14:34:05 +0000

Since Oracle launched the Oracle Database Appliance a few years ago, it has become clear that only the Enterprise Edition is allowed on the machine. But when not using the bare metal setup (using OracleVM instead), it is not always quite transparent which licensing requirements apply and what is allowed. More directly:

1. Is it allowed to license the ODA on NUP’s? Spoiler alert: yes, this is allowed.

2. Is it allowed to license with different metrics within one ODA, e.g. one node on NUP’s and the other node processor-based?

I’ll try to explain and answer the two questions below.

 

1. Is it allowed to license the ODA on NUP’s?

This question may be well known to some of us, but for the context of the second question and the new X5-2 model it’s worth mentioning.

The documentation is not completely clear about this. A former version of the documentation stated: “Customers are only required to license processor cores”. That phrase somehow disappeared from the current documentation, but it is still valid: you license on processor cores, but not necessarily on the Processor metric!

In the FAQ:

What database licenses are required for the Oracle Database Appliance X5-2?

–> Answer:  The Oracle Database Appliance enables customers to purchase database licenses using a capacity licensing model. Therefore, customers are only required to license processor cores that they plan to use.

In the partitioning document:

Oracle recognizes a practice in the industry to pay for server usage based on the number of CPUs that are actually turned on – the “Capacity on Demand” or “Pay as You Grow” models. Oracle allows customers to license only the number of cores that are activated when the server is shipped. Note: Oracle does not offer special licensing terms for server usage models where the number of CPUs used can be scaled down or their usage varied – the “Pay Per Use” or “Pay Per Forecast” models.

So basically all the papers state that you have to license on cores, but not which metric to use…

So I asked LMS a while ago to verify the ‘Note’ in the partitioning document. They had (back then) to go to Corporate level to get this question answered:

There is no “licensing” requirement to license ODA with any specific metric. Whichever metric they can stay in compliance with, based on their usage, they can license by that.

So, yeah, it’s possible to license by NUP’s. Now there are more opportunities to use the ODA in development or testing environments…

Example for scaling to 4 cores per node (8 cores in total), based on the core factor table:

–> 8 cores x 0.50 (core factor) x 25 (minimum users per Oracle processor for database usage) = 100 users.
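
To double-check how many cores the instance actually sees (for example after capacity-on-demand scaling), a query such as the sketch below can help; note that v$osstat only reports what the OS presents to the instance, so treat it as an indication, not as a licensing statement:

-- CPU cores and sockets as seen by the database instance
SELECT stat_name, value
FROM   v$osstat
WHERE  stat_name IN ('NUM_CPU_CORES', 'NUM_CPU_SOCKETS');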

2. Is it allowed to license 1 node on NUP’s and the other node processor-based?

Two sources already gave an answer:

– This Oracle blog post.

– This Oracle data sheet of the X5 model:

Both servers must have the same number of cores enabled; however, it is possible to license software for only one of the servers or for both servers, depending on the high availability requirements.

but that’s not the official LMS statement. So to be sure, we asked LMS. Here’s the answer (to be clear: this is not the original quote, it’s translated from Dutch):

When both nodes within the ODA act as each other’s disaster-recovery environment, then they must be licensed with the same metric. If one node acts as a production server and the other as a test server, then the nodes are allowed to differ in metric.

So it’s allowed to license one server on NUP’s and the other one on the Processor metric. But, as always, for questions about your definitive configuration, contact your local Oracle representative.

And just to be clear: this is only possible when creating single EE databases within the ODA. When creating such a database with the OAKCLI tool, you will get a question like this, asking where the database should reside:

[screenshot: ODA node choice]

Regards…

 

Sources:

ODA-licensing doc: http://docs.oracle.com/cd/E22693_01/doc.12/e25375/toc.htm

ODA-FAQ: http://www.oracle.com/technetwork/database/database-appliance/oracle-database-appliance-faq-1903200.pdf 

Partitioning doc: http://www.oracle.com/us/corporate/pricing/partitioning-070609.pdf

Core-factor table: http://www.oracle.com/us/corporate/contracts/processor-core-factor-table-070634.pdf

Oracle blogging: https://blogs.oracle.com/eSTEP/entry/oda_is_licensing_each_node

ODA X5 datasheet: http://www.oracle.com/technetwork/database/database-appliance/documentation/oracle-database-appliance-ds-1867697.pdf 

Oracle Process Cloud – first impressions
https://technology.amis.nl/2015/06/01/oracle-process-cloud-first-impressions/ – Mon, 01 Jun 2015 11:05:11 +0000

As of this week, Oracle has released the Oracle Process Cloud Service (PCS): https://cloud.oracle.com/process. This PaaS cloud service offers a development platform for implementing business processes. The underpinning technology for this cloud service is the Oracle Fusion Middleware BPM stack. As a result, using the Process Cloud Service should be easy for people who are already familiar with Oracle Fusion Middleware BPM. For example, familiar platform components like BPM Composer and BPM WorkSpace are also present in PCS. With that in mind, we decided to give it a go!

This article outlines the steps for implementing a simple process in the Oracle Process Cloud Service. We were inspired during a discussion with a customer on SOA Governance, more specifically: how to handle/grant access to specific services. The customer mentioned that people were complaining that it was unpredictable how long it took before access to a specific service in a specific environment was granted. These types of processes can be handled very well within the Oracle Process Cloud. So, the ‘ServiceAccessApproval’ process will be shown in this example.

Sample process ServiceAccessApproval

The process ServiceAccessApproval in short consists of the following steps:

  1. a Project Member requests access to a specific service in a specific environment
  2. when approval is required, the Service Access Approver will get a human task where he/she has to approve access to the service
  3. after service access is approved, the weather forecast of Amsterdam is retrieved… so … um … well … we can show a webservice invoke
  4. then, a human task is created for the Service Access Granter. This person will (technically) configure the service, so the requesting party can really access it

Implementation steps

The implementation of the ServiceAccessApproval process is done in the following steps:

  1. create a new application: ServiceAccessApproval
  2. model the process flow (BPMN flow, without the implementation)
  3. model the Business Object types (data modelling)
  4. model the Decisions (business rules)
  5. configure external Web Services
  6. model the Web Forms (task screens)
  7. finalize the process implementation
  8. deploy the process
  9. run the process

Step 1. create application ServiceAccessApproval

Log in as user weblogic into Process Composer.

Create a new application: ServiceAccessApproval

Pick the ‘Web Form / start your process with a web form’ option.

Step 2. model the process flow

Model the process flow in the following 2 steps:

  • Make 3 swim lanes: ProjectMember, ServiceAccessApprover, ServiceAccessGranter
  • Add all process activities

The process flow:

Step 3. model the Business Object types

Now, add a business object ServiceAccessBO.

Note that a lot of pre-defined business types are already present, mainly coming from/with the NotificationService.

Step 4. model the Decisions

Create a Decision (business rule) with a string as input (the service name) and a boolean as output. The boolean output indicates whether the requested service access has to be approved. The decision logic checks for the presence of the substring "GEN" in the service name. If present, the service access has to be approved, so the boolean output is set to true.

Step 5. configure external Web Services

Create a web service for retrieving weather information. Note that for adding a web service, the WSDL has to be uploaded. If there are imports included, the WSDL and the accompanying files have to be uploaded in a ZIP file. The web service WSDL used here can be found at http://www.webservicex.net/globalweather.asmx?WSDL

Step 6. model the Web Forms

Make 3 web forms: ‘ServiceAccessStart’ for starting a process, ‘ApproveServiceAccess’ for approving access to a service, and ‘GrantServiceAcess’ for indicating that access has actually been granted.

The basic web forms initially look rather plain.

So, let’s do some layout work on them…

Ah, that looks better 😉

Step 7. finalize the process implementation

So, by now we have all the pieces. They only need to be used in the process implementation. So, we go back to the process and…

Add the process data objects.

And for each of the process activities, link them to the right implementation. For example, link the Decision Service.

And when that is done for all of the process activities, the data associations have to be established, again for the ‘ApprovalRequired’ Decision Service.

Step 8. deploy the process

Now that the process is done, go to the management console.

Select the appropriate environment and click on ‘deploy’.

Now, go through the deployment screens.

Step 9. run the process

Now, the ‘project member’ can start a process instance. First, he will log in to the workspace and then select the SA application.

Then, the ‘project member’ will enter the details of his request.

Next step: the ‘service access approver’ can approve the request.

Next step: the ‘service access granter’ will configure service access and then confirm his task.

Summary

Our first impression of the Process Cloud Service is that it is fairly straightforward to use, especially for those with experience with the current FMW BPM Suite. More detailed investigations of the Process Cloud Service will follow in other blog articles.

The code for this example can be downloaded: exported process.

Complex problem? A simple solution is enough
https://technology.amis.nl/2015/05/29/complex-probleem-een-simpele-oplossing-is-goed-genoeg/ – Fri, 29 May 2015 08:01:08 +0000

The daily practice of the IT specialist consists of solving problems. Now, an IT specialist does love a challenge. The solution is therefore often sought along the lines of the problem, which is often complex. Give a complicated problem to a group of highly educated puzzle solvers and you get a complicated solution. Engineers will rarely say that something is too complex. They see it as a nice challenge instead. Because a problem for which no solution can be found does not exist.

The story goes that in the 1960s NASA was looking for a pen for use by astronauts during space travel. They put this question to their best engineers, who came up with the solution three years and several millions later: a pen that could write in zero gravity, could withstand three times normal pressure and could handle temperatures of up to 150 degrees. When they showed this solution to their Russian colleagues, those Russian colleagues told them that they simply gave their astronauts a pencil.

The underlying question or wish

This example clearly shows that we often try to solve the problem (my pen doesn’t work in zero gravity) instead of looking at the underlying question or wish (writing in space). We should therefore more often pause to consider the background of our problems. Are they really a problem, or merely a symptom? Can we perhaps avoid a complicated solution by investigating what the cause is?

Indicators

As solution-oriented IT engineers we wrestle with this dilemma. Are we merely solving problems? Or are we really removing the causes? And how do you recognize this? Indicators include, for example, disproportionate complexity or a solution that seems to work but causes many problems elsewhere.

The following questions can help in finding the underlying cause:

  • Why do we really need this?
  • And why do we need that, then?
  • What happens if we do nothing and leave this function out entirely?
  • What is the underlying reason for implementing this rule/code/function?
  • Do we really need all these exceptions? And when they do occur, can we handle them differently?
  • What would happen if this whole system did not exist?
  • Who suffers if this goes wrong?

The search is essential

The search for the underlying cause may confirm that the complexity is really needed. Or it may turn out that a simpler solution is possible. It may even turn out that no solution is needed at all anymore, because the underlying need has disappeared. This search is essential. Because you certainly do not want to end up with a solution for which you have to go looking for a problem.

AMIS strives to keep its IT solutions simple. This makes realization cheaper and the product easier to maintain. And that is a nice challenge for our engineers: go look for a solution that can do the same but is simpler.

 

Introducing the Integration Cloud Service
https://technology.amis.nl/2015/05/27/introducing-the-integration-cloud-service/ – Wed, 27 May 2015 09:58:36 +0000

Oracle released some more Cloud offerings, and in this article we introduce the Integration Cloud Service. This cloud service lets your organization create integrations between cloud applications, but also between cloud and on-premise applications. Create connections to well-known and less well-known SaaS applications using a bunch of cloud adapters, publish or subscribe to the Messaging Cloud Service, or use industry standards like SOAP & REST. The available set of cloud adapters will certainly grow in the future once the marketplace is fully up and running.

Why should organizations consider the Cloud?
Let’s get started with the key benefits and features before diving into them in more detail. Why should organizations consider the Cloud?
In this day and age more and more software is going into the cloud; some of it is even developed with a cloud-first strategy. Think of your CRM, ERP or HCM application. These applications do not do business standalone: they communicate with each other, they exchange information. The Integration Cloud Service (from now on: ICS) provides these integrations, and does so in a simplified way.

The Cloud has a lot of advantages: it is probably the most cost-efficient way to use, maintain and upgrade an enterprise service bus. It is available at much cheaper rates and hence can significantly lower the company’s IT expenses. Besides, there are many pay-as-you-go and other scalable options available, which makes it very reasonable for your organization. Since all your data is stored in the cloud, backing it up and restoring it is relatively much easier than doing the same on a physical device. And once you are registered in the cloud, you can access the information from anywhere with an Internet connection.

So what does the Integration Cloud Service have to offer to meet these demands?

Simplified UI
ICS gives a web-based, point & click integration experience where you can easily create integrations between Cloud applications, public web services and on-premise applications.

Rich Connectivity
ICS has a standard library of Cloud & On-premise connectors which includes Oracle SaaS applications, but also connectors for the Messaging Cloud Service and industry standards like SOAP and REST.

Recommendations
The mapping builder, used to create the necessary mappings between the adapter connections, has a built-in recommendation engine that offers guidance on how to best map source fields to target fields.

Visibility & Error Detection
ICS has rich monitoring and error management built in. With advanced tracking you can easily spot inconsistencies and monitor the usage and performance of integrations. It generates alerts, and even emails them, when connections fail to work. With the guided error handling, errors are easy to repair.

Overview of the Integration Cloud Service
Because it is fully web-based, you only need to open a browser and go to the URL you received after creating your ICS instance. After signing in to the Integration Cloud Service you are welcomed by the home page.

The start page is constructed of a couple of tiles, one for each major functionality of ICS. Through this page you can easily learn more about a functionality, or navigate to it. All the functionalities are part of the Designer Portal, so my guess is that this page is not going to be used much, or not at all. To navigate to the Designer Portal, click on the associated menu item in the top right corner.

The Designer Portal page shows the four pillars of ICS; Integrations, Connections, Lookups and Packages.

  • Integrations: Connect two cloud applications, using available connections, and define how they interact
  • Connections: Define connections to the cloud and on-premises applications
  • Lookups: Map the different values used by your applications to describe the same thing
  • Packages: A package is associated with integrations and can be used as a way to group them

Before you can create integrations between cloud applications you need to define the connections. It is also possible to create SOAP and Messaging Cloud connections out of the box, but let’s look at the connections first.

Connections
At this moment almost ten adapters are available out of the box:

  • Oracle ERP Cloud: connector for the Oracle ERP Cloud
  • REST Adapter: generic connector for REST APIs
  • Web Service (SOAP) Adapter: generic connector for Web Services
  • Eloqua (Marketing Cloud): connector for the Oracle Marketing Cloud
  • Oracle Messaging Cloud Service: connector for the Messaging Cloud Service
  • Oracle HCM Cloud: connector for the Human Capital Management Cloud
  • Oracle Sales Cloud: connector for the Oracle Sales Cloud
  • Oracle RightNow: connector for the Customer Service Support Cloud
  • Salesforce: connector for the Salesforce CRM (SaaS)

Click on the Connections image on the Developer Portal page to navigate to the list of connections. By default all connections are listed. A connection can be in one of three statuses: draft, in progress or configured. Draft means it is not 100% finished, in progress means a user is working on it right now, and configured means it is 100% done and the connection test was successful.

You can view only the connections that are in progress or configured by clicking on the status in the menu on the left side. If you are looking for specific entries, you can search by entering the name, or part of the name, in the search box. You can use the * character as a wildcard.

Search Connections

Each connection displays its name, version and the kind of application it connects to. Each kind of application has its own image to differentiate it from the others. The status, the last update date and the last updating user are also shown.

Connections Details

If you click on the Connection Details icon, an overlay appears with more details, like who created the connection and when. On each connection some actions can be executed: a connection can be edited, cloned or deleted. Some connections, like those of the RightNow adapter, also allow the metadata to be refreshed.

Connection  Actions

Connections can be edited on the fly: if the WSDL URL or the credentials change, the settings can be updated. Let's look at the details of this RightNow connection.

Connection Settings

You can assign the email address of an administrator to the connection. Notifications are sent to this address when problems or changes occur in the connection. On the settings page for this adapter you can configure the connectivity and the credentials.

Connections Connectivity Settings

Configure the WSDL of the RightNow Cloud service

Connection Credentials Settings

Configure the username and password to access the Cloud service with

Before a connection can be used in integrations, it needs to be tested. Click on the Test button in the top right corner; a green notification is displayed if the test is successful, and a red one if it fails.

Test Connection

In a separate article, to be published in the upcoming week(s), I will go into full detail about creating connections.

Integrations
After defining the connections it is time to create an integration between two cloud connections. At this moment three types of integrations are possible:

  • Map My Data: drop source and target onto a blank canvas
  • Publish to ICS: connect your source to send messages to ICS
  • Subscribe to ICS: add targets to receive messages from ICS

Click on the Integrations image on the Designer Portal page to navigate to the list of integrations.

Designer Portal Integrations

By default all integrations are listed. An integration can be in one of five statuses: draft, in progress, configured, active or failed activation. Draft means it is not 100% finished; in progress means a user is working on it right now; configured means it is 100% done; active means a configured integration was successfully activated; and failed activation means the integration ran into problems during activation.

All Integrations

You can view only the integrations that are in progress, configured, active or failed by clicking on the status in the menu on the left side.

Configured Integrations Active Integrations Failed Integrations

If you are looking for specific entries, you can search by entering the name, or part of the name, in the search box. You can use the * character as a wildcard, for example KV*.

Search Integrations

On an integration it is possible to execute a few actions, depending on its status. An integration can be viewed, edited, cloned, exported and deleted. Active integrations can be deactivated. Some actions are disabled in certain statuses (e.g. it is not possible to edit an active integration).

Integration Actions

When viewing or editing an integration the Integration Canvas is used.

Integration Canvas

It consists of a source and a target adapter connection. Between the adapters you can create mappings for the request and for the response flow. It is also possible to enrich data by calling a secondary adapter (a callout). This is possible on both the request and the response flow, just after the source and target adapter.

Let's have a look at the source adapter and the target adapter. In this example both are Generic SOAP connections. A Generic SOAP connection can be created without defining the connection first.

SOAP Source wizard step 1

The first step consists of basic information and the choice to define the connection from an existing schema or, as in this example, a WSDL.

SOAP Source wizard step 2

Next, enter the WSDL URL and choose the Port Type and Operation to use for the incoming adapter. Besides a source, every integration needs a target. In this example this is also a Generic SOAP connection; it works just like the source SOAP connection, but uses a different UI.

SOAP Target wizard

If extra data is needed that is not available in the request or response message of an adapter, it is possible to use callouts to a secondary adapter connection.

Integration Canvas Callouts

Because the data type of the request differs from that of the response, the data needs to be mapped. Click on the Request Mapping to view, create or edit the mapping. The request mapping is straightforward: the input is mapped to the only field available.

Integration Request Mapping

The response mapping maps the response from the target adapter back to the source adapter. If you have callouts, their variable data is also available for this mapping. In the response mapping you have access to a maximum of four data objects.

Integration Response Mapping

To view the XSLT mapping behind it, or to create more advanced mappings, click on a target element name that you want to map. In this detailed view mode you can map source fields to target fields, view the XSLT syntax used, and edit the structure using Mapping Components.

Integration Mapping Builder

Mapping Components include functions for conversions, dates and strings, as well as operators and XSL elements like choice, when and other structures.

Integration Mapping Components

Below is another example of an integration, but this one connects a generic SOAP connection with the Oracle RightNow adapter. Both the Web Service and the RightNow adapter support passing faults through.

Integration Canvase with RightNow

Each adapter has its own kind of connection setup wizard. RightNow supports different operation modes (single or batch) and types (CRUD or ROQL). The CRUD operation type has four cloud operations: create, destroy, get and update. The RightNow adapter works with Business Objects defined in RightNow; it is possible to select multiple Business Objects.

Integration Rightnow

In a separate article, to be published in the upcoming week(s), I will go into full detail about creating integrations.

Lookups
The Integration Cloud Service also gives you the possibility to map the different values your applications use to describe the same thing, like currency codes. For everybody who uses SOA Suite: it is essentially a DVM (Domain Value Map). Click on the Lookups image on the Designer Portal page to navigate to the list of lookups.

Designer Portal Lookups

The Lookups page shows all lookups in one list.

All Lookups

A few actions can be taken on each lookup: it can be edited, cloned, exported and deleted.

Lookup Actions

A lookup is a table of connectors and domain value mappings. You can easily add other connectors or more values.

75_lookups_lookup_edit

When adding a connector column, you first need to select the connector to assign values to, for example the REST Adapter, and then enter the associated domain values.

Lookup Add Connector

Other features worth mentioning are the possibility to export and import lookups; the export format is CSV.

Export Lookup
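As an illustration, a lookup that maps currency codes between two connectors could export to a CSV roughly along these lines (the connector names and values are illustrative, and the exact header layout of the exported file may differ):

Rest_Adapter,Oracle_RightNow
EUR,Euro
USD,US Dollar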

Lookups can be used in the mappings between source and target in an integration. Use the lookupValue function and select the source value to map.

Use Lookup

In a separate article, to be published on the 26th of May, I will go into full detail about creating and using lookups.

Packages
The last feature of ICS is packages. With packages you can group integrations together. When creating an integration you can assign it to a specific package name; multiple integrations can be assigned to the same package name. Packages can be exported, imported and deleted, which means integrations can easily be transported to a different ICS instance.

To view all integrations that are part of a package, click on the “Action” icon and select “View Integrations”.

89_packages_package_actions

The pop-up shows the details of the integration, e.g. the description, creator, last updater and optionally the Endpoint URL at which the integration can be accessed.

View Package

Recap
Oracle’s Integration Cloud Service is an hourly or monthly subscription-based Cloud solution that brings a web-based, point & click experience in which you can easily create integrations between Cloud applications, (public) web services and on-premises applications. It has a standard library of Cloud & on-premises connectors, which includes Oracle SaaS applications, but also connectors for the Messaging Cloud Service and industry standards like SOAP and REST.


The post Introducing the Integration Cloud Service appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/05/27/introducing-the-integration-cloud-service/feed/ 0
Kan iemand mij uitleggen waar IT heen gaat? https://technology.amis.nl/2015/05/21/kan-iemand-mij-uitleggen-waar-it-heen-gaat/ https://technology.amis.nl/2015/05/21/kan-iemand-mij-uitleggen-waar-it-heen-gaat/#comments Thu, 21 May 2015 11:51:20 +0000 https://technology.amis.nl/?p=36101 “If you can’t say it, then just sing it.” That used to be a popular saying when you could not put into words or name what you wanted to say. I often think back to it when I see the fantastic buzzwords pass by that describe new trends or hypes in our beautiful IT world [...]

The post Kan iemand mij uitleggen waar IT heen gaat? appeared first on AMIS Oracle and Java Blog.

]]>
“If you can’t say it, then just sing it.” That used to be a popular saying when you could not put into words or name what you wanted to say. I often think back to it when I see the fantastic buzzwords pass by that describe new trends or hypes in our beautiful IT world. Sometimes it is old wine in new bottles, or merely a change of vintage year, but often the new buzzwords are so extravagant that you wonder: what do they actually mean by them?

Number 9 on Gartner's trend list

A fine example: Web Scale IT! This term sits at number 9 on Gartner's trend list for 2015. The explanation Gartner gives for it:

“Gartner notes that more companies will think, act, and build applications and infrastructure in the same way that technology stalwarts like Amazon, Google, and Facebook do. There will be an evolution toward web-scale IT as commercial hardware platforms embrace the new models and cloud-optimised and software-defined methods become mainstream.”

Do you still follow? Another evolution? Embrace new models? Cloud-optimised? Software-defined?

What triggers me in Gartner's trend list

To top it off, Gartner says:

“Gartner notes that the marriage of development and operations in a coordinated way (referred to as DevOps) is the first step towards the web-scale IT.”

When I read this explanation, it strikes me that it is a very IT-driven approach, whereas in my opinion the point is precisely that IT and the business get to work on new developments together. That they become friends in order to achieve shared goals. Not him and her, but us, us together. Then one question after another comes to mind:

  • “Companies will build applications and infrastructure like Amazon, Google”? Are they going to start over? Do they mean start-ups? Or has the law of the handicap of a head start been repealed? Do I no longer have a future with a 20-year-old or 120-year-old company?
  • These are very big steps. We have barely gotten used to cloud-based, and we are already moving on to cloud-optimized. What is that, then? The superlative? Level 2, 3 or 4?
  • Where is the voice of the business when people talk about software-defined methods and DevOps, the marriage between operations and development?
  • And what do they mean by ‘a coordinated way’? Is there perhaps a roadmap to success? If so, I would love to receive it, because you don't want to miss out on a chance like that. Right?

Resistance to change

weerstand tegen verandering

I could go on like this for quite a while. What trends like these mostly overlook is that we are dealing with people. People who often resist change, especially the bigger changes. And who can blame them! Because what is not being told is what the changes can mean for an employee: that there are, for example, opportunities to develop or train further, that his experience becomes ever more crucial, and that it is above all the boring part of the work that can be automated. It is no coincidence that well over 50% of IT projects fail because users and other stakeholders are not properly taken along…

An awful lot of trains

Changes also tend to come along at a rapid pace. It is not easy to determine which change you should embrace and which you should let pass you by. Which moving train do you jump on as a company, and which not? One thing is certain: there are an awful lot of trains running. You have to pick the right one, and jump on in time as well.

Exponential growth?

And then exponential growth shall be yours! What kind of growth? Yes, exponential growth: fast and hard. But what if I do not want to grow exponentially, but simply want to run a solid business? Is that still allowed? Or did my right to exist already evaporate the moment I thought this? Exponential has become too much of a magic word, too glorified in my view. What is wrong with solid entrepreneurship: keep moving and changing, of course, but above all listen carefully to what the customer wants? And sure, you do not want to wake up one day and find that the world around you has changed and new competition has risen. Not every industry has its Ubers and AirBnBs. Fortunately, there are plenty of examples of long-established companies that still hold their own today. Stay awake, yes, of course, but not necessarily in order to grow exponentially.

What is it really about?

So a great many questions come to mind when I see new trends and buzzwords such as Web-scale IT. It may seem as if I am only raising objections and hitting the brakes. But questions like these are frequently the topic of conversation when I get to sit down with a CEO or another executive. They are all working on this, or at least thinking about it. Above all, they think about where their opportunities lie, where they can make their contribution in the future, and how they can play a meaningful role for the employees and customers of today and tomorrow. I have not yet met anyone who said they needed Web-scale IT for that. Often they do not even know the term exists, or what it means. Is it down to the name, and will the market come up with something else again in two years' time?

Either way: the conversation about what the future looks like for a company is many times more important than the buzzwords that may or may not come with it. What's in a name…

The post Kan iemand mij uitleggen waar IT heen gaat? appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/05/21/kan-iemand-mij-uitleggen-waar-it-heen-gaat/feed/ 1
Stream Explorer and JMS for both inbound and outbound interaction https://technology.amis.nl/2015/05/19/stream-explorer-and-jms-for-both-inbound-and-outbound-interaction/ https://technology.amis.nl/2015/05/19/stream-explorer-and-jms-for-both-inbound-and-outbound-interaction/#comments Tue, 19 May 2015 18:48:09 +0000 https://technology.amis.nl/?p=36131 In this article, we will look at the very common interaction between Stream Explorer and JMS. JMS is a commonly used channel for decoupled exchange of messages or events. Stream Explorer can both consume messages from a JMS destination (through Stream) and publish findings to a JMS destination (with a target). The use case we [...]

The post Stream Explorer and JMS for both inbound and outbound interaction appeared first on AMIS Oracle and Java Blog.

]]>
In this article, we will look at the very common interaction between Stream Explorer and JMS. JMS is a commonly used channel for the decoupled exchange of messages or events. Stream Explorer can both consume messages from a JMS destination (through a Stream) and publish findings to a JMS destination (with a Target). The use case we discuss here is about temperature sensors: small devices distributed over a building, measuring the local room temperature every few seconds and reporting it over JMS. The Stream Explorer application has to look out for rooms with quickly increasing temperatures and report those over a second JMS queue. Note: this article describes the Java (SE) code used for generating the temperature signals. This class generates temperature values (in Celsius!) for a number of rooms and publishes these to the queue temperatureMeasurements. At some random point, the class will start a fire in a randomly selected room; in this room, temperatures will soon be over 100 degrees. The same article also contains the Java class HotRoomAlertProcessor, which consumes messages from a second JMS Queue; any message received on that queue is reported to the console.

Our objective in this article is to read the temperature measurements from the JMS Queue into a Stream Explorer application, calculate the average value per room and then detect the room on fire. This hot room should then be reported to the second JMS Queue.

Open Stream Explorer and from the Stream Explorer Catalog page, create a new item of type Stream. Select JMS as the source type.

image

Press Next.

Configure the URL for the WebLogic domain (http://localhost:7101), the WebLogic Admin’s username and password (weblogic/weblogic1) and the JNDI Name for the JMS Queue (or Topic): jndi/temperatureMeasurements

image

Press Next.

Define a new Shape. The properties in the JMS (Map)Message produced by the Java Class TemperatureSensorSignalPublisher are called RoomId (of type String) and Temperature (of type Float).
image
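As a reminder of where these properties come from, this is roughly what the publisher does for each signal (a minimal sketch; it assumes an open JMS QueueSession qsession and a MessageProducer qproducer, as in the publisher class, and the "Kitchen" value is illustrative):

MapMessage message = qsession.createMapMessage();
// property names must match the Shape: RoomId (String) and Temperature (Float)
message.setString("RoomId", "Kitchen");
message.setFloat("Temperature", 21.5f);
qproducer.send(message);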

Press Create.

The Exploration editor appears to create an exploration based on the Stream.

Define a Name. Then click on Create.

image

The temperature measurement events start streaming in:

image

The first step is the definition of a Summary: calculate the average temperature per room. Also set the time range for the aggregation to 10 seconds (determine the temperature using the most recent 10 seconds worth of data) and the evaluation frequency to 5 seconds.

image

Fewer events are shown in the Live Output Stream – and with less variation.

Next, add a filter: we are going to hunt for the room on fire. Only records with an average temperature higher than 80 degrees should be reported. Also change the name of the property AVG_of_Temperature to AverageTemperature.

image

The screenshot shows that in this case it is the Cafeteria that is on fire. If you stop class TemperatureSensorSignalPublisher and then start it again, it will take some time before it starts a fire again; once the fire has started, the Live Output Stream will show it.

Finally, click on Configure Target.

Configure a JMS Target, as shown in the figure. The URL is the familiar one (t3://localhost:7101), username and password are weblogic and weblogic1 and the JNDI Name of the JMS target is jndi/hotRooms.
image

Click on Finish. Publish the Exploration.

When a room with temperatures in the hot zone is now discovered, a message is published to the JMS Queue, in the form of a MapMessage with properties RoomId and AverageTemperature.
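On the receiving side, a listener such as HotRoomAlertProcessor can read these properties directly from the MapMessage; a minimal sketch (assuming msg is the javax.jms.Message delivered to the listener's onMessage method):

try {
    if (msg instanceof MapMessage) {
        MapMessage mess = (MapMessage) msg;
        // property names as published by the Exploration's JMS target
        String room = mess.getString("RoomId");
        float temperature = mess.getFloat("AverageTemperature");
        System.out.println("Hot room: " + room + " (" + temperature + " degrees)");
    }
} catch (JMSException jmse) {
    System.err.println("An exception occurred: " + jmse.getMessage());
}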

Stop and start class TemperatureSensorSignalPublisher. Run class HotRoomAlertProcessor to have it start listening to the jndi/hotRooms queue.

The former writes:

image

And the latter will report hot rooms by writing a message to the console:

image

While the Stream Explorer browser interface shows:

image

The post Stream Explorer and JMS for both inbound and outbound interaction appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/05/19/stream-explorer-and-jms-for-both-inbound-and-outbound-interaction/feed/ 0
WebLogic Server and OpenLDAP. Using dynamic groups https://technology.amis.nl/2015/05/18/weblogic-server-and-openldap-using-dynamic-groups/ https://technology.amis.nl/2015/05/18/weblogic-server-and-openldap-using-dynamic-groups/#comments Mon, 18 May 2015 12:45:30 +0000 https://technology.amis.nl/?p=36072 Dynamic groups in an LDAP are groups which contain a query to specify its members instead of specifying every member separately. Efficient usage of dynamic groups makes user maintenance a lot easier. Dynamic groups are implemented differently in different LDAP server implementations. Weblogic Server can be configured to use dynamic groups in order to fetch [...]

The post WebLogic Server and OpenLDAP. Using dynamic groups appeared first on AMIS Oracle and Java Blog.

]]>
Dynamic groups in an LDAP are groups which contain a query to specify their members, instead of specifying every member separately. Efficient usage of dynamic groups makes user maintenance a lot easier. Dynamic groups are implemented differently in different LDAP server implementations. WebLogic Server can be configured to use dynamic groups in order to fetch the users of a specific group. In this blog I will describe how dynamic groups can be created in OpenLDAP and used in WebLogic Server.

In this example I use two users: smeetsm, the developer, and doej, the operator. As shown in the title image, there are many servers which follow similar patterns to allow access to operators and developers. We are considering a case here where users do not use a shared account (e.g. weblogic) to log in to different systems. For traceability and security purposes this is a better practice than having everyone use the same shared user. See http://otechmag.com/magazine/2015/spring/maarten-smeets.html for a more thorough explanation of why you would want this.

A small note though. I’m a developer and this is not my main area of expertise. I have not implemented this specific pattern in any large scale organization.

Why dynamic groups?

In the group definition you can specify a query which determines the members based on specific attribute values of users (e.g. privileges). What can you achieve with dynamic groups? You provide an abstraction between users and groups, which allows you to grant privileges by managing just the user attributes. Groups, which are usually defined per server, do not require as much changing this way. Since there are usually many servers (see the example above), this saves a lot of time.

For example, you can use the departmentNumber attribute to differentiate what developers and operators can do on different machines. For readability I have misused the employeeType attribute here, since it allows string content. In the image below there are two users: smeetsm, who is a developer, and doej, who is an operator. I have defined roles per server in the LDAP. The Monitor role on Server1 has smeetsm and doej as members, because the memberURL query selects persons who have employeeType Developer or Operator. On Server1 only doej is Administrator, not smeetsm; this can for example be considered an acceptance test environment. On Server2 both are Administrator and Monitor; this can be considered a development environment. When smeetsm leaves and goes to work somewhere else, I just have to remove the Developer employeeType attribute at the user level and he won't be able to access Server1 and Server2 anymore. So forgetting which servers a person has access to is no longer a problem.

DevelopersAndOperators
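To make this concrete, the Monitor group on Server1 could look roughly like the following LDIF entry (a sketch: the DN and attribute values are derived from the description above, and the exact objectClass name depends on your schema and overlay configuration):

dn: cn=Monitor,ou=Server1,ou=groups,dc=smeetsm,dc=amis,dc=nl
objectClass: groupOfURLs
cn: Monitor
memberURL: ldap:///ou=people,dc=smeetsm,dc=amis,dc=nl??sub?(|(employeeType=Developer)(employeeType=Operator))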

OpenLDAP configuration

Install

First download OpenLDAP from http://sourceforge.net/projects/openldapwindows.

In order to reproduce the configuration I have used, download the configuration and LDAP export: here

Put the slapd.conf in <OpenLDAP INSTALLDIR>\etc\openldap

Check if the password specified for the administrator works; I am not sure whether the seed is installation dependent. You can generate a new password by going to <OpenLDAP INSTALLDIR>\bin and executing slappasswd -h {SSHA}

Start OpenLDAP by executing <OpenLDAP INSTALLDIR>\libexec\StartLDAP.cmd (or the shortcut in your startmenu)

Put the export.ldif in <OpenLDAP INSTALLDIR>\bin
Open a command-prompt and go to the <OpenLDAP INSTALLDIR>\bin

Execute ldapadd.exe -f export.ldif -xv -D "cn=Manager,dc=smeetsm,dc=amis,dc=nl" -w Welcome01

Now you can browse your OpenLDAP server using, for example, Apache Directory Studio. In my case I could connect using the following connection data:

BindDN or user: cn=Manager,dc=smeetsm,dc=amis,dc=nl
Password: Welcome01

DevelopersAndOperators
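You can also query the server from the command line, for example to inspect the Monitor group and its (dynamically generated) member values; a sketch, assuming ldapsearch.exe sits in the same <OpenLDAP INSTALLDIR>\bin directory as ldapadd.exe:

ldapsearch.exe -x -D "cn=Manager,dc=smeetsm,dc=amis,dc=nl" -w Welcome01 -b "ou=Server1,ou=groups,dc=smeetsm,dc=amis,dc=nl" "(objectclass=groupofurls)" member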

The member field gets generated automatically (the dynlist configuration in slapd.conf). This however only happens when a search is performed. WebLogic can't find this person when the group is defined as a static group (I've enabled authentication debugging to see this in the log: Server, Debug, weblogic.security.Atn):

<search(“ou=Server1, ou=groups, dc=smeetsm, dc=amis, dc=nl”, “(&(member=cn=doej,ou=people,dc=smeetsm,dc=amis,dc=nl)(objectclass=groupofurls))”, base DN & below)>
<getConnection return conn:LDAPConnection {ldaps://localhost:389 ldapVersion:3 bindDN:”cn=Manager,dc=smeetsm,dc=amis,dc=nl”}>
<Result has more elements: false>

Unless you want to invest time in getting to know your specific LDAP server in order to make the dynamic groups transparent to the client (so you can access them in a similar way as static groups), you're probably better off fixing this in WebLogic Server using dynamic groups (at least for development purposes). You can however try to let OpenLDAP produce memberof entries at the user level. This will perform better, as WebLogic then does not need to analyse all groups for memberURL entries to determine in which groups the user is present.

There are several tutorials available online for this (for example http://www.schenkels.nl/2013/03/how-to-setup-openldap-with-memberof-overlay-ubuntu-12-04/). Most, however, use OpenLDAP's online configuration (olc) and not slapd.conf. olc is the recommended way of configuring OpenLDAP and the default in most distributions, but not in the one I was using.

From slapd.conf to olc (optional)

This part is optional. It might help if you’re planning to take a dive into the depths of OpenLDAP (don’t forget the oxygen… I mean coffee). You can convert your slapd.conf to an online configuration as shown below.

See http://www.zytrax.com/books/ldap/ch6/slapd-config.html. I had some problems with the creation of the slapd.d directory, so I first create another directory called 't' and rename it afterwards. It is a good idea to also rename slapd.conf, to make sure this configuration file is not used anymore.

cd <OpenLDAP INSTALLDIR>\etc
mkdir t
<OpenLDAP INSTALLDIR>\sbin\slaptest.exe -f openldap\slapd.conf -F t
move t openldap\slapd.d
move openldap\slapd.conf openldap\slapd.conf.bak

Update the last line of <OpenLDAP INSTALLDIR>\libexec\StartLDAP.cmd to use the newly created directory for its configuration:
slapd.exe -d -1 -h "ldap://%FQDN%/ ldaps://%FQDN%/" -F ..\etc\openldap\slapd.d

Create a user which can access cn=config. Update <OpenLDAP INSTALLDIR>\etc\openldap\slapd.d\cn=config\olcDatabase={0}config.ldif (from: http://serverfault.com/questions/514870/how-do-i-authenticate-with-ldap-via-the-command-line)

Add the following lines between olcMonitoring: FALSE and structuralObjectClass: olcDatabaseConfig. Use the same password as in the previously used slapd.conf (created with slappasswd -h {SSHA}):

olcRootDN: cn=admin,cn=config
olcRootPW: {SSHA}2HdAW3UmR5uK4zXOVwxO01E38oYanHUa

Now you can use a graphical LDAP client to browse cn=config. Authenticate using cn=admin,cn=config and use cn=config as the Base DN. This makes browsing and editing the configuration easier.

cnconfig

To add a configuration file, you can for example do the following:

<OpenLDAP INSTALLDIR>\bin>ldapadd.exe -f your_file.ldif -xv -D "cn=admin,cn=config" -w Welcome01

This will get you started with other online tutorials about how to get the memberof overlay working.

WebLogic configuration

In the WebLogic Console, Security Realms, myrealm, Providers, New, OpenLDAPAuthenticator.

Use the following properties:
Common: Control Flag: SUFFICIENT. Also set the Control Flag for the DefaultAuthenticator to SUFFICIENT.

Provider specific

Connection

  • Host: localhost
  • Port: 389
  • Principal: cn=Manager,dc=smeetsm,dc=amis,dc=nl
  • Credential: Welcome01

Users

  • User Base DN: ou=people, dc=smeetsm, dc=amis, dc=nl
  • All users Filter:
  • User from name filter: (&(cn=%u)(objectclass=inetOrgPerson))
  • User Search Scope: Subtree
  • User name attribute: cn
  • User object class: person
  • Use Retrieved User Name as Principal: (leave unchecked)

Groups

  • Group Base DN: ou=Server1, ou=groups, dc=smeetsm, dc=amis, dc=nl
  • All groups filter:
  • Group from name filter: (&(cn=%g)(|(objectclass=groupofnames)(objectclass=groupofurls)))
  • Group search scope: Subtree
  • Group membership searching: unlimited
  • Max group membership search level: 0

Static groups

  • Static Group Name Attribute: cn
  • Static Group Object Class: groupofnames
  • Static Member DN Attribute: member
  • Static Group DNs from Member DN Filter: (&(member=%M)(objectclass=groupofnames))

Dynamic groups

  • Dynamic Group Name Attribute: cn
  • Dynamic Group Object Class: groupofurls
  • Dynamic Member URL Attribute: memberurl
  • User Dynamic Group DN Attribute:

GUID Attribute: entryuuid

Points of interest

  • The group from name filter specifies two classes. The class for the static groups and the class for the dynamic groups.
  • User Dynamic Group DN Attribute is empty. If you can enable generation of the memberof attribute in your LDAP server, you can use that.
  • The Group Base DN specifies the server (Server1). For Server2 I would use Server2 instead of Server1.
  • You can use static and dynamic groups together and also nest them. In the below image, Test3 is a groupofnames with smeetsm as static member. Monitor is a dynamic group. Be careful though with the performance. It might not be necessary to search entire subtrees to unlimited depth.

result2

Result

After the above configuration is done, I can log in to the WebLogic Console with user smeetsm on Server1 and get the Monitor role, while on Server2, with the same username, I get the Administrator role.

result

If I change the employeeType of smeetsm to operator, I get the Administrator role on Server1. If I remove the attribute, I cannot access any system. This way, user management can easily be done at the user level, with very little maintenance needed at the group level (where there usually are many servers), unless for example the purpose of an environment changes; then the query that selects the users needs changing.

I could not get the memberof attribute working in my OpenLDAP installation. Luckily, for a development environment you don't need it, but if you plan on using a similar pattern on a larger scale, you can gain performance by letting the LDAP server generate these attributes, in order to allow clients (such as WebLogic Server) to get quick insight into user group memberships.

Please mind that in order for the FMW components (from the IdentityService to WebCenterContent) to use dynamic groups, you need to enable the DynamicGroups plugin in the LibOVD configuration. See http://www.ateam-oracle.com/oracle-webcenter-and-dynamic-groups-from-an-external-ldap-server-part-2-of-2/.

The post WebLogic Server and OpenLDAP. Using dynamic groups appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/05/18/weblogic-server-and-openldap-using-dynamic-groups/feed/ 0
Interacting with JMS Queue and Topic from Java SE https://technology.amis.nl/2015/05/16/interacting-with-jms-queue-and-topic-from-java-se/ https://technology.amis.nl/2015/05/16/interacting-with-jms-queue-and-topic-from-java-se/#comments Sat, 16 May 2015 15:33:22 +0000 https://technology.amis.nl/?p=36047 This article is just a quick post of some code I want to have easy access to. It runs in Java SE – outside any container in a stand alone JVM. It creates a connection with a JMS Queue. One class sends messages to the Queue, the other class registers as a listener and consumes [...]

The post Interacting with JMS Queue and Topic from Java SE appeared first on AMIS Oracle and Java Blog.

]]>
This article is just a quick post of some code I want to have easy access to. It runs in Java SE – outside any container, in a stand-alone JVM. It creates a connection with a JMS Queue. One class sends messages to the Queue; the other class registers as a listener and consumes messages from a different queue.

I have created the code in JDeveloper. It runs stand-alone and connects to a WebLogic Server where the JMS Queues (and JMS Server, JMS Module and JMS Connection Factory) have been created. (blog article http://blog.soasuitehandbook.org/setup-for-jms-resources-in-weblogic-chapter-6/ provides an example of how JMS resources are configured on WebLogic)

image

The project has two libraries associated with it: Java EE and WebLogic Remote Client.

image


The JDeveloper application TemperatureMonitoring (created for a Stream Explorer/Event Processing demonstration) contains two projects that each contain a single class. One project is HotRoomAlertProcessor, with class HotRoomAlertProcessor, which registers as a listener on the hotRooms queue. Any message received on that queue is reported to the console.

The second project is TemperatureSensors. It contains class TemperatureSensorSignalPublisher. This class generates temperature values (in Celsius!) for a number of rooms, and publishes these to the queue temperatureMeasurements. At some random point, the class will start a fire in a randomly selected room. In this room, temperatures will soon be over 100 degrees.

Class TemperatureSensorSignalPublisher, publishing to the JMS Queue:

package nl.amis.temperature;

import java.util.Hashtable;

import java.util.Random;

import javax.jms.JMSException;
import javax.jms.MapMessage;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;


public class TemperatureSensorSignalPublisher {
    public final static String JNDI_FACTORY = "weblogic.jndi.WLInitialContextFactory";
    public final static String JMS_FACTORY = "jms/handson-jms-connectionFactory";
    public final static String QUEUE = "jndi/temperatureMeasurements";
    private QueueConnectionFactory qconFactory;
    private QueueConnection qcon;
    private QueueSession qsession;
    private MessageProducer qproducer;
    private Queue queue;

    private static final int SLEEP_MILLIS = 100;
       private static Random rand = new Random();
       private boolean suspended;
       private int index = 0;

       public static int randInt(int min, int max) {
           // NOTE: Usually this should be a field rather than a method
           // variable so that it is not re-seeded every call.     
           // nextInt is normally exclusive of the top value,
           // so add 1 to make it inclusive
           int randomNum = rand.nextInt((max - min) + 1) + min;
           return randomNum;
       }

        public void run() {
            System.out.println("Started Producing Temperature Signals to "+QUEUE);
            suspended = false;
            while (!isSuspended()) { // Generate messages forever...
                generateTemperatureSensorSignal();
                try {
                    synchronized (this) {
                        wait(randInt(SLEEP_MILLIS/2, SLEEP_MILLIS*2));
                    }
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }

        /* (non-Javadoc)
         * @see com.bea.wlevs.ede.api.SuspendableBean#suspend()
         */
        public synchronized void suspend() {
            suspended = true;
        }

        private synchronized boolean isSuspended() {
            return suspended;
        }

    String[] rooms = new String[]{"Cafeteria","Kitchen","Reception","Meetingroom One","CEO Office","Lounge Area","Office Floor A"};
    boolean onFire=false;
    int roomOnFireIndex ;

    private void generateTemperatureSensorSignal() {
        // determine roomId
        int roomIndex = randInt(1, rooms.length)-1;
        
        // determine if one room should be set on fire
        if (!onFire) {
            // chance of 1 in 50 per generated signal that a fire is started
            onFire = randInt(1,50) < 2;
            if (onFire){
              roomOnFireIndex = roomIndex;
              System.out.println("Fire has started in room "+ rooms[roomOnFireIndex]);
            }
        }        
        // determine temperatureValue
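        // note: the integer division randInt(160, 230)/11 yields a base temperature of roughly 14-20 degrees Celsius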
        float temperature = randInt(160, 230)/11;
        if (onFire && roomIndex == roomOnFireIndex) {           
            temperature = temperature + randInt(90, 150);
        }
        publish(rooms[roomIndex], temperature);        
    }


    public void publish(String roomId, Float temperature) {
        try {
            MapMessage message = qsession.createMapMessage();
            message.setString("RoomId", roomId);
            message.setFloat("Temperature", temperature);
            qproducer.send(message);
            //System.out.println("- Delivered: "+temperature+" in "+roomId);
        } catch (JMSException jmse) {
            System.err.println("An exception occurred: " + jmse.getMessage());
        }
    }

    public void init(Context ctx, String queueName)
        throws NamingException, JMSException
    {
        qconFactory = (QueueConnectionFactory) ctx.lookup(JMS_FACTORY);
        qcon = qconFactory.createQueueConnection();
        qsession = qcon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        queue = (Queue) ctx.lookup(queueName);
        qproducer = qsession.createProducer(queue);
    }

    public void close() throws JMSException {
        qsession.close();
        qcon.close();
    }

    public static void main(String[] args) throws Exception {
        InitialContext ic = getInitialContext();
        TemperatureSensorSignalPublisher qr = new TemperatureSensorSignalPublisher();
        qr.init(ic, QUEUE);
        qr.run();
        qr.close();
    }

    private static InitialContext getInitialContext()
        throws NamingException    {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, JNDI_FACTORY);
        env.put(Context.PROVIDER_URL, "t3://localhost:7101");
        return new InitialContext(env);

    }
    
}

Class HotRoomAlertProcessor consumes messages from a second JMS Queue:

package nl.amis.temperature;

import java.util.Enumeration;
import java.util.Hashtable;

import javax.jms.JMSException;
import javax.jms.MapMessage;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;

import javax.jms.QueueReceiver;
import javax.jms.QueueSession;

import javax.jms.Session;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;


public class HotRoomAlertProcessor implements MessageListener {
    public final static String JNDI_FACTORY = "weblogic.jndi.WLInitialContextFactory";
    public final static String JMS_FACTORY = "jms/handson-jms-connectionFactory";
    public final static String QUEUE = "jndi/hotRooms";
    private QueueConnectionFactory qconFactory;
    private QueueConnection qcon;
    private QueueSession qsession;
    private QueueReceiver qreceiver;
    private Queue queue;
    private boolean quit = false;
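    // note: quit is never set to true in this example, so the listener keeps running until the JVM is stopped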

    public void onMessage(Message msg)     {
        try {
            if (msg instanceof MapMessage) {
                MapMessage mess = ((MapMessage) msg);
//                Enumeration enumeration = mess.getMapNames();
//                while (enumeration.hasMoreElements()) {
//                    System.out.println(enumeration.nextElement());
//                }
                System.out.println("Room On Fire: " + mess.getString("RoomId"));
                System.out.println("Last Measured Temperature: " + mess.getFloat("AverageTemperature"));
            }
        } catch (JMSException jmse) {
            System.err.println("An exception occurred: " + jmse.getMessage());
        }
    }

    public void init(Context ctx, String queueName)
        throws NamingException, JMSException     {
        qconFactory = (QueueConnectionFactory) ctx.lookup(JMS_FACTORY);
        qcon = qconFactory.createQueueConnection();
        qsession = qcon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        queue = (Queue) ctx.lookup(queueName);
        qreceiver = qsession.createReceiver(queue);
        qreceiver.setMessageListener((MessageListener) this);
        qcon.start();
    }

    public void close() throws JMSException    {
        qreceiver.close();
        qsession.close();
        qcon.close();
    }

    public static void main(String[] args) throws Exception {
        InitialContext ic = getInitialContext();
        HotRoomAlertProcessor qr = new HotRoomAlertProcessor();
        qr.init(ic, QUEUE);
        System.out.println("JMS Ready To Receive Messages (To quit, send a \"quit\" message).");
        synchronized (qr) {
            while (!qr.quit) {
                try {
                    qr.wait();
                } catch (InterruptedException ie) {
                }
            }
        }
        qr.close();
    }

    private static InitialContext getInitialContext()
        throws NamingException
    {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, JNDI_FACTORY);
        env.put(Context.PROVIDER_URL, "t3://localhost:7101/");
        return new InitialContext(env);
    }
}

Here is some output from the second class:

image

The post Interacting with JMS Queue and Topic from Java SE appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/05/16/interacting-with-jms-queue-and-topic-from-java-se/feed/ 0
AMIS organiseert: Workshop Stream Explorer and Oracle Event Processor – Dinsdag 19 mei 2015, 17.30 uur https://technology.amis.nl/2015/05/16/amis-organiseert-workshop-stream-explorer-and-oracle-event-processor-dinsdag-19-mei-2015-17-30-uur/ https://technology.amis.nl/2015/05/16/amis-organiseert-workshop-stream-explorer-and-oracle-event-processor-dinsdag-19-mei-2015-17-30-uur/#comments Sat, 16 May 2015 05:11:41 +0000 https://technology.amis.nl/?p=36039 On Tuesday 19 May, starting at 17.30, a free community workshop on Stream Explorer and Oracle Event Processor takes place at AMIS (Edisonbaan 15, Nieuwegein), as part of the AMIS SOA SIG. Lucas Jellema will give a presentation introducing Stream Explorer. He will show a number of demonstrations of Stream Explorer, OEP and the [...]

The post AMIS organiseert: Workshop Stream Explorer and Oracle Event Processor – Dinsdag 19 mei 2015, 17.30 uur appeared first on AMIS Oracle and Java Blog.

]]>
image

On Tuesday 19 May, starting at 17.30, a free community workshop on Stream Explorer and Oracle Event Processor takes place at AMIS (Edisonbaan 15, Nieuwegein), as part of the AMIS SOA SIG. Lucas Jellema will give a presentation introducing Stream Explorer. He will show a number of demonstrations of Stream Explorer, OEP and the interaction with SOA Suite 12c. Participants will then receive a Virtual Machine (Stream Explorer, OEP, SOA Suite 12c, Oracle Database 11gR2 XE and JDeveloper 12c) in which an extensive set of practical examples can be worked through. The hands-on session covers, among other things:

  • Stream Explorer aggregation and pattern detection
  • Interaction with SOA Suite through the Event Delivery Network
  • REST and Stream Explorer (inbound and outbound)
  • Stream Explorer and Web Sockets for live dashboards
  • Using Stream Explorer for live monitoring of service execution in SOA Suite
  • Events published directly from the Oracle Database
  • Stream Explorer and JMS (inbound and outbound)
  • Editing Stream Explorer applications in the OEP IDE (JDeveloper), to add the power of OEP to the ease of use of Stream Explorer

If you are interested in joining the workshop, send an email to info @ amis.nl.

NB: for the hands-on you need a laptop with at least 8 GB RAM and 25 GB of free disk space. You don't really need any specific prior knowledge – after all, most actions in Stream Explorer can be performed by business users. Topics that come along include Java, PL/SQL, JSON, REST, JMS, WebSocket, JavaScript, HTML, CQL, XML, EDN and SOA Suite (Mediator, BPEL).

The hands-on instructions for the workshop can be downloaded here.

A few of the topics that will be covered, visualized:



image

image

image

image

image

image

The post AMIS organiseert: Workshop Stream Explorer and Oracle Event Processor – Dinsdag 19 mei 2015, 17.30 uur appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/05/16/amis-organiseert-workshop-stream-explorer-and-oracle-event-processor-dinsdag-19-mei-2015-17-30-uur/feed/ 0
StreamExplorer pushing findings as JSON messages to a WebSocket channel for live HTML Dashboard updates https://technology.amis.nl/2015/05/15/streamexplorer-pushing-findings-as-json-messages-to-a-websocket-channel-for-live-html-dashboard-updates/ https://technology.amis.nl/2015/05/15/streamexplorer-pushing-findings-as-json-messages-to-a-websocket-channel-for-live-html-dashboard-updates/#comments Fri, 15 May 2015 08:32:00 +0000 https://technology.amis.nl/?p=36008 A common desire when doing real time event processing with Stream Explorer and/or Oracle EVent Processor is the ability to present the findings from Stream Explorer in a live dashboard. This dashboard should hold a visualization of whatever information we have set up Stream Explorer to find for us – and it should always show [...]

The post StreamExplorer pushing findings as JSON messages to a WebSocket channel for live HTML Dashboard updates appeared first on AMIS Oracle and Java Blog.

]]>
A common desire when doing real time event processing with Stream Explorer and/or Oracle Event Processor is the ability to present the findings from Stream Explorer in a live dashboard. This dashboard should hold a visualization of whatever information we have set up Stream Explorer to find for us – and it should always show the latest information.

User interfaces are commonly presented in web browsers and created using HTML(5) and JavaScript. As part of the HTML5 evolution that brought us today's browsers, we now have the ability to use Web Sockets, through which information can be pushed from server to browser to update the user interface in real time. This allows us to create a dashboard that, from the browser, listens to a Web Socket and uses whatever messages appear on it to refresh the user interface. Such a dashboard and its implementation using standard Java (EE) were discussed in a recent article: Java Web Application sending JSON messages through WebSocket to HTML5 browser application for real time push. The results from that article provide the foundation for the article you are reading right now.

We will create a Stream Explorer application that exposes a REST interface to which we will publish JSON messages (in this example using SoapUI as the client from which to generate the test events). These messages report on groups of people entering or leaving a specific room in a movie theater. The exploration we create will aggregate the information from these messages, providing us with constant insight into the total number of people in each room. This information is subsequently pushed to the REST service exposed by a Java EE application, which routes it across the web socket to the HTML5 client. The next figure illustrates the application architecture:

image

In this article, we will assume that the Java EE application, including the dashboard, is already available, as described in the referenced article. All we need to do is:

  • Create a Stream exposed as (inbound) REST interface – discussed in this article.
  • Create an Exploration on top of this Stream – to aggregate the events from the Stream.
  • Configure a target for this Exploration using the outbound REST adapter (an example of which is discussed here) and publish the exploration.
  • Run the Java EE application, open the dashboard and publish messages to the Stream Explorer REST service; watch the dashboard as it constantly updates to reflect the actual status


After configuring the Stream (as discussed in this article), create an exploration, for example called CinemaExploration. Create a Summary of type SUM based on the property partySize and group by room. Edit the Properties and change the name of property SUM_of_partySize to occupation. The exploration will look like this:


image

We can start pushing some messages to it from SoapUI:

image

based in part on twice sending this SoapUI request:

image
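If you prefer a few lines of code over SoapUI, a plain Java SE client can publish the same kind of event. In this sketch the endpoint URL is hypothetical (use the URL of your own inbound REST Stream), and the JSON property names room and partySize follow the exploration described above:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class CinemaEventPublisher {
    public static void main(String[] args) throws Exception {
        // hypothetical endpoint: replace with the URL of your inbound REST Stream
        URL url = new URL("http://localhost:9002/cinemaEvents");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        // property names match the Stream shape used in this article: room and partySize
        String json = "{ \"room\": \"Room 7\", \"partySize\": 4 }";
        OutputStream os = conn.getOutputStream();
        os.write(json.getBytes("UTF-8"));
        os.close();
        System.out.println("HTTP response code: " + conn.getResponseCode());
        conn.disconnect();
    }
}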


Next, click on Configure a Target.

image

Select type REST and set the URL

image

Click on Finish.

Publish the Exploration.

image


The dashboard is opened:

image

Now we can run a test case in SoapUI to send test messages to the Stream Explorer application:

image


Here is what the live output stream in the Stream Explorer UI shows next to a screenshot taken of the Cinema Monitor dashboard:

image

The dashboard is constantly updated with the most recent finding published by Stream Explorer. Note: the notion of a negative occupancy is one that will require some explaining! (More careful test data management seems to be called for.)

After running some more of the SoapUI Test Cases that publish cinema events to the RESTful entry point of the Stream Explorer application, the situation is as follows:

image

The post StreamExplorer pushing findings as JSON messages to a WebSocket channel for live HTML Dashboard updates appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/05/15/streamexplorer-pushing-findings-as-json-messages-to-a-websocket-channel-for-live-html-dashboard-updates/feed/ 0