AMIS Blog http://technology.amis.nl Friends of Oracle and Java Fri, 19 Dec 2014 10:38:25 +0000 en-US hourly 1 http://wordpress.org/?v=4.0.1 Re-establishing reference from Vagrant to VirtualBox VM http://technology.amis.nl/2014/12/19/re-establishing-reference-from-vagrant-to-virtualbox-vm/ http://technology.amis.nl/2014/12/19/re-establishing-reference-from-vagrant-to-virtualbox-vm/#comments Fri, 19 Dec 2014 10:38:25 +0000 http://technology.amis.nl/?p=33454 Stress in the middle of demo preparation: Vagrant refused to bring up my suspended VMs. The message read: D:\GitHub\biemond-orawls-vagrant-12.1.3-infra-soa>vagrant resume soa2admin2==> soa2admin2: VM not created. Moving on… After some Googling I discovered that the link between Vagrant and my VM consists of a file called “id” in the subfolder This file contains nothing [...]

The post Re-establishing reference from Vagrant to VirtualBox VM appeared first on AMIS Blog.

]]>
Stress in the middle of demo preparation: Vagrant refused to bring up my suspended VMs. The error message read:

D:\GitHub\biemond-orawls-vagrant-12.1.3-infra-soa>vagrant resume soa2admin2
==> soa2admin2: VM not created. Moving on…

After some Googling I discovered that the link between Vagrant and my VM consists of a file called “id” in the subfolder .vagrant\machines\&lt;machine name&gt;\virtualbox of the Vagrant project directory.


This file contains nothing but the UUID used by VirtualBox to identify the VM.

Unfortunately, this file was missing (not sure why).

It is fairly easy to restore.

First, find out the required UUID using the VirtualBox management tools; from the VirtualBox home directory, I used this command to list all VMs and their UUID values:

VBoxManage list vms


Now I can create the file

D:\GitHub\biemond-orawls-vagrant-12.1.3-infra-soa\.vagrant\machines\soa2admin2\virtualbox\id


using the UUID learned from VBoxManage.
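Put together, the recovery amounts to two steps. The machine name matches this post, but the UUID below is a placeholder — substitute the value that `VBoxManage list vms` reports for your VM:

```shell
# 1. List all registered VMs with their UUIDs (run where VBoxManage is on the PATH):
#      VBoxManage list vms
#    example output:
#      "soa2admin2" {2a1c19a5-3b4e-4f0a-9d8c-aaaaaaaaaaaa}

# 2. Recreate the missing id file inside the Vagrant project directory.
#    The file must contain only the UUID: no braces, no trailing newline.
mkdir -p .vagrant/machines/soa2admin2/virtualbox
printf '%s' '2a1c19a5-3b4e-4f0a-9d8c-aaaaaaaaaaaa' \
  > .vagrant/machines/soa2admin2/virtualbox/id

# 3. Vagrant should now find the VM again:
#      vagrant resume soa2admin2
```

On Windows the same file lives at the backslash-separated path shown in this post; the forward-slash form above works from a Unix-style shell.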

After restoring this file, I can resume the VM as expected:



]]>
http://technology.amis.nl/2014/12/19/re-establishing-reference-from-vagrant-to-virtualbox-vm/feed/ 0
Begin! Transformatie begint met irritatie, verwondering en …ACTIE! http://technology.amis.nl/2014/12/19/begin-transformatie-begint-met-irritatie-verwondering-en-actie/ http://technology.amis.nl/2014/12/19/begin-transformatie-begint-met-irritatie-verwondering-en-actie/#comments Fri, 19 Dec 2014 10:23:11 +0000 http://technology.amis.nl/?p=33432 If you really want to change something, you have to experience or acknowledge the pain. Do you flee into resignation and acceptance, or do you choose to overcome creatively? Find motivation, overcome setbacks and move forward. In whatever way works… I regularly get annoyed by small things. New employees or customers sometimes point, puzzled, at things that [...]

The post Begin! Transformatie begint met irritatie, verwondering en …ACTIE! appeared first on AMIS Blog.

]]>
If you really want to change something, you have to experience or acknowledge the pain. Do you flee into resignation and acceptance, or do you choose to overcome creatively? Find motivation, overcome setbacks and move forward. In whatever way works…

I regularly get annoyed by small things. New employees or customers sometimes point, puzzled, at things that are not actually good (enough), but that have become ingrained and accepted… Workarounds are available, so the pain is just not big enough. Not big enough yet. Not big enough to put energy into it. The brake is the assumption that tackling it will be hard or take long, so we just don't start… Do you recognize these excuses?

It seemed so hard, it turned out so simple

How often does it turn out afterwards that the solution was actually obvious, that it wasn't so bad, and the question arises: "why didn't we, or didn't I, do this much earlier"? There is always room for improvement, always room for innovation. Recognize your irritation, cherish the wonder of others and take some distance. What is the real problem? What can I do about it, how can I prevent it, how do I reduce its negative impact, …?

Dare to begin

Transformation starts with the first step. If you really want to tackle something, you will always manage to improve. Create your own 'can-do' atmosphere. I explore the context of a problem and the problem itself. Then I think: how did we do this 'in the old days'? How do others do it, in other environments? Can it be done better, or should it be done differently? Can I solve it with a tool, a process, or a method? Formulate and share your motivation. Describe examples of how things could be better, write your newspaper article of the future, and force yourself to look back from the success at what it took to get there.

 

Then you will actually get somewhere

It is not always easy, it does not always go smoothly, things do go wrong now and then, and it sometimes takes more effort than you optimistically estimated up front. If you have shared your motivation, you can now also share setbacks; very often someone close to you has an idea, an association, that gets you past the dead point. From the moment of that aha-erlebnis my flow starts, things keep getting better, and I notice that at the very least I have come further than where I started. Started, and come further. It feels so good. I look back with satisfaction simply because I began.


]]>
http://technology.amis.nl/2014/12/19/begin-transformatie-begint-met-irritatie-verwondering-en-actie/feed/ 0
Creating Intuitive & Interactive Dashboards with the ADF Data Visualization Components http://technology.amis.nl/2014/12/18/creating-intuitive-interactive-dashboards-adf-data-visualization-components/ http://technology.amis.nl/2014/12/18/creating-intuitive-interactive-dashboards-adf-data-visualization-components/#comments Wed, 17 Dec 2014 23:20:02 +0000 http://technology.amis.nl/?p=33394 Last week I presented at the UKOUG’14 conference on creating intuitive & interactive dashboards with the ADF Data Visualization Components. Frequently end-users are overwhelmed with too much and confusing information displayed in rows and columns. It can be difficult to quickly get the relative significance. This session discussed how to create intuitive, interactive dashboards made with the ADF [...]

The post Creating Intuitive & Interactive Dashboards with the ADF Data Visualization Components appeared first on AMIS Blog.

]]>
Last week I presented at the UKOUG’14 conference on creating intuitive & interactive dashboards with the ADF Data Visualization Components. Frequently end-users are overwhelmed with too much and confusing information displayed in rows and columns, and it can be difficult to quickly grasp the relative significance. This session discussed how to create intuitive, interactive dashboards made with the ADF Data Visualization Components. You can use the power of visualization to present information — to call the end-user to action instead of presenting raw data, as we frequently do today. Visualizations can help end-users focus on what is relevant: e.g. aggregates, exceptions, trends, and comparisons.

This blog post shares the slides from that session.


The agenda of this session:

  • Why data visualization is important
  • Examples where DVTs are used
  • Graph demo: ADF Performance Monitor
  • Basic steps for creating a graph (ADF 11g)
  • Special features
    • Mouseover info
    • Alerts
    • Reference line
    • Animation, 3D
    • Click listener
    • Hide and show
    • Dual graphs
  • Advanced Graph Examples
    • Bubble
    • Spark
    • Treemap
  • Other Tips & Challenges
  • 12.1.3 DVT Components

You can download the slides here: UKOUG_Creating Intuitive & Interactive Dashboards with the ADF Data Visualization Components

 


]]>
http://technology.amis.nl/2014/12/18/creating-intuitive-interactive-dashboards-adf-data-visualization-components/feed/ 0
Cyber security is goed, cyber-weerbaarheid (Cyber Resilience) is beter http://technology.amis.nl/2014/12/17/cyber-security-cyber-cyber-resilience/ http://technology.amis.nl/2014/12/17/cyber-security-cyber-cyber-resilience/#comments Wed, 17 Dec 2014 06:44:24 +0000 http://technology.amis.nl/?p=33412 Accept that you will be hacked. Cyber security is a serious matter. The threats coming at us are not limited to missing a few files, the leaking of an address list or a website being taken offline. By now, IT systems play an essential and serious role in our daily lives. The failure [...]

The post Cyber security is goed, cyber-weerbaarheid (Cyber Resilience) is beter appeared first on AMIS Blog.

]]>

Accept that you will be hacked

Robbrecht van Amerongen
Business Innovation Manager

Cyber security is a serious matter. The threats coming at us are not limited to missing a few files, the leaking of an address list or a website being taken offline.

By now, IT systems play an essential and serious role in our daily lives. The failure or even faltering of these systems can have drastic consequences in both the digital and the physical world. The DigiNotar hack showed us how vulnerable some essential parts of our IT infrastructure can be. The consequences were not limited to the outage of a few websites: essential links between systems and the interfaces with the tax authority were no longer safe either. The physical infrastructure we use every day depends on IT to such an extent that essential parts of our daily lives no longer function without a secure and reliable connection. Think of electronic commerce, route planning, road signage, train control, or fundamental services such as electricity, gas and water.

We protect our systems against unauthorized and criminal use. But we are too little aware that cyber security never guarantees 100% safety. The measures taken and the checklists often create the illusion that "everything is taken care of". The opposite is true. Organizations must be prepared for the consequences of cyber break-ins and have a plan ready that limits the financial, operational and reputational damage. This affects not only technology, but also the processes and attitude of the employees. Take as a starting point that cyber crime is going to hit you. "Accept that you will be hacked" and then decide which measures to take.

At the moment, reporting a cyber break-in is still a sensitive issue. Incidents come to light too late and measures are taken too late. Downplaying the severity and trivializing the consequences is all too often the standard reaction. This has to change if we want to become more resilient. Organizations must actively prepare for cyber crime, and reporting incidents must become the most normal thing in the world.

Three levels of security

In securing our organizations we recognize three levels: cyber security, detection / monitoring, and cyber resilience.

Cyber security

Cyber security is protection by shielding systems in order to guarantee continuity. The system is made safer with firewalls, access rules and certificates that restrict access. This works like a moat around a castle. In a world with many interconnected systems, this approach is less and less effective: every new connection provides extra access and brings additional threats. Managing and controlling this access is very complex. And once someone is across the moat, it is relatively easy to move around the castle unnoticed. The core of this approach is applying the right rules for access. Verifying that the right cyber security measures are in place is done with checklists. Practice shows, however, that this passive control is insufficient to guarantee safe use of systems. Successfully completing a cyber security checklist only means the paperwork is in order; it is no guarantee of a secure system.

Detection and monitoring

The second level is continuously measuring the cyber security of your organization. When applying cyber security, the organization must be able to monitor its setup. The organization must also be able to spot anomalies and detect potential break-ins. This gives the organization a picture of the attacks arising from its ecosystem. With this monitoring you are able, as a company, to recognize trends and developments in types of threats and to take appropriate measures proactively. Someone in the organization must therefore be responsible for detecting and reporting remarkable events and for taking appropriate measures.

Cyber resilience

Ultimately, every organization needs, besides an active cyber security model, an active cyber resilience program. This goes beyond running through the aforementioned "compliance checklist". An active cyber resilience program consists of a risk inventory, the application of a security policy, a recovery plan, a test protocol and a communication plan.

But how do you approach this?

  1. Reason from business risks (and not from checklists). This gives a completely different approach to security. Don't look at the prescribed lists; look at your actual organization. What is the most important information the organization wants to protect? What are the critical business functions, and where can you absorb the risk of loss in the event of a cyber attack? This way you make a well-considered choice that maximally safeguards the continuity of your organization at an acceptable price.
  2. Apply a Cyber Security Policy aimed at actively protecting your organization's most important assets (financial, but also information). Determine how people get access to these resources and which measures have been taken to protect them. Answer the following questions: "Who (and also which systems) have access to which information and business functions?" "What is the policy when these functions change?" Map this out and maintain an active security policy to manage this access.
  3. Create a Cyber Recovery Plan. With this plan you enable your organization to survive during a cyber attack and to recover quickly afterwards. Make a detailed plan with the right priorities. Determine the essential business functions and record how they must be protected, and if necessary restored, in the event of a cyber attack.
  4. Put together a Cyber Test Protocol in which you frequently practice executing the cyber recovery plan. The more often you test this plan, the more smoothly its execution goes in case of a real calamity. You also ensure that the plan keeps working amid the continuous changes in the technical environment (infrastructure) your organization uses.
  5. Create a cyber communication plan that clearly describes, at every level of the organization, which information and signals must be shared. Describe in this plan to whom doubts about the current cyber security setup can be reported without any form of repercussions. Setting up good communication from the organization's executive team is also of essential importance. The moment an attack takes place, the board must be able to inform customers, shareholders and other stakeholders clearly and unambiguously about the situation and the measures the organization has taken. Nothing is as disastrous as a CEO who has no idea what it is about.

Cyber Resilience

With the shift of attention from cyber security to cyber resilience, the way organizations deal with cyber crime matures. Digging a deep moat around our systems and "hoping everything is taken care of" is no longer of this age. An attack must not come as a surprise. The starting point must be that, despite all preventive measures, we will one day become a victim. And when we actually do get hacked, at least we are ready for it.

Cyber resilience is about being actively prepared for the most negative scenarios and knowing what to do, so that we can confidently contain an attack and be operational again as quickly as possible. This way we can focus the investment in cyber security on the matters with the highest business priority, and thereby achieve an effective return on our investment.

So make sure you are ready when you get hacked!

More background on this topic can be found at https://www.tno.nl/en/focus-area/defence-safety-security/cyber-security-resilience/


]]>
http://technology.amis.nl/2014/12/17/cyber-security-cyber-cyber-resilience/feed/ 0
Some thoughts on Continuous Delivery http://technology.amis.nl/2014/12/14/thoughts-continuous-delivery/ http://technology.amis.nl/2014/12/14/thoughts-continuous-delivery/#comments Sun, 14 Dec 2014 12:25:59 +0000 http://technology.amis.nl/?p=33347 Continuous Delivery is something a lot of companies strive for. It changes the way we develop software to allow quick (continuous) delivery of business value. Why is it difficult to achieve and what are the challenges which need to be faced? Inspired by a Continuous Delivery conference in the Netherlands and personal experiences, some personal thoughts [...]

The post Some thoughts on Continuous Delivery appeared first on AMIS Blog.

]]>
Continuous Delivery is something a lot of companies strive for. It changes the way we develop software to allow quick (continuous) delivery of business value. Why is it difficult to achieve and what are the challenges that need to be faced? Inspired by a Continuous Delivery conference in the Netherlands and personal experiences, here are some personal thoughts on the subject. The bottom line is that it requires a cultural change in a company, and a joint effort of several departments/disciplines to make it work. The below image is taken from here. The Continuous Delivery maturity model is an interesting read to understand what Continuous Delivery is, and it provides a way to measure where you are as a company.

continuous delivery maturity model

What has changed?

Software development today has changed a great deal compared to, say, 20 years ago. First I'll describe some of the current issues. Then I'll provide the often obvious (but curiously not often implemented) solutions, mostly in the context of Continuous Delivery.

Changes for business

Speed gives a competitive advantage, even more so than in the past, since the internet makes it easier for customers to find, compare and switch to competitors. It has become normal not to go to the local store by default anymore, especially since customers are starting to realize they can save money by switching regularly. For governments, speed is important to be able to quickly implement new legislation.


Changes for developers

More frameworks

There used to be only the choice of vendor, and integration / portability were not really issues because people tended to work in isolated silos. Currently, however, application landscapes are made up of multiple technologically diverse systems integrated with open standards. The choice of software to implement a solution in is not as straightforward as it used to be: ‘we use Microsoft, Microsoft has one product for that, so we'll use that!’. Today it's more like: we have a problem, what is the best software available to fix it?

This change requires a different type of architect and developer: people who are quick learners, flexible and able to make objective choices.

In my experience, as an Oracle SOA developer, I should also have knowledge of Linux, application servers, Java, Python (WLST) and my customers appreciate it if I can also do ADF. Of course I should be able to design my own services (using BPEL, BPMN, JAXWS or whatever other framework) and write my own database code.

More integration

Since systems become more and more distributed and technologically diverse, integration effort increases. In the past, for example, Oracle and Microsoft lived in their own distinct silos:


It is now not so strange anymore (because of open standards) to have an Oracle backend with a Microsoft frontend working together.

Integration also translates into integration suites becoming more popular, such as Service Bus products and BPEL/BPMN engines, which help automate business processes across applications/departments. The below screenshot is from Oracle's BPM Suite.


More security

Security is becoming increasingly important. Security at the network/firewall level is the responsibility of the operations department, but application and integration security is part of the developer's job. This becomes especially important when the application is exposed externally.

Continuous Delivery is becoming a topic

Because the complexity of environments has increased, so has the complexity of the installation and release process. More companies start to realize this is often a bottleneck. Classic delivery patterns are not suitable for such complexity and do not provide the delivery speed and quality that is required. For developers, tools like Jenkins, Hudson, Bamboo, TFS, SVN, Git, Maven and Ant (and of course long lists of test frameworks) are becoming more a part of daily life.

Changes for operations

Systems are becoming more diverse

In ‘the old days’ an Oracle database administrator would ‘only’ have to know the Oracle database. These days he is also expected to know application servers and to know his way around Linux. He may even get requests to be the database administrator for a Microsoft database.

services_application_software

He is confronted with all kinds of new tools to roll out changes, such as Jenkins, Hudson, Bamboo, XL-Deploy, SVN, Git, Ant, Maven, etc. Just having specific knowledge will not be enough to keep his job until retirement, so he needs to learn some new things.

Distributed systems require new monitoring mechanisms

Systems have become more and more distributed, and monitoring becomes more of a challenge. For example, the database can be up and running, the application server can be up and running, and yet the application cannot access the database. What could be wrong? Well, the database might have been down (some companies still do offline backups…) and the datasource configured in the application server has not recovered (yet?). In order to detect such issues, you need to monitor functionality and connectivity instead of individual environments.
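As a minimal sketch of such a functional check — the health endpoint named in the comments is hypothetical, the point being that you test the whole chain rather than the individual servers:

```shell
# check_health interprets an HTTP status code obtained from a *functional*
# endpoint, e.g.:
#   status=$(curl -s -o /dev/null -w '%{http_code}' http://host:8080/app/health)
# The endpoint and URL are made up for illustration; the idea is that the
# application only answers 200 when it can actually reach its database, so
# a stale datasource is caught even while database and app server are both "up".
check_health() {
  if [ "$1" = "200" ]; then
    echo "OK"
  else
    echo "ALERT: HTTP $1"
  fi
}

check_health 200   # prints: OK
check_health 503   # prints: ALERT: HTTP 503
```

A cron job or monitoring agent would run the curl probe periodically and alert on anything but "OK".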


Security is becoming more important

Companies are slowly starting to realize that security is also a major concern. To be secure, it is important to be thorough and quick with security updates and patches. It also requires more advanced monitoring and intrusion detection. A plain old firewall alone will not suffice, since visitors need to access resources from within the company (for example in the case of self-service portals). They need to be allowed in to enable certain functionality, and once they are in, they can do all kinds of interesting things.

Downtime is expensive and hurts your reputation

Being responsive when something goes wrong is not good enough anymore; it will cost you customers. Also, by the time a problem is found, you are usually already too late. Proactive monitoring is required. If it is possible to prevent a problem, that is usually less expensive than waiting for a disaster and trying to fix it when it happens.

disaster recovery versus prevention

I borrowed this image from an inspiring presentation by Mark Burgess

Test

In order to make Continuous Delivery work, test automation is a must. The role of the technical tester becomes more important (since manual work is error-prone and likely to give low coverage). Tests must be environment-independent, rerunnable and independent of the data set. During acceptance testing, if the acceptance criteria are well automated in tests, then when they pass on the acceptance test environment, it is safe to go to production. Manual testing is not required anymore. This is also a matter of trust.

How to make things better

Of course the below suggestions are obvious but surprisingly, a lot of companies do not implement them yet.

Optimize the cycle time!

Dave Farley gave a nice presentation in which he mentioned a measure of performance for a software project: the cycle time. This measure nicely illustrates how thinking about the software delivery process should change and where the bottlenecks are.

The cycle time is the period it takes for an idea to deliver actual business value. For example, marketing has thought up a new product. Most profit can be gained if it can be implemented quickly. If the cycle time is too long, competitors might be first or the idea might no longer be relevant.

The cycle time can consist of steps like:

  • business: new idea
  • business: marketing research, will this work? business case
  • business: decision making, are we going to do this?
  • architecture: stakeholder analysis, non-functional and functional constraints
  • design: how should the system work?
  • operations: which and how many servers should be installed? which software versions?
  • development: creating the system/application/feature
  • test: is the quality good enough? if not, iterative loop with development
  • development: providing operations with an installable package
  • operations: running production

It becomes clear that optimizing the cycle time requires an efficient process which usually spans multiple departments and involves a lot of people.

Companies usually suffering from long cycle times are companies that have implemented a strict separation of concerns, where for example development is split into frontend, middleware and backend (of course all with their own budgets and managers), and operations is split into hardware (physical and virtual), operating system (Windows, Linux), database (Oracle, Microsoft), application server, etc. If such a company implements quality gates (entry and exit criteria), the problem becomes worse. Quality and control are not gained by such a structure/process.

It is easy to understand why cycle times in such companies are usually very long. In such a structure there is little shared responsibility for getting a new feature to production; everyone just responds to his or her specific orders. Communication is expensive, and it takes a lot of managing and reporting.

Organisation structure

Don’t separate development and operations

Separation between development and operations is not a good idea. To get a new piece of software running in production, they need each other. Developers need environments and prefer minimal effort in maintaining them. Operations prefer installations without many problems. If installations are not automated, operations can help the development team write a thorough installation manual or automate steps. It also helps if the operations people have a say in requirements, since it allows monitoring, for example, to be set up more specifically. Being physically close to each other reduces the communication gap.

Work together in cross-functional teams

This reduces communication time and the time required to manage across departments, makes the discussion of who is responsible a lot easier (the team is) and reduces the tower of Babel effect. Cross-functional means business, development, test and operations together in single small teams with responsibility for specific features, including running them in production (BizDevOps). Take into account that feature teams tend to overlap in the code they edit, so some coordination is necessary.

Use stable teams

The feeling of responsibility increases when the team that built something is also responsible for keeping it running in a production environment. The team that built it knows best how it works, so problems can be solved easily. Be careful, though, not to become too dependent on a specific team. The people in the teams will also still need to talk about methods and standards with people from other teams to make company-wide policies possible.

Test

Do specific assertions work?

At the Continuous Delivery conference there was a nice presentation on approval testing. If, for example, I put a funny image on the site, will automated tests detect it? Likely not, because no tester will have put in a specific assertion to detect this error. The approval testing methodology converts expected output to text and compares the produced output against it. PDF documents and images can both be converted to text, and it is easy to compare text with tools like TextTest. Thus, if a site contains a funny image and the page is converted to text, a text compare will detect it. This method also requires less work, since not every assertion has to be written. A drawback is that maintaining the tests takes some additional effort, since every change to the output needs to be approved. Coverage is a lot better, though.
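The idea can be sketched with nothing more than `diff`; the file names and page contents below are made up, and a real setup would use a tool like TextTest for the comparison plus an explicit approval step:

```shell
# Approved baseline: the page rendered to text at the moment it was approved.
printf 'Welcome\nlogo.png\nPrice: 10 EUR\n' > approved.txt

# Current output: someone replaced the logo with a funny image. No explicit
# assertion mentions images, yet the text compare still catches the change.
printf 'Welcome\nfunny-cat.png\nPrice: 10 EUR\n' > actual.txt

if diff -u approved.txt actual.txt > changes.txt; then
  echo "output unchanged - still approved"
else
  echo "output changed - needs (re)approval:"
  cat changes.txt
fi
```

If the change turns out to be intentional, "approving" it is just replacing approved.txt with actual.txt.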


Is the application secure?

Most applications nowadays are outward-facing, which makes security a major concern that should be well tested. Ethical hackers are quite suitable for the job. Don't let them look only at production code, since by then it will be too late! If they are involved early in the development process, they will provide useful insight into what should be refactored before going live.


Development

Create modular software which allows updating without downtime

Very obvious and of course a best practice. However, a J2EE web application that is deployed as a single EAR, for example, is not modular, since it is not possible to replace a component without redeploying the entire thing. The idea of microservices offers some nice suggestions for making services independent and modular. See for example: http://martinfowler.com/articles/microservices.html


Use open standards

When a new component or application is added to the landscape, it is easier to link it to the existing software. Vendor lock-in is also reduced this way. For internet sites it is nice that they look the same in every standards-based browser.


Operations

Keep the number of environments to a minimum

This is especially true if they are hard to create and expensive to maintain. If a certain environment is difficult to keep stable, try to identify the cause and work on that instead of fighting symptoms; introducing more environments only increases the problem. If you are proficient in creating environments (you can create one running a working application in a matter of minutes) and maintaining them, this is of course not a problem and you can use as many environments as you see fit.

Automate everything

This is a joint effort of test, operations and development. Sometimes companies do not acknowledge this as a separate task that needs investment, since it does not seem to provide direct business value.


Environments, configuration, releases, deployments, patches

Especially environments and configuration should be automated, but also releases, deployments and patches. People make errors and get bored of repetitive actions (at least I do). Fixing errors takes a lot more time than making them.

Automated provisioning makes it easier to deploy patches to multiple environments. This allows faster installations when security patches become available. If you want to manage security updates, do it thoroughly. If a Windows security update gets installed half a year after it is released… well, you get the drift.

Results in reduction of cycle time, increase in quality and better security

It reduces the time operations needs to create a new environment, the time developers need to fix errors in the environment configuration, and the time testers require to check whether the environment is set up correctly (if it is correctly automated, you don’t need to check). Last but certainly not least, it reduces the frustration the business experiences because creating new environments takes so long and their quality is poor.

Conclusion

Many of the things in this post are obvious. I’ve mentioned several challenges and some solutions to help in eventually reaching the goal of Continuous Delivery. It does require a change in culture to make it happen, though. As Amir Arooni, CIO of the Dutch ING bank, formulated it well:

(in Dutch)
From “Kan niet”, “Mag niet” and “We doen het altijd zo” to “Kan”, “Mag” and “Het gaat echt nu anders”

It translates to something like:
From “Not possible”, “Not allowed” and “We always do it like this” to “You can”, “It is allowed” and “We are really going to do it differently”.

The post Some thoughts on Continuous Delivery appeared first on AMIS Blog.

]]>
http://technology.amis.nl/2014/12/14/thoughts-continuous-delivery/feed/ 0
Instrumenting, Analysing, & Tuning the Performance of Oracle ADF Applications http://technology.amis.nl/2014/12/12/33312/ http://technology.amis.nl/2014/12/12/33312/#comments Fri, 12 Dec 2014 16:21:08 +0000 http://technology.amis.nl/?p=33312 Last week I presented at the  UKOUG’14 conference on instrumenting, analyzing, & tuning the performance of Oracle ADF applications. Instrumentation refers to an ability to monitor or measure the level of a product’s performance, to diagnose errors and to write trace information. Instrumenting gives visibility and insight of what is happening inside the ADF application and in the [...]

The post Instrumenting, Analysing, & Tuning the Performance of Oracle ADF Applications appeared first on AMIS Blog.

]]>
Last week I presented at the UKOUG’14 conference on instrumenting, analyzing, & tuning the performance of Oracle ADF applications. Instrumentation refers to the ability to monitor or measure the level of a product’s performance, to diagnose errors and to write trace information. Instrumenting gives visibility and insight into what is happening inside the ADF application and in the ADF framework (what methods and queries are executed, when and how often). These runtime diagnostics can be very effective in identifying and solving performance issues and in understanding end-user behavior, enabling developers and operations teams to quickly diagnose and solve performance problems in test and production environments. This blog post shares the slides from that session and shows how you can instrument your own ADF application and build your own performance monitor.


Why is instrumentation important?

  • Many applications are like a smoke screen; it is very unclear what is happening in the background. Where should we look for bottlenecks?
  • End-users no longer accept slow applications
  • Operations teams and developers need visibility:
    • Are response times within SLA agreements?
    • What are the exact pain points and weakest links of the ADF application?
    • Are there errors? What type/severity of errors?

The Agenda of this session:

  • What is instrumentation
  • Why instrumentation is important
  • Cost of tracking
  • Analyzing and tuning
  • Oracle ODL Analyzer
  • Build your own performance monitor
    • What you can instrument – Not ADF specific
    • What you can instrument – ADF specific
    • Five examples of how you can instrument key spots in ADF applications
      • Instrumenting HTTP Request
      • Instrumenting Errors / Exceptions
      • Instrumenting ADF Business Components
      • Instrumenting JVM memory consumption
      • Instrumenting ApplicationModule pooling (activation and passivation)

You can download the slides: Instrumenting, Analysing, & Tuning the Performance of Oracle ADF Applications.


The post Instrumenting, Analysing, & Tuning the Performance of Oracle ADF Applications appeared first on AMIS Blog.

]]>
http://technology.amis.nl/2014/12/12/33312/feed/ 0
The caveats of running .sql scripts with GUI tools http://technology.amis.nl/2014/12/10/caveats-running-sql-scripts-gui-tools/ http://technology.amis.nl/2014/12/10/caveats-running-sql-scripts-gui-tools/#comments Wed, 10 Dec 2014 12:00:51 +0000 http://technology.amis.nl/?p=33306 One of my pet peeves is people using GUI tools like Toad or SQL Developer while running release scripts on test, acceptance or production systems. Actually, pet peeves is putting it too mildly. I’ve had to troubleshoot enough incidents because of this to hold a serious grudge against that careless practice. Especially when running a [...]

The post The caveats of running .sql scripts with GUI tools appeared first on AMIS Blog.

]]>
One of my pet peeves is people using GUI tools like Toad or SQL Developer to run release scripts on test, acceptance or production systems. Actually, “pet peeve” is putting it too mildly: I’ve had to troubleshoot enough incidents because of this to hold a serious grudge against that careless practice. Especially when running a script that manipulates data, you are much better off using SQL*Plus. Most hardboiled DBAs probably know what I’m talking about, but if you don’t, have a look at the two examples below.

Example 1: Do I need to end the line with a semicolon or use a slash? Let’s just use both!
The difference between using a semicolon at the end of a SQL statement and using a slash to run whatever is in the buffer can be confusing at first, especially in those cases where they both seem to be doing the same thing. I’ve seen a surprising number of scripts that use both, ‘just to be sure’. Let’s have a look at a script that does just that:

select sum(SAL) as total_salary from EMP;
/
update EMP set SAL = SAL * 1.1 where EMPNO =1;
/
update EMP set SAL = SAL * 1.2 where EMPNO =2;
/
update EMP set SAL = SAL * 1.3 where EMPNO =3;
/
update EMP set SAL = SAL * 1.4 where EMPNO =4;
/
update EMP set SAL = SAL * 1.5 where EMPNO =5;
/
select sum(SAL) as total_salary from EMP;
/

In my experience, scripts like these are made by people who only work with Oracle sporadically. (Unfortunately this also seems to include some application vendors that sell software that runs on “any database”, be it Oracle, SQL Server or some other flavour.)

If you run the script in Toad, you will get the result that was probably intended by the person who wrote it: five records will each be updated once. However, if you run the script from SQL*Plus, you’ll get a very different result: the five records will each be updated twice! Good news for the employees who just got their intended salary raise twice, bad news for everyone else, especially the person who ran the script.
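The double execution becomes obvious once you model SQL*Plus’s buffer rule: a statement terminated with “;” is executed and kept in the SQL buffer, and a lone “/” re-executes whatever the buffer currently holds. Here is a minimal Python sketch of that rule (a toy simulation for illustration, not SQL*Plus itself, and deliberately ignoring multi-line statements):

```python
# Simulate how SQL*Plus treats ';' vs '/': a terminated statement runs and
# stays in the SQL buffer; a lone '/' re-runs the current buffer contents.
def run_sqlplus_script(lines):
    executed = []   # statements in execution order
    buffer = None   # last statement placed in the SQL buffer
    for line in lines:
        line = line.strip()
        if line == "/":
            if buffer:
                executed.append(buffer)      # '/' re-executes the buffer
        elif line.endswith(";"):
            buffer = line.rstrip(";")
            executed.append(buffer)          # ';' executes and buffers
    return executed

script = [
    "select sum(SAL) from EMP;", "/",
    "update EMP set SAL = SAL * 1.1 where EMPNO = 1;", "/",
]
print(run_sqlplus_script(script))
# Each statement appears twice: once for the ';', once more for the '/'
```

Under this model the script above performs every update twice, which is exactly what SQL*Plus does with it.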

You might object at this point: “If the use of Toad leads to the intended result and the use of SQL*Plus doesn’t, why are you trying to convince me to use the latter?” In that case you should realise that updating the records twice is exactly what the script specifies. The fact that Toad handles this differently is because it’s a developer’s tool: it recognizes common errors that are frequently made during development and automatically corrects them (like the one in the example above). This can be perfectly acceptable on development systems, where you can code quick and dirty at first to check if things generally work, and focus on the nitty-gritty details once the big picture is complete. But as soon as your code moves to test, acceptance or production, you should no longer rely on tools that guess the intention and change the code accordingly. The code should be correct in the first place! If it’s not, you’ll want to see an error raised (or get a wrong result, if the code is correct but written with different intentions) as soon as it gets to the test phase, so you can go back and fix it. You might think: “But what if I use Toad on development, test, acceptance AND production? Then every run should be identical?” Yes, as long as you are the only person running the scripts and never have to take a day off from work because you are never ill and never need a vacation; as soon as someone else has to take over, they will have to use Toad (configured with exactly the same preferences!) as well, or risk running into unexpected problems because there might still be errors in the script. Do you have your Toad preferences documented for them? Furthermore, do you have Toad installed on all servers (with exactly the same preferences) or do you work from a client?
My guess is you don’t want to install it on all database servers (because you want those servers as clean and simple as possible and don’t want to use up a Toad licence per server), but running it from a client brings new risks. I will illustrate this with another example:

Example 2: network hiccups
A while back I ran into the strangest issue. Someone had run a very basic SQL script with about 4000 insert statements on acceptance and, after verification by someone else, ran the same script on production. Unfortunately, during verification on production only a quarter of the records turned out to be inserted into the database. Because the whole run of the script was spooled to a logfile, I checked the logfile for ORA errors and counted the number of successful inserts. Everything seemed the same as on acceptance: all 4000 inserts were in the spoolfile and no ORA errors were logged. I thought I was losing my mind! Then I checked the alert log of the database and noticed a TNS error logged during the run of the insert script. Maybe the script wasn’t executed on the server itself but on a client, and had been interrupted by a network hiccup? Even if that was the case, I still couldn’t believe that a single transaction had apparently partly failed without any warnings. This seemed to undermine the very basic ACID properties of a relational database.
While discussing this with my colleagues, the person who had run the script confirmed he had run it from a client, but to my surprise he also mentioned he had run it in Toad instead of SQL*Plus. Within a second of saying this, he figured out what had happened:

* After the first 75% of the insert statements had been processed, the connection to the database was terminated and the transaction was rolled back by the database (so the ACID properties held up after all).
* Upon noticing the connection error, Toad had automatically reconnected to the database and continued with the final 25% of the script in this new session, as if nothing had happened.

I have never seen issues like these while using SQL*Plus. Sure, you can lose your connection to the database while running a script in SQL*Plus on a client as well, but at least it won’t automatically reconnect and continue as if nothing happened. Even better: SQL*Plus is available on the database server at no extra licence cost, so you can run your scripts on the server itself and not worry about potential network problems.
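The failure mode is easy to reproduce in a toy model: a session’s uncommitted work is rolled back when the connection drops, while a client that silently reconnects keeps feeding the remaining statements to a fresh session. A Python sketch of the scenario (a simulation of the behaviour described above, not of Toad or Oracle):

```python
# Toy database: rows become durable only on commit; a dropped connection
# rolls back whatever the session had not yet committed.
class ToyDatabase:
    def __init__(self):
        self.committed = []
        self.pending = []
    def insert(self, row):
        self.pending.append(row)
    def commit(self):
        self.committed += self.pending
        self.pending = []
    def drop_connection(self):
        self.pending = []          # implicit rollback, as ACID requires

def run_with_autoreconnect(db, rows, fail_after):
    """Client that reconnects silently, like the GUI tool in the story."""
    for i, row in enumerate(rows):
        if i == fail_after:
            db.drop_connection()   # network hiccup: earlier work rolled back
        db.insert(row)             # new session carries on as if nothing happened
    db.commit()                    # only the post-reconnect rows survive

db = ToyDatabase()
run_with_autoreconnect(db, rows=list(range(4000)), fail_after=3000)
print(len(db.committed))           # 1000: only a quarter of the inserts persist
```

The spool file would still show all 4000 inserts succeeding and no ORA errors, which is exactly why the partial result was so confusing to diagnose.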

For convenience’s sake it might be tempting to use Toad on a client instead of SQL*Plus on the server, but after reading these examples I hope you are aware of the extra risks that this convenience brings.

The post The caveats of running .sql scripts with GUI tools appeared first on AMIS Blog.

]]>
http://technology.amis.nl/2014/12/10/caveats-running-sql-scripts-gui-tools/feed/ 2
RAC and preventing Active Data Guard: My experiences http://technology.amis.nl/2014/12/08/rac-preventing-active-data-guard-experiences/ http://technology.amis.nl/2014/12/08/rac-preventing-active-data-guard-experiences/#comments Mon, 08 Dec 2014 14:16:54 +0000 http://technology.amis.nl/?p=33292 When you happen to have a customer that wants to use Data Guard on Oracle RAC without a license for Active Data Guard then you might want to read this post. You probably have searched on the internet (just like I did) and already found this nice post from Uwe: http://uhesse.com/2013/10/01/parameter-to-prevent-license-violation-with-active-data-guard/ If you haven’t already then [...]

The post RAC and preventing Active Data Guard: My experiences appeared first on AMIS Blog.

]]>
If you happen to have a customer that wants to use Data Guard on Oracle RAC without a license for Active Data Guard, you might want to read this post.

You probably have searched on the internet (just like I did) and already found this nice post from Uwe: http://uhesse.com/2013/10/01/parameter-to-prevent-license-violation-with-active-data-guard/

If you haven’t already then go ahead and read it, but come back when you also use RAC.

I had set up Data Guard between two RAC databases, with Data Guard being managed by the DG Broker.

Environment

Oracle 11.2.0.4.2 EE on Red Hat Linux 6.5 using ASM. Two RAC nodes in each cluster. A Primary and a Physical Standby with redo apply on.


Test scenarios

  1. Crash node PRIMARY1 and get it running again.
  2. Crash node STANDBY1 and get it running again.
  3. Switchover from PRIMARY to STANDBY and back.
  4. Failover from PRIMARY to STANDBY and back.

Test 1 works like a charm: the services fail over to node PRIMARY2, and bringing up node PRIMARY1 causes no problems.
Test 2 also works as designed. The apply process gets started on the other node. No problems there.
Tests 3 and 4 also seem to work as expected, but on closer inspection it appears that the standby database is opened READ ONLY WITH APPLY!

That is NOT what I wanted; we cannot be using Active Data Guard! So I went looking for a solution and found the blog post mentioned above. I created a service request with Oracle Support to see if there were any known issues with this parameter, but none were mentioned to me.

Trying to open the standby database now results in: ORA-16669: instance cannot be opened because the Active Data Guard option is disabled. A bit rough, but at least you won’t accidentally use a license option you do not have.

Test again

Executing test 1 again: oops! While trying to start the instance again I get:

ORA-01105: mount is incompatible with mounts by other instances
ORA-03175: parameter _query_on_physical mismatch

The only way to get past this is to shut down the other RAC instance as well and start them both together!

Thus: setting the parameter _query_on_physical to false means you have to restart the whole database after an instance crash!

Digging Deeper

It appears that when we performed a switchover or a failover, the startup options in clusterware weren’t adjusted by the broker. The primary database has a start_mode of OPEN; check with:

srvctl config database -d <dbname>

But the standby database should have a start_mode of MOUNT, so that the database is only mounted when somebody uses the srvctl start database -d <dbname> command. When the primary database fails over to the standby database, these start options should therefore be adjusted by the broker, but they aren’t! As a result the standby database keeps the start_mode of the former primary database, which is OPEN, and vice versa for the new primary database. The broker still opens the new primary database, which is what we want (even when its start_mode is set to MOUNT), but the new standby database sometimes gets opened as well (READ ONLY WITH APPLY). And when you experience a cluster reboot (by a power outage, for example), clusterware will restart the databases with the specified start_mode, and thus open your standby database and activate Active Data Guard.

It appears that there is a patch for this issue: 15986647. So we needed to apply that; do check whether it isn’t already included in the patches you have installed.

Conclusion

In our case setting the parameter _query_on_physical to false was not an option because of the side effect on an instance crash. It’s always a good idea to test something yourself as well.

The post RAC and preventing Active Data Guard: My experiences appeared first on AMIS Blog.

]]>
http://technology.amis.nl/2014/12/08/rac-preventing-active-data-guard-experiences/feed/ 0
Securing OHS environments with latest SSL TLS protocols and SHA-2 certificates http://technology.amis.nl/2014/12/07/securing-ohs-environments-latest-ssl-tls-protocols-sha-2-certificates/ http://technology.amis.nl/2014/12/07/securing-ohs-environments-latest-ssl-tls-protocols-sha-2-certificates/#comments Sun, 07 Dec 2014 17:44:32 +0000 http://technology.amis.nl/?p=33257 Customer case A while ago I was contacted by a customer about their old Oracle Application and Weblogic Server environment. They were receiving complaints from users that they can’t connect to the secure site any longer. Most of the complaints came from users that just recently updated their tablet or smartphone. After a quick look [...]

The post Securing OHS environments with latest SSL TLS protocols and SHA-2 certificates appeared first on AMIS Blog.

]]>
Customer case

A while ago I was contacted by a customer about their old Oracle Application Server and WebLogic Server environment.
They were receiving complaints from users that they could no longer connect to the secure site. Most of the complaints came from users who had just recently updated their tablet or smartphone.
After a quick look in the logs of the OHS servers, I found out that the problem had to do with the SSL protocols being used.
The servers offered connections over either SSLv3 or TLSv1.0, while the devices requested at least TLSv1.1.

The environment consists of an Oracle HTTP Server 10.1.x, for SSO, in front of their application server.
For the applications they are using OHS 11.1.1.x in front of a mix of applications, varying from OC4J 10.1.2 all the way up to 11.1.1, including Oracle Forms and Reports.
Unfortunately, due to the complexity of these components, they were not able to upgrade the environment in time.

SSL Current Situation



Requirements

The customer asked us to provide a solution with the following requirements:

  • Disable the old, insecure SSLv3
  • Enable TLSv1.1 and TLSv1.2 for all sites
  • Current hostnames for the URLs must not change
  • Support SHA-2 SSL certificates for all sites

Circumstances I had to take into account

  • Oracle HTTP Server (OHS) 10.1.x and 11.1.1.x do not support TLS 1.1 and TLS 1.2.
    This is due to the Oracle NZ layer used by OHS 10g/11g for its SSL implementation, which doesn’t support TLS 1.1/1.2.
  • There is no support for SHA2 certificates (SHA256 or SHA512) or algorithms in Oracle Application Server 10g (10.1.2.X.X/10.1.3.X.X)
  • SHA2 is certified for Fusion Middleware 11g (11.1.1.X) with caveats
  • As part of their SHA-2 migration plan, Microsoft, Google, and Mozilla have announced that they will stop trusting SHA-1 certificates.
    Google will begin phasing out trust in SHA-1 certificates in November 2014.
  • Replacing the old 11.1.1.x OHS with FMW Webtier 12.1.3.0 is not an option:
    OSSO from the 10.1.x app server is being used, and in FMW Webtier 12.1.x the mod_osso module is no longer supported.

Note: Oracle Traffic Director on Exalogic is also based on FMW 11.1.1.x!

Solution

There are several options to meet the requirements set by the customer.
Unfortunately the best solution, upgrading the environment, cannot yet be implemented.

In this case the requirements were met by placing a reverse proxy in front of the entire environment.
The reverse proxy acts as an SSL terminator for client connections using the latest SHA-2 SSL Certificates.
To encrypt the connection between the reverse proxy and the backend OHS, using TLSv1.0, I generated self-signed SHA-1 certificates compatible with the old servers.

As a reverse proxy I had the choice between Oracle Fusion Middleware 12c 12.1.3 Webtier and plain Apache HTTP Server.
I decided to go with Apache HTTP Server.

The reasons for this choice were:

(Security) updates – (security) updates are released more frequently for plain Apache than for Webtier
Easier to maintain – the server will be managed by Linux engineers, not the Oracle engineers
Smaller footprint – I only need the reverse proxy functionality, not all the fancy stuff that comes with Oracle Webtier

SSL Installed Solution


Pretty much all requirements were met by using the latest Apache with the correct SSL settings and new SSL Certificates.

For one requirement we needed to play a little trick:

Current hostnames for the URLs must not change
After setting up the reverse proxy, all DNS entries for the URLs’ hostnames were changed to the IP addresses of the reverse proxy.
To enable the reverse proxy to do its work, I placed the old IP addresses in the local hosts file of the server running Apache HTTP Server.
So client browsers access the URLs via DNS, which resolves to the reverse proxy, which in turn resolves the backend hostnames using /etc/hosts.
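The combination of TLS termination, re-encryption towards the old backend and host-header preservation can be sketched in a single virtual host. This is a minimal, hypothetical fragment (made-up hostname and certificate paths; the directives are standard mod_ssl/mod_proxy ones, assuming an Apache/OpenSSL build recent enough to negotiate TLSv1.1/1.2 on the front):

```apache
<VirtualHost *:443>
    ServerName apps.example.com

    # Front: terminate TLS with the new SHA-2 certificate, no SSLv2/SSLv3
    SSLEngine on
    SSLProtocol all -SSLv2 -SSLv3
    SSLCertificateFile    /etc/httpd/ssl/apps.example.com.crt
    SSLCertificateKeyFile /etc/httpd/ssl/apps.example.com.key

    # Back: re-encrypt towards the old OHS, which only speaks up to TLSv1.0
    SSLProxyEngine on
    SSLProxyProtocol TLSv1

    # Keep the Host header so the backend virtual hosts keep working; the same
    # hostname resolves to the old backend IP via this server's /etc/hosts
    ProxyPreserveHost On
    ProxyPass        / https://apps.example.com/
    ProxyPassReverse / https://apps.example.com/
</VirtualHost>
```

Note how proxying to the same hostname only works because of the /etc/hosts trick described above: on the proxy itself the name resolves to the old backend address, while for clients it resolves to the proxy.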

Final thoughts

It was not my intention to describe the complete setup of an Apache-based reverse proxy here.
There are tons of how-tos, blogs, etc. that describe all the setups and features.
The main purpose of this article is to make people aware of the fact that there are some upcoming changes in SSL security that can have a direct impact on your environment.

In the case described above, users were already experiencing problems with mobile devices and tablets. And as I finished the setup, their developers discovered that Java 1.8 uses TLSv1.2 by default.
So a problem they had not yet related to SSL protocols was solved in the process.

As a reminder:

Oracle supports the use of TLSv1.1 and TLSv1.2 as of version FMW 12.1.x
Oracle supports the use of SHA-2 as of FMW 11.1.1.x (with caveats)

Related Oracle support notes:
Does Oracle HTTP Server (OHS) 10g Or Higher Support TLS 1.1 and TLS 1.2? (Doc ID 1503476.1)
Using OHS 12c With TLS 1.1 and 1.2 Protocols as an SSL Reverse-Proxy to OHS 11g (Doc ID 1920143.1)
Is SSLHonorCipherOrder and TLS 1.1/1.2 Supported for Oracle HTTP Server? (Doc ID 1485047.1)
How to Change SSL Protocols (to Disable SSL 3.0) in Oracle Fusion Middleware Products (Doc ID 1936300.1)

The post Securing OHS environments with latest SSL TLS protocols and SHA-2 certificates appeared first on AMIS Blog.

]]>
http://technology.amis.nl/2014/12/07/securing-ohs-environments-latest-ssl-tls-protocols-sha-2-certificates/feed/ 4
Bulk authorizing Oracle Unified Directory (OUD) users by adding them to OUD groups from the Linux/Unix Command Line http://technology.amis.nl/2014/12/03/bulk-authorizing-oracle-unified-directory-oud-users-adding-oud-groups-linuxunix-command-line/ http://technology.amis.nl/2014/12/03/bulk-authorizing-oracle-unified-directory-oud-users-adding-oud-groups-linuxunix-command-line/#comments Wed, 03 Dec 2014 14:52:00 +0000 http://technology.amis.nl/?p=33238 When using Oracle Unified Directory (OUD) as an identity store, it is in some occasions needed to add OUD users to OUD groups by hand. When you have to grant privileges to one user, this is easily done through the Oracle Directory Services Manager (ODSM) interface. However doing so for more than one user and [...]

The post Bulk authorizing Oracle Unified Directory (OUD) users by adding them to OUD groups from the Linux/Unix Command Line appeared first on AMIS Blog.

]]>
When using Oracle Unified Directory (OUD) as an identity store, it is on some occasions necessary to add OUD users to OUD groups by hand. When you have to grant privileges to one user, this is easily done through the Oracle Directory Services Manager (ODSM) interface. However, doing so for more than one user and more than one group can easily turn into a dreadful job. Luckily there are some command line utilities which can do that for you. In this blog I’ll guide you through the process of how I did that with a given list of user names (e.g. Steven King, Neena Kochhar etc.) and a given list of groups (e.g. cn=Marketing,cn=Groups,dc=oracle,dc=org etc.).

All utilities used in this blog can be found in the ORACLE_HOME/bin directory of OUD. In order to use them you have to set the ORACLE_HOME environment variable:

export ORACLE_HOME=<your-systems-location>

Find UIDs

To begin with, we have to find the unique user IDs (UIDs) of the given users by their names (the displayname attribute):

while read p; do
./ldapsearch -h <host> -p <port> -D cn=orcladmin -w <password> "displayname=$p" uid | grep cn
done > users.ldif <<EOF
Steven King
Neena Kochhar
Lex De Haan
EOF

It will give you a file users.ldif like:

cn=SKING,cn=Users,dc=oracle,dc=com
cn=NKOCHHAR,cn=Users,dc=oracle,dc=com
cn=LDEHAAN,cn=Users,dc=oracle,dc=com

You should check the file carefully for correct entries, since display names are not unique. You might end up with duplicates (more than one UID for the same displayname). Another flaw is that you might miss some UIDs due to spelling or unclear conventions in the displayname: you might see displayname occurrences of both “firstname lastname” and “lastname, firstname” in one OUD instance.
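A quick way to catch both problems before going further is to compare the searched names against the DNs that came back: every name should yield exactly one DN. The following is a small hypothetical helper, not part of the OUD toolset, assuming the one-DN-per-line layout shown above:

```python
# Sanity-check the users.ldif result: every searched displayname should
# have produced exactly one DN, and no DN should appear twice.
def check_user_list(names, dns):
    issues = []
    if len(dns) != len(names):
        # A name matched zero entries (misspelling) or several (duplicates)
        issues.append(f"expected {len(names)} DNs, got {len(dns)}")
    dupes = sorted({dn for dn in dns if dns.count(dn) > 1})
    issues.extend(f"duplicate DN: {dn}" for dn in dupes)
    return issues

names = ["Steven King", "Neena Kochhar", "Lex De Haan"]
dns = [
    "cn=SKING,cn=Users,dc=oracle,dc=com",
    "cn=NKOCHHAR,cn=Users,dc=oracle,dc=com",
]
print(check_user_list(names, dns))  # ['expected 3 DNs, got 2']
```

An empty result list means the file at least has the right shape; it still will not catch a misspelled name that happens to match some other single entry, so a visual check remains worthwhile.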

Add the found users to a given list of groups

First we have to create an LDIF file which can do that for us (using the users.ldif file created before):

while read p; do
  echo dn: $p
  echo "changetype: modify"
  echo "add: uniquemember"
  while read u; do echo uniquemember: $u; done < users.ldif; echo
done > authorizations.ldif <<EOF
cn=Administration,cn=Groups,dc=oracle,dc=org
cn=Marketing,cn=Groups,dc=oracle,dc=org
EOF

This will create a file authorizations.ldif like:

dn: cn=Administration,cn=Groups,dc=oracle,dc=org
changetype: modify
add: uniquemember
uniquemember: cn=SKING,cn=Users,dc=oracle,dc=com
uniquemember: cn=NKOCHHAR,cn=Users,dc=oracle,dc=com
uniquemember: cn=LDEHAAN,cn=Users,dc=oracle,dc=com

dn: cn=Marketing,cn=Groups,dc=oracle,dc=org
changetype: modify
add: uniquemember
uniquemember: cn=SKING,cn=Users,dc=oracle,dc=com
uniquemember: cn=NKOCHHAR,cn=Users,dc=oracle,dc=com
uniquemember: cn=LDEHAAN,cn=Users,dc=oracle,dc=com

Then use it to add the authorizations to OUD (first try it with the -n flag for testing purposes):
./ldapadd -h <host> -p <port> -D cn=orcladmin -w <password> -n -f authorizations.ldif

Eventually, when adding the users by running ldapadd without the -n option, you should test whether everything worked correctly. For this purpose the ldapsearch utility can be used.
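Besides querying the live directory with ldapsearch afterwards, the generated authorizations.ldif can also be sanity-checked offline before it is ever applied. This is a hypothetical helper that only assumes the file layout shown above (one `dn:` line per group, followed by its `uniquemember:` lines):

```python
# Parse the generated LDIF into {group_dn: [member_dns]} so the intended
# memberships can be inspected before running ldapadd for real.
def parse_group_mods(lines):
    groups, current = {}, None
    for line in lines:
        line = line.strip()
        if line.startswith("dn: "):
            current = line[len("dn: "):]
            groups[current] = []
        elif line.startswith("uniquemember: ") and current:
            groups[current].append(line[len("uniquemember: "):])
    return groups

ldif = """\
dn: cn=Marketing,cn=Groups,dc=oracle,dc=org
changetype: modify
add: uniquemember
uniquemember: cn=SKING,cn=Users,dc=oracle,dc=com
uniquemember: cn=NKOCHHAR,cn=Users,dc=oracle,dc=com
"""
groups = parse_group_mods(ldif.splitlines())
print(groups["cn=Marketing,cn=Groups,dc=oracle,dc=org"])
# ['cn=SKING,cn=Users,dc=oracle,dc=com', 'cn=NKOCHHAR,cn=Users,dc=oracle,dc=com']
```

A quick `assert` on the expected group sizes in such a parse, combined with the -n dry run, catches most generation mistakes before they reach the directory.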

The post Bulk authorizing Oracle Unified Directory (OUD) users by adding them to OUD groups from the Linux/Unix Command Line appeared first on AMIS Blog.

]]>
http://technology.amis.nl/2014/12/03/bulk-authorizing-oracle-unified-directory-oud-users-adding-oud-groups-linuxunix-command-line/feed/ 0
MAF 2.0 : Custom Toggle Springboard Functionality (or how I discovered AdfmfSlidingWindowUtilities) http://technology.amis.nl/2014/12/03/maf-2-0-custom-toggle-springboard-functionality-discovered-adfmfslidingwindowutilities/ http://technology.amis.nl/2014/12/03/maf-2-0-custom-toggle-springboard-functionality-discovered-adfmfslidingwindowutilities/#comments Wed, 03 Dec 2014 12:24:46 +0000 http://technology.amis.nl/?p=33230 Mobile apps usually have the possibility to toggle the springboard by using an icon that is displayed in the header of the app. The Oracle MAF reference app, Work Better, also tries to implement this behavior. The showing of the springboard works fine, however, hiding it does not really work as expected. In this post [...]

The post MAF 2.0 : Custom Toggle Springboard Functionality (or how I discovered AdfmfSlidingWindowUtilities) appeared first on AMIS Blog.

]]>
Mobile apps usually offer the possibility to toggle the springboard by using an icon that is displayed in the header of the app. The Oracle MAF reference app, Work Better, also tries to implement this behavior. Showing the springboard works fine; however, hiding it does not really work as expected. In this post I show you how to implement working custom toggle springboard functionality.

Default Toggle Springboard Implementation

First, let’s take a look at how the toggle springboard functionality works out of the box. In your application configuration file you need to set “Show Springboard Toggle Button” to true in order to enable the toggle functionality.


All the rest is taken care of by the framework at runtime, and this setting results in the default toggle springboard icons showing up on both iOS and Android. Note that this of course also works with a custom springboard.

An obsolete way to implement Custom Springboard Toggle (you might want to skip reading this)

Now let’s see what we need to implement the custom functionality. First we need to show the springboard. This can be done by calling gotoSpringboard() on the AdfmfContainerUtilities class, or by invoking it from the ApplicationFeatures data control.

 AdfmfContainerUtilities.gotoSpringboard(); 

This is all pretty straightforward and provided by the framework.
Second, we need to be able to hide the springboard. There is no particular built-in for this; however, when you call gotoFeature(), the springboard is hidden and the requested feature is displayed.

AdfmfContainerUtilities.gotoFeature("feature.id");  

That works OK, but what if you don’t want to select a feature to go to, and simply want to stay on the already active feature? In that case we could really use a hideSpringboard() method, or something similar.

If we create a hideSpringboard() and combine it under one button with gotoSpringboard(), we can use this one button to show and hide the springboard. In order to implement all this we also need to know whether or not the springboard is visible. For that we can use a simple custom property in a bean; let’s call it springboardToggleFlag.

Whenever the springboard is shown, we invert the state of the springboardToggleFlag:

springboardToggleFlag=!springboardToggleFlag;  

and the app knows the state of the springboard. All that we need to do from here is find a way to nicely show and hide the springboard.

While figuring out how to implement the rest of this example, I was surprised to find that the solution is already provided by the framework, is somewhat documented, and is available in the public samples provided by Oracle. Because I never had any previous requirement to implement this functionality, I totally missed that Oracle added this to the framework. I am also not sure in which specific version it was added. I know now that the MAF 2.0.0 docs mention it very briefly, and the MAF 2.0.1 docs describe it in a more elaborate way, including a sample app. The API documentation was already available in MAF 2.0.0. Below you can read the details on where to find these samples and docs.

The (Somewhat) Out of the Box Implementation

By implementing the oracle.adfmf.framework.api.AdfmfSlidingWindowUtilities interface in the application lifecycle listener (ALCL), you can use an application feature as a sliding window, which displays concurrently with the other application features that display within the navigation bar or springboard. You can use a sliding window to display content that is always present within the application, such as a springboard.

An example of the implementation can be found in a workspace called “slidingWindows”, which is part of the Public Samples. This application demonstrates the use of the AdfmfSlidingWindowUtilities API, which can be used to display multiple features on the screen at the same time, and shows how you can create a custom springboard using this API.

Note that the sliding window API can only be used for features defined within the application that do not appear in the navigation bar and are not designated as the springboard feature. So in order to make a custom springboard that nicely slides in and out of view, we need to instruct the app that it has NO springboard, and create a custom feature that functions as a springboard.
All the other details of this implementation can be found in the sample app.

Resources

https://docs.oracle.com/middleware/mobile201/mobile/ADFMF.pdf
https://docs.oracle.com/middleware/mobile201/mobile/OEPMF.pdf (is missing the description of the Sample app)
https://docs.oracle.com/middleware/mobile201/mobile/api-ref/oracle/adfmf/framework/api/AdfmfSlidingWindowUtilities.html
http://docs.oracle.com/middleware/mobile201/mobile/api-ref/oracle/adfmf/framework/api/AdfmfSlidingWindowOptions.html

The post MAF 2.0 : Custom Toggle Springboard Functionality (or how I discovered AdfmfSlidingWindowUtilities) appeared first on AMIS Blog.

SQL*Plus / SQL*Net Dead Connection Detection http://technology.amis.nl/2014/11/28/sqlplus-sqlnet-dead-connection-detection/ http://technology.amis.nl/2014/11/28/sqlplus-sqlnet-dead-connection-detection/#comments Fri, 28 Nov 2014 18:50:53 +0000 http://technology.amis.nl/?p=33213 Recently I came across the situation where I knew for a fact that my sessions to the database were dead because I pulled the power plug out of my application server for a failover test. But the sessions stayed visible in the database and kept their locks therefore the failover failed. Now how is that possible? First let [...]

The post SQL*Plus / SQL*Net Dead Connection Detection appeared first on AMIS Blog.

Recently I came across a situation where I knew for a fact that my sessions to the database were dead, because I had pulled the power plug out of my application server for a failover test. But the sessions stayed visible in the database and kept their locks, and therefore the failover failed.

Now how is that possible?

First let me sketch the layout of the setup.

Suppose you have a vendor-supplied application that runs on their middleware. That middleware uses two application servers in a cluster, with only one of the application servers communicating with the database. Let's name this one AppServerOne.

The second application server (AppServerTwo) contacts AppServerOne when its clients need to talk to the database. But when AppServerOne becomes unresponsive, AppServerTwo will try to set up a connection to the database and check whether AppServerOne is indeed no longer handling the database requests.

Environment sketch

 

In this case the middleware checks if AppServerOne has released a lock in the database that AppServerTwo can then take.

When I kill AppServerOne by killing an important process on that server, AppServerTwo indeed takes over and the clients do not experience a problem. But I want to make this a bit more realistic and decide to pull a power plug.

AppServerTwo still notices the problem with AppServerOne and tries to obtain the lock, but it fails to do so. AppServerTwo gives up trying after about 15 minutes.

High Availability down the drain! Now what?

I repeat the test and look a bit closer at what is happening in the database.
In the first test the sessions from AppServerOne disappear from the database, but in the second test they are still there after 18 minutes; in fact they disappear somewhere between 18 and 21 minutes.

This is not acceptable: clients should not notice the failover, and here they clearly do.

Of course I had created a sqlnet.ora on the database server with sqlnet.expire_time specified to enable Dead Connection Detection:

$ cat sqlnet.ora
sqlnet.expire_time = 2

But that is apparently not working as expected.

The vendor had specified that the following TCP/IP parameters had to be set on all the servers involved.

net.ipv4.tcp_keepalive_time = 180
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.tcp_keepalive_intvl = 6

With these settings a stale socket is detected after 4 minutes (180 + (10 * 6)). So they claim.
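The arithmetic behind that claim: after tcp_keepalive_time seconds of idle time the kernel sends up to tcp_keepalive_probes probes, spaced tcp_keepalive_intvl seconds apart. A quick sketch of the calculation (plain Java, just to show the math — the parameter names mirror the sysctl settings):

```java
// Time (in seconds) before a dead peer is detected via TCP keepalive:
// idle time before probing + (number of probes * interval between probes).
public class KeepaliveMath {
    public static int staleAfterSeconds(int keepaliveTime, int probes, int interval) {
        return keepaliveTime + probes * interval;
    }

    public static void main(String[] args) {
        // Vendor settings: 180 + 10 * 6 = 240 s, i.e. 4 minutes.
        System.out.println(staleAfterSeconds(180, 10, 6));
        // Oracle Linux 6.5 defaults: 7200 + 9 * 75 = 7875 s, over 2 hours.
        System.out.println(staleAfterSeconds(7200, 9, 75));
    }
}
```

Keep in mind that TCP keepalive only applies to idle connections on which the application has enabled SO_KEEPALIVE; when unacknowledged data is pending, the retransmission logic governs instead — which helps explain why these settings alone did not produce the expected result here.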

Their default values are (On Oracle Linux 6.5):

net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200

Hmm? Could the vendor be right after all?

I wanted to get to the bottom of this and needed to test more without causing problems on the application servers. So I created a test with a SQL*Plus session to a database. When I kill that session, the database notices that the session has died and removes it. But when I pull the power plug of the server that initiated the SQL*Plus session, I get the same behavior as before.

Setting the parameters that the vendor wanted didn't change the test result. The SQL*Plus session still stays in the database far too long.

Using Google didn't get me closer to a solution, but searching on Oracle Support led me to these documents:
“Performance problem with Oracle*Net Failover when TCP Network down (no IP address) (Doc ID 249213.1)”
“Tuning TCP/IP parameter in Linux Box for SQLNET (Doc ID 274953.1)”
“Dead Connection Detection (DCD) Explained (Doc ID 151972.1)”

The problem here is that the database cannot reach the network endpoint that initiated the connection (because it is powered off), and the network stack tries too long to reach it before giving up. Once it gives up, the database removes the session. That is probably one of the reasons that Oracle started using VIPs and SCAN listeners for RAC: those are restarted on the surviving nodes and thus reappear on the network.

The parameter that I had to change is: net.ipv4.tcp_retries2.
It defaults to 15 retries.

Once this was set to 3, the database sessions were removed quickly enough: just over 4 minutes, which is still considerably more than the 2 minutes an unsuspecting DBA might expect.

Apparently it isn't a linear function, as the time before a session was removed from the database varied between my tests. The explanation of the net.ipv4.tcp_retries2 parameter didn't tell me why, and I had to give up looking for the reason because I ran out of time.
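A plausible explanation — my reading of the kernel's retransmission behavior, not something the parameter documentation spells out: the retransmission timeout (RTO) starts from the measured round-trip time and roughly doubles on every retry (exponential backoff), capped at a maximum, so the total time is a sum of exponentially growing timeouts rather than retries times a fixed interval. A sketch of that model, assuming an initial RTO of 200 ms and a 120 s cap (both assumptions; the real initial RTO depends on the measured round-trip time, which is why observed times vary):

```java
// Rough model of cumulative TCP retransmission time: the RTO doubles on each
// retry (exponential backoff) and is capped. The initial RTO and cap are
// assumptions for illustration, not exact kernel values.
public class RetransmitMath {
    public static double totalSeconds(int retries, double initialRto, double maxRto) {
        double rto = initialRto, total = 0;
        for (int i = 0; i <= retries; i++) {
            total += rto;
            rto = Math.min(rto * 2, maxRto);
        }
        return total;
    }

    public static void main(String[] args) {
        // With the default net.ipv4.tcp_retries2 = 15 this model gives roughly
        // 924 s (~15 minutes), the same order of magnitude as observed above.
        System.out.println(totalSeconds(15, 0.2, 120.0));
    }
}
```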

If you are reading this blog because you have a similar problem, please make sure you test your situation carefully. And do not forget to make your changes reboot-persistent: on Linux 6 you edit /etc/sysctl.conf, but that can change in future versions.

Hope This Helps

Christmas Masterclass Oracle SOA Suite 12c http://technology.amis.nl/2014/11/20/christmas-masterclass-oracle-soa-suite-12c/ http://technology.amis.nl/2014/11/20/christmas-masterclass-oracle-soa-suite-12c/#comments Thu, 20 Nov 2014 15:16:56 +0000 http://technology.amis.nl/?p=33188 On Friday, December 19, AMIS organises a special XMas-terclass on SOA Suite 12. SOA Suite 12c (June 2014) was a major release for Oracle’s flagship integration product. This release introduces new functionality, higher developer productivity, more robust run-time scalability and performance. 12c brings together development and administration of Service Bus and SOA composite applications, it ships [...]

The post Christmas Masterclass Oracle SOA Suite 12c appeared first on AMIS Blog.

On Friday, December 19, AMIS organises a special Xmas-terclass on SOA Suite 12c.

SOA Suite 12c (June 2014) was a major release of Oracle's flagship integration product. This release introduces new functionality, higher developer productivity, and more robust run-time scalability and performance. 12c brings together development and administration of Service Bus and SOA composite applications, ships new technology adapters as well as an SDK to develop custom adapters, and provides across-the-stack support for REST-style services and JSON-format messages. New facilities were added to the stack, like Managed File Transfer and Enterprise Scheduling Services, and others were better integrated, such as Oracle Event Processing and Business Activity Monitoring.

This masterclass, based on over a year of beta-program participation and extensive research and experimentation, deals with many of the essential topics of SOA Suite 12c. In one day, presenters Robert van Mölken and Lucas Jellema will discuss how SOA Suite 12c will change and enhance the way SOA projects develop services. They will show many live demonstrations of the tools in action, providing you concrete insight into new and improved features. At the end of this masterclass, attendees will know how SOA Suite 12c can benefit their organizations and how they can get started with building skills, migrating applications and using this new release properly.

Throughout the day, participants and presenters have ample opportunity to discuss real-world experience and brainstorm on how to best make use of SOA Suite 12c in their organizations.

All demo code and slides will be made available. Participants will also receive a voucher for the Oracle SOA Suite 12c Handbook by Lucas Jellema (Oracle Press, Spring 2015).

The SOA Suite 12c Xmas-terclass is aimed at developers, solution architects and administrators with SOA Suite 11g experience.

More information & registration

MAF 2.0 : Loading Images in a Background Process – Part I http://technology.amis.nl/2014/11/18/maf-2-0-loading-images-background-process/ http://technology.amis.nl/2014/11/18/maf-2-0-loading-images-background-process/#comments Tue, 18 Nov 2014 09:38:15 +0000 http://technology.amis.nl/?p=32895 Images are heavily used in Mobile apps. For instance a list that contains employees usually shows the images of these employees. This works well when you have a WIFI connection, but what if you are using slower connections ? If you look at a mobile twitter app you will see that, depending on connectivity, images [...]

The post MAF 2.0 : Loading Images in a Background Process – Part I appeared first on AMIS Blog.

Images are heavily used in mobile apps. For instance, a list of employees usually shows the images of those employees. This works well when you have a WIFI connection, but what if you are using slower connections? If you look at a mobile Twitter app you will see that, depending on connectivity, images are loaded instantaneously or delayed. In this post I explain how you can load the images of a list in a background process, after the other content has been loaded.

As mentioned before, a twitter client is able to defer the loading of images whenever a slow connection is detected. You are able to read the content as soon as it is loaded, and the images will show up with a delay, one at a time.

No WIFI connection active


WIFI connection active


Implementing the basics

I was tempted to make a working sample of this behavior in an Oracle MAF app. The best way to prove this concept is to simply build an app containing a simple list showing an image of the employee and the employee's name. The data in this list would typically come from a web service, but for the simplicity of this example I use a POJO. The POJO is based on the Employee POJO from Oracle's WorkBetter sample app. After creating the properties, I make sure that all getters and setters are generated.

public class Employee {
    private int empId;
    private String firstName;
    private String lastName;
    private boolean active = false;
    private String image;

    // constructor and generated getters/setters omitted for brevity
}

There is one extra method, called setImageById(), that takes the empId and constructs the image name. This will be used and explained later in this post. The actual images are provided as part of the application.

    public void setImageById(int id) {
        String image = id + ".png";
        setImage(image);
    }

As a data provider I create a new class called EmployeeService. In the constructor of this class a list of Employees is created, and once this list is ready, a call to setEmpImage() is made. This method calls out to our POJO class to set each employee's image.

public class EmployeeService {
    protected static List s_employees = new ArrayList();

    public EmployeeService() {
        s_employees.add(new Employee(130, "Mary", "Atkinson"));
        s_employees.add(new Employee(105, "David", "Austin"));
        s_employees.add(new Employee(116, "Shelli", "Baida"));
        // .... More
        setEmpImage();
    }

    public void setEmpImage() {
        for (int x = 0; x < s_employees.size(); x++) {
            Employee e = (Employee) s_employees.get(x);
            e.setImageById(e.getEmpId());
        }
    }

    public Employee[] getEmployees() {
        return (Employee[]) s_employees.toArray(new Employee[s_employees.size()]);
    }
}

The class also contains a getEmployees() method that returns all the employees. This getter will be used on the list page that is created next. The creation of the list page is simple: after creating a data control on the EmployeeService class we can just drag and drop the employees collection to the AMX page as a MAF List View. Make sure you pick a list style where you can actually see the images. In this example, after creating the list, we have to make sure that the List View knows where the images are located, so we need to make a small change.

<amx:image source="#{row.image}" id="i2"/>

Because the image path is pics/ this must be changed to:

<amx:image source="pics/#{row.image}" id="i2"/>

Drag and Drop Employees Collection

Now you can deploy the app and see how the images are loaded all at once when you open the app.


All Images Displayed at Once

Implementing the background loading

Now let's see what we need to do to load the images in the background. First of all we must determine the connection type. When it is WIFI, we can load all data, including the images, at once. Otherwise we use two separate calls: the first one reads the employee data, and the second one is started in a background process to load the images of the employees. In this post we do not actually implement it this way — we don't look at the connection type at all. For now we simply assume that there is a slow connection and always load the images in the background, as this was the purpose of this post anyway.
To load images in the background we need to create a class that can run as a separate thread.
This is a new class, BackgroundLoader, which implements the Runnable interface.

package com.blogspot.lucbors.img.mobile.services;

public class BackgroundLoader implements Runnable {
    EmployeeService empServ = null;
    boolean go = false;

    public BackgroundLoader() {
        super();
    }

    public BackgroundLoader(EmployeeService empServ) {
        this.empServ = empServ;
    }

    public void run() {
        while (true) {
            if (go) {
                empServ.loadImage();
            }
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                // ignore and keep polling
            }
        }
    }
}

Note that when this is running, it will call the loadImage() method on the EmployeeService class in a background thread. We will use this BackgroundLoader class as the worker class, of which we create an instance. This instance is passed to a new thread that is created in the setEmpImage() method, by calling startImageLoader(), when we do not have a fast connection. Once the thread is started, setEmpImage() returns immediately.

    public void setEmpImage() {
        for (int x = 0; x < s_employees.size(); x++) {
            Employee e = (Employee) s_employees.get(x);
            if (1 == 0) {
                // this is what we do with a fast connection
                e.setImageById(e.getEmpId());
            }
            if (1 == 1) {
                // this is what we do with a slow connection (or if 1==1)
                startImageLoader();
            }
            providerChangeSupport.fireProviderRefresh("employees");
        }
    }

    private BackgroundLoader loader = new BackgroundLoader(this);
    private Thread worker = new Thread(loader);

    /*
     * Starts the BackgroundLoader thread, which invokes the loadImage method
     * to load the images in a background process.
     */
    public void startImageLoader() {
        setLoaderStarted(true);
        loader.go = true;
        if (!worker.isAlive()) {
            worker.start();
        }
        setLoaderStarted(loader.go);
    }

The run method of the BackgroundLoader class then calls the loadImage() method in the EmployeeService class, as mentioned before. This executes loadImage() on a separate thread so the UI is not locked. The data on the screen is already shown, while loadImage() continues to work in the background to load the images. Note that the UI is not blocked: you can scroll and navigate at your convenience.

Finally, the loadImage() method has one more, very important aspect. Once data in the Employee collection changes — that is, once an image is retrieved — we need to call AdfmfJavaUtilities.flushDataChangeEvent(). This is necessary because the data is updated in a background thread: changes made in background threads are not propagated to the main thread until they are explicitly flushed by calling AdfmfJavaUtilities.flushDataChangeEvent().

    public synchronized void loadImage() {
        int i = 0;
        long time = 1000;
        while (i < filtered_employees.size()) {
            Employee e = (Employee) filtered_employees.get(i);
            int id = e.getEmpId();
            e.setImageById(id);

            try {
                wait(time);
                AdfmfJavaUtilities.flushDataChangeEvent();
            } catch (InterruptedException f) {
                // ignore: continue with the next image
            }
            i++;
        }
    }
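Stripped of the MAF specifics, the pattern is just a worker thread that mutates a shared collection item by item and signals the UI after each change (AdfmfJavaUtilities.flushDataChangeEvent() in MAF). The class and method names below are illustrative, not part of the original sample:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Self-contained sketch of the background-loading pattern: a worker thread
// fills in the images one by one while the calling thread stays responsive.
public class LoaderSketch {
    // thread-safe list standing in for the employee collection
    static final List<String> images = new CopyOnWriteArrayList<>();

    public static Thread startLoader(int[] empIds) {
        Thread worker = new Thread(() -> {
            for (int id : empIds) {
                images.add(id + ".png"); // stands in for e.setImageById(id)
                // In MAF you would call AdfmfJavaUtilities.flushDataChangeEvent()
                // here so the change becomes visible to the main thread.
            }
        });
        worker.start(); // returns immediately; loading continues in the background
        return worker;
    }
}
```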

Now there is no need to make any changes to the UI. It will behave the same as before, with the difference that images are shown not all at once, but one by one. Simply redeploy the app and run it, and you will see the images loading slowly. Note that to mimic a slow connection we wait 1 second after every image; this makes the background loading very visible in the UI. The video below shows the image loading behavior.

Summary
In this post you learned how to use a background process to load images into the UI. This sample can be improved by actually calling a web service to retrieve the data, instead of hardcoding all employees in a POJO. You can then, based on connectivity, load the images in the same thread, or start a background process to call the service that returns the images.

Resources
To look at the WorkBetter demo app that is shipped with Oracle MAF, you can unzip publicSamples.zip and open the WorkBetter app in JDeveloper. The public samples also contain the StockTracker app that is used to demo the background process.
If you want to read more on Mobile Application Framework, you can also buy my book, which is available from Amazon: http://www.amazon.com/Oracle-Mobile-Application-Framework-Developer/dp/0071830855

Wetgeving frustreert vernieuwing http://technology.amis.nl/2014/11/17/wetgeving-frustreert-vernieuwing/ http://technology.amis.nl/2014/11/17/wetgeving-frustreert-vernieuwing/#comments Mon, 17 Nov 2014 06:25:29 +0000 http://technology.amis.nl/?p=33080 Je bent bezig met het creëren van iets nieuws? Een nieuw product of bedrijfsmodel waardoor je organisatie een unieke positie in kan nemen? Je zoekt je uitdaging in het oplossen van uitdagende inhoudelijke hobbels waar je lekker je tanden in kunt zetten. Maar in de praktijk loop je als vernieuwer al snel tegen bestaande wet [...]

The post Wetgeving frustreert vernieuwing appeared first on AMIS Blog.

Robbrecht van Amerongen
Business Innovation Manager

You are working on creating something new? A new product or business model that can give your organization a unique position? You look for your challenge in solving tough, substantive hurdles you can really sink your teeth into. But in practice, as an innovator, you quickly run into existing laws and regulations when it comes to innovation. What do you do then? Conform, or challenge the existing rules…?

Innovation: That's not allowed…!

Innovation is about changing technology, processes and business models. The intention, obviously, is to introduce something new. It is very common to quickly run into rules and legislation, and you will soon have to deal with protectionist measures such as certification, permits and exemptions.

In my view, a number of these rules no longer fit today's society, in which a large part of communication and commerce is digital. Why, for example, do I need a registered place of business when I conduct my activities exclusively online? And why do I need a permit if I want to rent out my home temporarily? Strange…

Disruptive innovation

With the rise of disruptive innovation, you see renewal and competition coming from a completely different direction: another industry, or a totally different product, introduced by parties that do not care about, or are not even aware of, permits and regulations. The introduction of a brand like Uber shows that, from an office in San Francisco, you can have a disruptive impact on the Dutch market for taxi licenses. The industry is dead set against this new company; the market loves it.

The examples of disruptive innovation show that these models quickly run into new regulation. Sometimes the government has to create new legislation, or it abolishes existing rules. The self-driving car provokes a discussion about the competences of the human “driver”: a blind man in the US has already driven 200,000 kilometers in a self-driving car, yet it is still mandatory to equip a car with a steering wheel and a “competent” driver.

Innovation: just do it!

In my experience, legislation always lags behind reality, and when innovating I would certainly not let that stop me. You should, however, consciously ask yourself whether the regulation is intended for safety or for market protection. Innovation thrives best in an environment with a liberal licensing policy, and if that is absent there is always the “just do it” paradigm. Above all, do not be held back by negative and prohibitive voices around you. Just think how much fun the children's series Pippi Longstocking would be if everyone had listened to the little rule-follower Annika.

In my view, laws and regulations mostly frustrate innovation. That is why I would always do something first and only then check whether it is allowed. Perhaps you will have created something great, and the market will force it into becoming an accepted model. Innovation: “That IS allowed!”

 

*This article also appeared in Computable: Wetgeving frustreert vernieuwing
