Monday, February 1, 2021

Chunked Streaming in WebLogic

Usage of Chunked Streaming Mode in a Business Service of OSB 11g

As per the OSB documentation provided by Oracle:

The Chunked Streaming Mode property should be selected if you want to use HTTP chunked transfer encoding to send messages.

Chunked transfer encoding is part of the HTTP 1.1 specification and allows clients to parse dynamic data as soon as the first chunk is read.

Use Chunked Streaming Mode to send messages with HTTP chunked transfer encoding under the HTTP transport configuration of a business service. Do not use this option if you have HTTP redirects configured. Also, try disabling this option if you observe any of the following:

  • The client request gets a read timed out error
  • "Request Entity Too Large" appears with the BEA-380000 error in the logs (OSB 11g)
  • "Request Entity Too Large" appears with HTTP status code 413 in the logs (OSB 12c)
  • The last executed OSB instance continuously retries every 5 minutes

If you disable "Chunked Streaming Mode", OSB may invoke the target system twice for a single invocation. This also affects the default "Exactly Once" behavior of the proxy service's Quality of Service attribute. To fix this, set the "Quality of Service" attribute to "Exactly Once" on the route node of the proxy service's message flow.
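For reference, chunked transfer encoding frames each chunk with its size in hexadecimal followed by CRLF, and terminates the stream with a zero-sized chunk. The following is a minimal sketch of the wire framing in Python (the function name is illustrative; real OSB/HTTP clients do this internally):

```python
def encode_chunked(chunks):
    """Frame an iterable of byte chunks using HTTP/1.1 chunked transfer encoding."""
    out = b""
    for chunk in chunks:
        if chunk:  # a zero-length chunk would prematurely terminate the stream
            out += b"%X\r\n" % len(chunk) + chunk + b"\r\n"
    out += b"0\r\n\r\n"  # the zero-sized chunk marks the end of the message
    return out

encoded = encode_chunked([b"Hello, ", b"world!"])
# Each chunk is prefixed with its size in hex: 7 for "Hello, ", 6 for "world!"
```

Because the total size is not known up front, the sender can start transmitting before the full payload is generated, which is why this mode interacts badly with redirects (the body cannot simply be replayed).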

Monday, May 25, 2015

Handling End Entity Key Compromises using OCSP Stapling

While CA certificate breaches are fairly uncommon, end entity key compromise occurs much more often. Whether due to a server breach, a stolen laptop, or a lost smart card, these compromises occur daily. Fortunately, modern PKI systems are designed with this in mind: CAs can revoke certificates and publish revocation information in the form of CRLs, or provide online revocation status using OCSP.

OCSP is an internet protocol, based on the X.509 public key infrastructure, used to check the validity status of a certificate in real time whenever a browser wants to establish HTTPS connectivity with a server. It is an alternative to the CRL (certificate revocation list).
A CRL requires downloading a large payload and is extremely time consuming when connecting to the CRL issuer. Also, revocation information may not be up to date when it needs to be used, for example for pushing updates to mobile devices, since applications using a CRL are not required to stay connected to the CRL issuer.

Advantages of using OCSP:

OCSP allows clients to query an OCSP server about the revocation status of individual certificates.

1. Revocation information is more likely to be current, as the client can obtain the revocation status of a certificate immediately.
2. OCSP does not require much storage space, unlike a CRL, as only the certificate under consideration needs to be checked.

Disadvantages of using OCSP:

1. The application that uses the certificate must be online to determine its revocation state.


OCSP specifics can be found in RFC 2560  -   http://datatracker.ietf.org/doc/rfc2560/

An alternative to directly pushing revocation information as part of browser updates is OCSP Stapling, formally known as the TLS Certificate Status Request extension.

OCSP Stapling improves the handshake between the browser and the server: the web server hosts a digitally signed, timestamped OCSP response from the CA, refreshed from the CA at pre-defined intervals set by the CA. This stapled OCSP response allows the web server to include the OCSP response within the initial SSL handshake, without an additional call to the CA server. The web server, rather than the client, queries the certificate status from the OCSP server at a regular interval.

OCSP Stapling specifics can be found in RFC 6066 -  http://datatracker.ietf.org/doc/rfc6066/

Advantages of using OCSP Stapling:

1. Improves the connection speed of the SSL handshake by combining two separate requests into one, reducing the time required to load an encrypted web page.
2. Helps maintain the privacy of the end client/user: no direct connection using the end user's IP address is made for the OCSP request, and the CA server only sees OCSP requests from the web server, not from end users.
3. When a user connects to a captive portal or hotspot, the client application cannot perform the OCSP check itself, because internet access has not yet been granted pending authentication. With OCSP stapling this is not a problem, since the revocation status is delivered within the SSL/TLS handshake.

Disadvantages of using OCSP Stapling:

1. Not all browser versions support OCSP Stapling.

Elements of an OCSP Request:


OCSPRequest     ::=     SEQUENCE {
  tbsRequest                  TBSRequest,
  optionalSignature   [0]     EXPLICIT Signature OPTIONAL }
TBSRequest      ::=     SEQUENCE {
  version             [0]     EXPLICIT Version DEFAULT v1,
  requestorName       [1]     EXPLICIT GeneralName OPTIONAL,
  requestList                 SEQUENCE OF Request,
  requestExtensions   [2]     EXPLICIT Extensions OPTIONAL }
Request         ::=     SEQUENCE {
  reqCert                     CertID,
  singleRequestExtensions     [0] EXPLICIT Extensions OPTIONAL }
CertID          ::=     SEQUENCE {
  hashAlgorithm       AlgorithmIdentifier,
  issuerNameHash      OCTET STRING,
  issuerKeyHash       OCTET STRING,
  serialNumber        CertificateSerialNumber }
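To illustrate the CertID structure above: issuerNameHash and issuerKeyHash are digests (SHA-1 in the baseline RFC 2560 profile) computed over the issuer's DER-encoded name and public key. A sketch using only the standard library follows; the input bytes and function name are placeholders, not a real certificate or a real OCSP library API:

```python
import hashlib

def make_cert_id(issuer_name_der, issuer_key_der, serial_number):
    """Build the field values of an OCSP CertID (DER encoding itself omitted)."""
    return {
        "hashAlgorithm": "sha1",  # the RFC 2560 baseline hash algorithm
        "issuerNameHash": hashlib.sha1(issuer_name_der).digest(),
        "issuerKeyHash": hashlib.sha1(issuer_key_der).digest(),
        "serialNumber": serial_number,
    }

# Placeholder inputs; in practice these come from the issuer certificate
cert_id = make_cert_id(b"placeholder-issuer-name", b"placeholder-issuer-key", 0x1234)
# Both hashes are 20-byte SHA-1 digests
```

Note that the serial number alone is not globally unique, which is why CertID hashes the issuer's name and key as well.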

Elements of an OCSP Response:


OCSPResponse ::= SEQUENCE {
  responseStatus         OCSPResponseStatus,
  responseBytes          [0] EXPLICIT ResponseBytes OPTIONAL }
ResponseBytes ::= SEQUENCE {
  responseType   OBJECT IDENTIFIER,
  response       OCTET STRING }
BasicOCSPResponse       ::= SEQUENCE {
  tbsResponseData      ResponseData,
  signatureAlgorithm   AlgorithmIdentifier,
  signature            BIT STRING,
  certs                [0] EXPLICIT SEQUENCE OF Certificate
      OPTIONAL }
ResponseData ::= SEQUENCE {
  version              [0] EXPLICIT Version DEFAULT v1,
  responderID              ResponderID,
  producedAt               GeneralizedTime,
  responses                SEQUENCE OF SingleResponse,
  responseExtensions   [1] EXPLICIT Extensions OPTIONAL }
SingleResponse ::= SEQUENCE {
  certID                  CertID,
  certStatus              CertStatus,
  thisUpdate              GeneralizedTime,
  nextUpdate        [0]   EXPLICIT GeneralizedTime OPTIONAL,
  singleExtensions  [1]   EXPLICIT Extensions OPTIONAL }
CertStatus ::= CHOICE {
   good       [0]     IMPLICIT NULL,
   revoked    [1]     IMPLICIT RevokedInfo,
   unknown    [2]     IMPLICIT UnknownInfo }





Friday, July 4, 2014

Factors to Consider for vCPU to pCPU Ratios in VMware

Although several factors need to be considered to arrive at the right vCPU to pCPU ratio, the following are high-level guidelines as per VMware.
1. Keep the ratio in the range of 6:1 to 8:1, even though it is theoretically possible to allocate up to 25:1
2. Keep the CPU Ready metric at 5% or below

The actual ratio may depend on the following factors.

1. The vSphere version: a more recent vSphere version allows more consolidation
2. The processor age: with newer processors, higher consolidation ratios are achievable
3. The different kinds of workloads running on the host

In order to arrive at a realistic ratio that reduces performance problems due to VMware-based virtualization, monitor these metrics with utilities such as vScope Explorer, which is part of VKernel vOPS Server Explorer.
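The 5% CPU Ready guideline above can be checked from vCenter's real-time performance charts, which report CPU Ready as a summation in milliseconds over a 20-second sample interval. A small sketch of the conversion (the function name and default interval are assumptions based on the real-time chart behavior):

```python
def cpu_ready_percent(ready_ms, interval_s=20):
    """Convert a CPU Ready summation (milliseconds) to a percentage of the
    sample interval; vCenter real-time charts sample every 20 seconds."""
    return ready_ms * 100.0 / (interval_s * 1000)

# 1000 ms of ready time in a 20 s sample is exactly the 5% guideline
print(cpu_ready_percent(1000))  # → 5.0
```

Values consistently above 5% per vCPU suggest the host is over-committed and the vCPU:pCPU ratio should be lowered.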



Tuesday, February 11, 2014

WebLogic Work Manager Usage


Thread utilization of a WebLogic Server instance can be controlled by defining rules and constraints
through a Work Manager. Work Manager constraints can be applied either globally to a
WebLogic Server domain or to a specific application component.

Use a Work Manager for thread management in the following scenarios.

1. When one application needs to be given a higher priority over another and the default fair share is not sufficient.
2. When a response time goal is required.
3. To avoid server deadlock by configuring a minimum threads constraint.
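As an illustration, the three scenarios above map onto the request-class and constraint elements of a Work Manager. Below is a sketch of a weblogic.xml deployment descriptor; the names and values are illustrative, not taken from any real configuration:

```xml
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <work-manager>
    <name>HighPriorityWM</name>
    <!-- Scenario 1: a fair-share request class gives this application a
         larger share of threads relative to others (the default share is 50) -->
    <fair-share-request-class>
      <name>HighFairShare</name>
      <fair-share>80</fair-share>
    </fair-share-request-class>
    <!-- Scenario 3: a minimum threads constraint guards against deadlock by
         guaranteeing a number of threads for requests under this Work Manager -->
    <min-threads-constraint>
      <name>MinThreadsConstraint</name>
      <count>5</count>
    </min-threads-constraint>
  </work-manager>
  <!-- Scenario 2 would instead use a response-time-request-class with a
       goal-ms element, since a Work Manager takes only one request class -->
</weblogic-web-app>
```

Global Work Managers with the same structure can also be defined at the domain level in the WebLogic Administration Console and referenced by name from the application.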


Sunday, February 9, 2014

Service Component Architecture (SCA)

Service Component Architecture (SCA) defines a programming model for composite SOA applications. SCA provides a model for the composition of services and for the creation of service components, including the reuse of existing applications within SCA composites. SCA is based on the idea of service composition, also known as orchestration.

The SCA specification consists of four main elements.

1. Assembly Model Specification - This model defines how to specify the structure of a composite application.

2. Component Implementation Specification - This specification defines how a component is actually written in a particular programming language.

3. Binding Specification - This specification defines how the services published by a component can be accessed.

4. Policy Framework Specification - This specification describes how to add non-functional requirements to services.

More information regarding SCA can be found at http://tuscany.apache.org/documentation-2x/sca-introduction.html




Tuesday, January 28, 2014

Finding Linux Machine CPU Architecture Info

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
CPU socket(s):         4
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 47
Stepping:              2
CPU MHz:               2396.863
BogoMIPS:              4793.72
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              30720K
NUMA node0 CPU(s):     0-3

Tuesday, January 14, 2014

Performance Testing - Considerations

Performance testing provides information about an application's speed, stability, and scalability. In general, performance testing uncovers what needs to be improved before the application goes live. Without it, applications are likely to suffer from issues such as running slowly when several users use them simultaneously, or behaving inconsistently across different operating conditions. Performance testing determines whether the software meets speed, scalability, and stability requirements under expected workloads. Applications that go live with poor performance metrics, due to absent or inadequate performance testing, are likely to gain a bad reputation and fail to meet expected business goals.

Common bottlenecks for application performance include, but are not limited to:

  1. CPU utilization
  2. Memory utilization
  3. Network utilization
  4. Operating System limitations
  5. Disk usage
In order to ascertain the performance of an application during different performance testing activities, analyze the following parameters.

  1. Processor Usage – Amount of time each processor spends executing non idle threads.
  2. Hit ratios – The hit ratio measures the fraction of the traffic that is served from the web cache, as well as the number of SQL statements that are handled by cached data instead of expensive I/O operations. This is a good place to start when solving bottlenecking issues.
  3. Hits Per Second – The number  of hits on a web server during each second of a load test.
  4. Rollback Segment - The amount of data that can rollback at any point in time.
  5. Database Locks - Locking of tables and databases needs to be monitored and carefully tuned.
  6. Top Waits – These are monitored to determine which wait times can be cut down when dealing with how fast data is retrieved from memory.
  7. Memory use – Amount of physical memory available to processes on a computer.
  8. Disk time – Amount of time disk is busy executing a read or write request.
  9. Bandwidth – Shows the bits per second used by a network interface.
  10. Committed memory – Amount of virtual memory used.
  11. Memory pages/second – Number of pages written to or read from the disk in order to resolve hard page faults. Hard page faults are when code not from the current working set is called up from elsewhere and retrieved from a disk.
  12. Network bytes total per second – The rate which bytes are sent and received on the interface including framing characters.
  13. Page faults/second – The overall rate in which fault pages are processed by the processor. This again occurs when a process requires code from outside its working set.
  14. CPU interrupts per second – It is the avg. number of hardware interrupts a processor is receiving and processing each second.
  15. Disk queue length – It is the avg. no. of read and write requests queued for the selected disk during a sample interval.
  16. Network output queue length – Length of the output packet queue, in packets. Anything more than two indicates a delay, and the bottleneck needs to be addressed.
  17. Response time – Time from when a user enters a request until the first character of the response is received.
  18. Private bytes – Number of bytes a process has allocated that cannot be shared among other processes. These are used to measure memory leaks and usage.
  19. Throughput – Rate a computer or network receives requests per second.
  20. Amount of connection pooling – The number of user requests that are met by pooled connections. The more requests met by connections in the pool, the better the performance will be.
  21. Maximum active sessions – The maximum number of sessions that can be active at once.
  22. Thread Counts – An applications health can be measured by the no. of threads that are running and currently active.
  23. Garbage Collection – It has to do with returning unused memory back to the system. Garbage collection needs to be monitored for efficiency.
You can use any APM (Application Performance Management), NPM (Network Performance Management), or BTM (Business Transaction Monitoring) tool to analyze the above-mentioned parameters.
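Some of the parameters above are simple ratios that are worth computing yourself when a tool does not report them directly. A sketch of two of them, hit ratio (item 2) and throughput (item 19), with illustrative function names:

```python
def hit_ratio(cache_hits, total_requests):
    """Fraction of traffic served from the cache (parameter 2 above)."""
    if total_requests == 0:
        return 0.0  # avoid division by zero before any traffic arrives
    return cache_hits / total_requests

def throughput(requests, duration_s):
    """Requests handled per second over a test window (parameter 19 above)."""
    return requests / duration_s

print(hit_ratio(950, 1000))   # → 0.95
print(throughput(12000, 60))  # → 200.0
```

Tracking these two numbers across load levels quickly shows whether a slowdown comes from cache misses (falling hit ratio) or from saturation (flat throughput despite rising load).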

When selecting a performance monitoring tool, consider the following factors:
  • Whether it monitors the performance of the databases.
  • Whether it monitors the physical as well as virtual components of the infrastructure.
  • Whether it can understand and map all the components involved in a transaction.
  • Whether it collects response times for a transaction.