Relational Database Storage

Several extensions and code libraries (including Zend Framework) offer session save handlers that store session data in a relational database such as MySQL or Oracle. Storing sessions in a central database removes the scalability limitation imposed by local storage mechanisms: the session data becomes available to all servers in the cluster, and data integrity is maintained. Database session handlers work more or less the same way, regardless of the implementation and the database used for storage: one or more tables are created with the session ID as the primary key and another column for the serialized session data. For each new session, a row is inserted into this table; on each subsequent request with the same session ID, that row is fetched at the beginning of the request and updated at its end. This storage method is clearly more scalable than local storage, but it still has several disadvantages that should be pointed out:
Session storage is very different from the regular data that web applications usually keep in a relational database. Sessions have an almost 1:1 read/write ratio (PHP saves the session at the end of each request that opened it, even if the data did not change), so row-level locking is required to maintain session integrity, and database-resident query caching usually does not work well with such usage patterns. For optimal performance, the session storage database should be isolated from the application’s database and tuned differently, which imposes additional maintenance overhead, at least at higher load levels. When a single database serves an entire cluster, it quickly becomes both a performance bottleneck and a potential single point of failure, so database clustering becomes necessary. Again, this adds maintenance overhead whenever the application scales up.
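To make the mechanics concrete, here is a minimal sketch of such a handler in user-space PHP (PDO against MySQL; the table name, schema, and connection details are illustrative rather than taken from any particular library, and a production handler would need proper error handling):

    <?php
    // Assumed schema (illustrative):
    //   CREATE TABLE sessions (
    //       id       VARCHAR(128) PRIMARY KEY,
    //       data     BLOB NOT NULL,
    //       modified TIMESTAMP NOT NULL
    //   ) ENGINE=InnoDB;   -- InnoDB supplies the row-level locking

    class PdoSessionHandler implements SessionHandlerInterface
    {
        public function __construct(private PDO $pdo) {}

        public function open(string $path, string $name): bool
        {
            // One transaction per request, so the row lock taken in read()
            // is held until the session is written back and committed.
            return $this->pdo->beginTransaction();
        }

        public function read(string $id): string|false
        {
            // SELECT ... FOR UPDATE serializes concurrent requests that
            // share a session ID (the row-level locking discussed above).
            $stmt = $this->pdo->prepare(
                'SELECT data FROM sessions WHERE id = ? FOR UPDATE');
            $stmt->execute([$id]);
            $row = $stmt->fetch(PDO::FETCH_NUM);
            return $row ? $row[0] : '';
        }

        public function write(string $id, string $data): bool
        {
            // Runs at the end of every request that opened the session,
            // hence the near 1:1 read/write ratio.
            return $this->pdo
                ->prepare('REPLACE INTO sessions (id, data, modified)
                           VALUES (?, ?, NOW())')
                ->execute([$id, $data]);
        }

        public function close(): bool
        {
            return $this->pdo->commit();
        }

        public function destroy(string $id): bool
        {
            return $this->pdo
                ->prepare('DELETE FROM sessions WHERE id = ?')
                ->execute([$id]);
        }

        public function gc(int $max_lifetime): int|false
        {
            $stmt = $this->pdo->prepare(
                'DELETE FROM sessions WHERE modified < NOW() - INTERVAL ? SECOND');
            $stmt->execute([$max_lifetime]);
            return $stmt->rowCount();
        }
    }

    session_set_save_handler(
        new PdoSessionHandler(new PDO('mysql:host=db;dbname=app', 'user', 'pass')),
        true);
    session_start();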

Existing Session Storage Engines

There is a multitude of session save handlers available for PHP, each with its own advantages and disadvantages. Given the modular nature of session storage in PHP, users are free to create additional save handlers, either as C/C++ extensions, or by using user-space PHP code.
The following list covers some of the more widely used session handlers and gives an overview of their capabilities:
The ‘files’ save handler is PHP’s default save handler. It stores session data as files on disk, in a specified directory (or sometimes in a tree of directories). Usually, this save handler is capable of easily handling tens of thousands of sessions.
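For reference, this default behavior is controlled by two php.ini directives (the path shown is only an example):

    ; php.ini - the default save handler and its storage location
    session.save_handler = files
    session.save_path = "/var/lib/php/sessions"
    ; a value such as "2;/var/lib/php/sessions" would instead spread the
    ; session files over a two-level directory tree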
The biggest disadvantage of the files save handler is that it imposes a major scalability bottleneck when trying to scale an application to run on more than one server.
In most clusters, requests from the same user may end up on any of the servers in the cluster. If the session information exists locally on only one of the servers, requests landing on other servers will simply create a new session for the same user, resulting in data loss or inconsistency.
One way to work around this problem is to store session data on a shared file system such as NFS (Network File System). Unfortunately, this is a highly inefficient method that usually results in performance problems and data corruption, because NFS was not designed for the read/write rates and concurrency levels that session handling requires.
Another potential solution is to use a “sticky”, session-aware load balancer. Most load balancers today offer stickiness capabilities in some form or another. While this is a better solution, experience shows that under high load sticky load balancers tend to become a bottleneck and cause uneven load distribution. Indeed, most heavy PHP users prefer round-robin load balancing combined with other session storage mechanisms.
In addition, sticky load balancing does not solve another inherent disadvantage of the files session handler: the lack of session redundancy. Many users rely on clustering for application high availability, but in many cases session high availability is also important. If a server crashes or otherwise becomes unavailable, the load balancer will route requests to a different server. The application will still be available – but the session data will be lost, which in many cases may lead to business loss.

Background: PHP Sessions

State Representation in HTTP


HTTP, the protocol on which the web is built, is a stateless protocol. Each HTTP request is independent of any user session context, and on the HTTP protocol level the server is unaware of any relationship between consecutive requests.
This has made HTTP a highly scalable and versatile protocol. However, most web applications require some notion of a user session – that is, short-term, user-specific data storage.
For example, without some form of state representation a web application cannot distinguish between logged-in users (or, technically put, requests coming from an HTTP client that has logged in) and non-logged-in users. In many cases even more complex data, such as the contents of a shopping cart, must be maintained between requests and attached to a specific user or browser.
HTTP leaves the solution of such problems to the application. In the PHP world, as in most web-oriented platforms, two main standard methods exist for storing short-term user specific data: Cookies and Sessions.
HTTP Cookies
Cookies are set by the server and stored by the browser on the end user’s machine. The browser re-sends a cookie to the domain from which it originated until it expires. This allows storing limited amounts of user-specific information and making it available to the application on each request made by the same user. Cookies are convenient and scalable (no storage is required on the server side), but they are also limited, for a number of reasons:
Cookies are limited in size, and the limit varies per browser. Even if the limit is high, large amounts of data sent back and forth in each request may have a negative effect on performance and bandwidth consumption.
Cookies are sent repeatedly, on each request to the server. This means that any sensitive data contained in cookies is exposed to sniffing attacks, unless HTTPS is constantly used – which is in most cases not an effective option.
Cookies store strings – storing other, more complex types of information requires serialization and de-serialization to be handled at the application level.
Cookie data is stored on the client side – and as such, is exposed to manipulation and forgery by end users, and cannot be trusted by the server.
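To illustrate the serialization and trust points above, here is a small PHP sketch (the cookie name and payload are arbitrary examples):

    <?php
    // A cookie holds a string, so structured data must be serialized by the
    // application; JSON is a common choice.
    $cart = ['items' => [42, 7], 'currency' => 'USD'];
    setcookie('cart', json_encode($cart), time() + 3600, '/');

    // On a later request, decode it back - and validate it, because the
    // client can modify cookie contents at will.
    $cart = [];
    if (isset($_COOKIE['cart'])) {
        $decoded = json_decode($_COOKIE['cart'], true);
        if (is_array($decoded)) {
            $cart = $decoded;   // still untrusted input; validate the fields too
        }
    }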

These limitations make it almost impossible to rely on Cookies to solve all state representation problems. As a better solution, PHP offers the Sessions concept.
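In their simplest form, sessions keep the data on the server and hand the browser only an opaque session ID cookie. A minimal usage sketch:

    <?php
    // PHP sends the browser only the session ID cookie (PHPSESSID by
    // default) and keeps the data itself on the server.
    session_start();

    if (!isset($_SESSION['visits'])) {
        $_SESSION['visits'] = 0;
    }
    $_SESSION['visits']++;   // arbitrary PHP values; no manual serialization needed

    echo 'Requests in this session: ' . $_SESSION['visits'];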

Magento: Using Parallel Connections

Browsers can load page elements in parallel. Specifying different domains for the media, skin, and JavaScript URLs in the Magento Enterprise Edition configuration (System→Configuration→GENERAL→Web) helps speed up page rendering in the browser, as most browsers limit downloads to 2-4 parallel connections per domain name.
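For example, with hypothetical domain names, the relevant fields could be set as follows:

    Base Skin URL:       http://skin.example.com/skin/
    Base Media URL:      http://media.example.com/media/
    Base JavaScript URL: http://js.example.com/js/

With the static assets split across several hostnames, the browser can open its per-domain connection limit against each of them at once instead of queuing all downloads behind a single domain.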

Other web page design recommendations are beyond the scope of this document. You can find a detailed list of web site design best practices at the Yahoo Developer Network at http://ping.fm/AXhAv

Magento: Number of HTTP Requests per Page

In order to improve page load and processing time, it is important to reduce the number of HTTP requests per page. Magento Enterprise Edition allows combining multiple JavaScript files and style sheets into a smaller number of files.

This process is fully under the control of the theme developer, who implements it through the flexible system of theme layouts instead of including the JavaScript files directly from within the templates.

The following is an example of how the number of JavaScript files can be reduced:
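(The snippet below is a sketch of such a layout update, using the standard addJs action; the layout handle is assumed, and the script paths are taken from the merged URL shown underneath.)

    <default>
        <reference name="head">
            <action method="addJs"><script>custom_js/gallery.js</script></action>
            <action method="addJs"><script>custom_js/intro.js</script></action>
        </reference>
    </default>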
The layout file example above combines the two scripts into a single file that is added to the page with one request to

js/index.php?c=auto&f=,custom_js/gallery.js,custom_js/intro.js.

Caching in Magento Enterprise Edition-II

Full Page Cache

Magento Enterprise Edition can cache entire page contents and return statically stored (X)HTML files rather than building pages dynamically for each request. Only CMS, category, and product view pages support full page caching.

The homepage is the CMS page that is accessed most frequently. The Full Page Cache allows the web server to significantly increase performance on pages that can be cached statically.

In the tests performed, the Full Page Cache stored each generated page and reused it for subsequent visitors. The cache improved website performance by up to 9.5 times, while the hardware experienced a decreasing load during regular browsing of the website.

Caching in Magento Enterprise Edition-I

System Cache

Magento Enterprise Edition can cache frequently used data using various cache backends. Using a cache backend will always improve the performance of Magento Enterprise Edition. By default, when installed, Magento Enterprise Edition is set to use the file system as a cache backend. While the file system cache is the most reliable storage with unlimited size, it does not provide the best performance.
Magento Enterprise Edition v. 1.9 can also work with the following cache backends, which provide better performance than the file system cache backend:
• APC – a bytecode cache for PHP that also provides shared memory storage for application data
• Memcached – a distributed, high-performance caching system

Caching in Magento Enterprise Edition

Once the cache is enabled and one of the above-mentioned shared memory backends is used, Magento automatically uses a TwoLevel cache backend (see http://ping.fm/cb1yD for more information). In this case, the specified cache backend is used as the fast cache; the file system cache backend is used as the slow cache in single web node environments, and the MySQL database is used as the slow cache in multiple web node environments.

This is fully customizable and can easily be set in app/etc/local.xml, as shown in Appendix F. When using APC, the Source Code Compiler must be disabled.
If you are using APC or memcached as a backend, make sure to configure it with enough memory to hold all cached data; otherwise, the backend may purge the cache hierarchy structure it requires.
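For illustration, a memcached-backed two-level configuration in app/etc/local.xml might look like the following sketch (host, port, and option values are examples; verify the exact node names against Appendix F and your Magento Enterprise Edition version):

    <config>
        <global>
            <cache>
                <backend>memcached</backend>
                <slow_backend>database</slow_backend>
                <memcached>
                    <servers>
                        <server>
                            <host>127.0.0.1</host>
                            <port>11211</port>
                            <persistent>1</persistent>
                        </server>
                    </servers>
                </memcached>
            </cache>
        </global>
    </config>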

Magento: Scaling Web Nodes

Magento Enterprise Edition can be scaled over any number of additional web servers. This allows handling a larger number of concurrent requests by simply introducing new web nodes as the number of page views and visitors grows. Doubling the number of web nodes can provide a performance increase of over 90%.

Magento: Handling Sessions

Magento Enterprise Edition uses PHP sessions to store customer session data.

The default method is file system storage, which works well if you are using a single web server. Its performance can be improved by mounting the session directory on a tmpfs in-memory partition to avoid extra hard drive I/O activity.
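For example, an /etc/fstab entry along the following lines mounts the session directory in memory (the path and size are illustrative and must match your PHP session.save_path and expected session volume):

    tmpfs  /var/lib/php/session  tmpfs  size=512m,mode=0770  0  0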
In a clustered environment with multiple web servers, the first option for handling sessions is to use a load balancer capable of associating client requests with specific web nodes, based on the client IP or a client cookie. If your load balancer cannot do this, the session data must be shared among all web servers. Magento Enterprise Edition supports two additional session storage types that can be used in this case.

Though fully supported, storing session data in the database is not recommended: it puts additional load on the main database and, in most cases, therefore requires a separate database server to efficiently handle multiple connections under load. However, database storage has the advantage of preserving user sessions across server crashes – database-driven sessions will not be damaged when one or all of the web servers in the cluster go down.

The memcached session storage is free of these disadvantages.
The memcached service can run on one of the cluster servers to provide fast session storage for all web nodes of the cluster. Note, however, that because of the extra processing overhead compared to raw file system session files, memcached session storage shows no performance improvement in a single-server configuration.
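As a sketch, memcached-backed sessions are configured in app/etc/local.xml roughly as follows (the address and connection options are examples; check the node names against your version’s documentation):

    <config>
        <global>
            <session_save><![CDATA[memcache]]></session_save>
            <session_save_path>
                <![CDATA[tcp://127.0.0.1:11211?persistent=1&timeout=10&retry_interval=10]]>
            </session_save_path>
        </global>
    </config>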

Magento: Scalability

Magento Enterprise Edition is designed to take advantage of a multi-server setup in a clustered environment. Web nodes are not limited to being of exactly the same type: a cluster may combine different nodes, such as frontend servers, static content and media servers, and a separate admin panel server, each performing different tasks.

Magento: Working with JavaScript-II

Method 2: Use Hosted Scripts and Code Snippets

To add code snippets or a link to hosted scripts, you need to include the code in a new static block. Then, use a frontend app to add the static block to your store.

Step 1: Add a New Static Block
Step 2: Create a New Frontend App
Step 3: Choose Where it Goes
Step 4: Verify that the Code Works

Magento: Working with JavaScript-I

Method 1: Upload JavaScript Files

1. From the Admin panel, select Design > Theme Editor.

2. In the Theme Editor, below the thumbnail for the theme you are working with, click Customize.

3. In the Theme Customization panel on the left, select Java Script Editor.

4. In the Theme JavaScript section, do the following:

a. Click the Browse Files button to select a JavaScript file from your computer. Repeat this step for every JavaScript file that you want to upload; you can add multiple files and then upload them all at once, so there is no need to upload each file separately.

b. Click the Upload Files button to upload the JavaScript file to your store. After the file is uploaded, the JavaScript will be available to all pages.

5. Click the Save button to save your changes.

Magento: Working with JavaScript

Your store’s layout, design, and functionality can be customized using the design tools available from the Admin panel. However, if you have a working knowledge of JavaScript, you can make additional changes to enhance your store. Magento Go allows you to add your own custom JavaScript files to add client-side functionality – code that is executed at the individual browser level rather than on the server.

In this blog, you will learn how to work with JavaScript libraries, hosted scripts, and snippets of code to implement custom features for your Magento Go store. This material was written for frontend developers who have a working knowledge of JavaScript. If you would like to learn more, you can find a wealth of educational material online or at your favorite book store.

Adding Custom Code

There are two primary methods for adding custom JavaScript to your Magento Go store, and the one you use depends on what you want to do.

Method 1: To implement custom code (a .js file), hosted scripts, or JavaScript libraries (such as Typekit, jQuery, or MooTools), use the Java Script Editor in the Theme Customization section.

Method 2: To add hosted scripts or invoke custom snippets of code, add the code to a static block and use a Frontend App to make the code available to the appropriate pages.

What is Code Tracing?

Think of a black box flight recorder: when something goes wrong with an airplane, the problem is not reproduced after the fact. Instead, the flight recorder has already captured the complete data that flight analysts need in order to understand why the problem occurred. Zend Server does the same for PHP applications. Rather than spending time trying to set up the environment and reproduce all the steps that led up to the failure, Zend Server captures the full execution of the application in real time – in production or in the test lab – so the root cause can be quickly identified. Code tracing can be activated automatically, to capture problems as they occur in real time, or manually by the user for specific requests. Code tracing captures the following data:

* Function calls: every function called as part of a request, represented as a time-synchronized, tree-based function execution flow
* Arguments and return values: all arguments and return values passed into and returned from functions in the traced request
* Duration: breakdown of execution time at the function level – particularly useful for identification of performance problems
* Memory usage: for every executed function, the memory it consumes – allowing easy identification of functions with abnormal memory consumption or memory leaks
* File name and line of code: for each function call, the file name and exact line of code from which the function was called

The trace displayed in the Zend Server web console functions like a DVD player, showing the recorded execution history of the application. Users can follow the footsteps of a single problematic request in order to quickly pinpoint the root cause of the problem.

Zend Server as Bottleneck Breaker

Let’s rewind the scenario to the point where Ted transitions the first round of features to the operations team and make one early addition that changes everything.
While the development team moves on to the next set of features, Ted transitions the tested feature set to Mark, a project lead on the operations side of the IT department, so that he and his team can prepare and implement the first round of changes and be ready for the next round.
Mark has had experience deploying applications in previous jobs and is concerned about problems that may arise from the absence of a standardized deployment process. He knows that his team will not be able to match the development team’s agility without a consistent, automated deployment process that cuts down on manual errors. He recommends implementing Zend Server to provide the needed structure, and Ted agrees.
As development team members finish each incremental build, they create deployment packages with Zend Server. Each package contains:
• The application code
• Prerequisites that make sure the system has all the correct PHP extensions and libraries to run the application
• All the parameters that the administrator will need to enter to install it correctly (e.g. database host and credentials)
• Installation PHP scripts that can be run at various points in the deployment process (e.g. after staging or activation)
The application prerequisites save time by ensuring compatibility and avoiding unexpected “wrong library version” crashes at some later point. Predefining installation parameters eliminates a source of potential deployment errors. Zend Server deployment also enables integration with continuous integration servers (e.g. Hudson) for continuous building and deployment of code.
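As a rough illustration, the descriptor inside such a package might look like the sketch below. The element names follow the general shape of Zend Server’s deployment.xml descriptor but should be treated as an assumption and checked against the version in use; the application name and parameters are made up:

    <package xmlns="http://www.zend.com/server/deployment-descriptor/1.0" version="1.0">
        <type>application</type>
        <name>ecommerce-personalization</name>    <!-- hypothetical application name -->
        <version>
            <release>1.0.0</release>
        </version>
        <appdir>data</appdir>                     <!-- the application code -->
        <scriptsdir>scripts</scriptsdir>          <!-- hook scripts run during deployment,
                                                       e.g. after staging or activation -->
        <dependencies>
            <required>
                <php>
                    <min>5.3.0</min>              <!-- prerequisite: PHP version -->
                </php>
                <extension>
                    <name>pdo_mysql</name>        <!-- prerequisite: required extension -->
                </extension>
            </required>
        </dependencies>
        <parameters>
            <!-- values the administrator enters at install time -->
            <parameter id="db_host" display="Database host" type="string" required="true"/>
            <parameter id="db_password" display="Database password" type="password" required="true"/>
        </parameters>
    </package>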
To maximize the application’s scalability and flexibility, Ted’s team decides to deploy it in the cloud, knowing that Zend Server can easily handle application deployment to multiple PHP servers. Using Zend Server Cluster Manager, they are able to auto-scale the application, spinning up new virtual hardware plus PHP middleware (Zend Server) as well as the actual application itself. This means that they are able to automatically adjust their application capacity based on load.
The result exceeds expectations. The project is completed within the required timeline, customer feedback is positive as features go live, and the business sees value in the form of improved bottom line performance and competitive advantage. Behind the scenes, Ted’s team has a structured, easily maintained, business-critical application that will form the basis for a new approach to future development.
And there is more. An unexpected business benefit is the new kind of connection that the company makes with customers. Because of the short iterative deployment cycle, Ted’s team is able to get continuous and ongoing feedback from users and stakeholders. They respond quickly to this feedback with seamless roll out of incremental improvements, which has a positive impact on consumer perceptions of the company and the quality of its customer service.

Anatomy of a Deployment Bottleneck

What causes the deployment slowdown? To answer this question, let’s look at what must occur to get an application from development to production in a large enterprise. We’ll do that with a fictional but not uncommon scenario.
A big brand national retailer needs to improve its e-commerce site by making it easy for customers to personalize their shopping experience. This is considered a business imperative as competitors have rolled out robust personalization and social media features that have led to increased sales. Ted, the director of IT, has agreed to a hard deadline of six months to go live with this new functionality.
Ted’s development team launches the project using PHP because it allows rapid application development. It will also allow the team to roll out features incrementally on a weekly or daily basis, feature by feature. This agility has a number of advantages over the traditional approach of building a whole project that launches on a “grand opening” release date:
1. Customers will see immediate improvements in their online experience
2. Upper management will quickly see positive results from the e-commerce improvement
3. Small iterative deployments are easier to address if something goes wrong
4. All features of the new application will be live within the required time frame
The first features are ready for deployment according to the plan timeline. While the development team moves on to the next set of features, Ted transitions the tested feature set to Mark, a project lead on the operations side of the IT department, so that he and his team can prepare and implement the first round of changes and be ready for the next round.
Issues crop up almost immediately.
Mark and his team attempt to keep pace with the speed of the development team, but the lack of processes leads to errors. Deployment steps are run in the wrong order. The database schema doesn’t get synched with the new version of the code. The project slows as the operations team goes back and corrects errors. In the meantime, the development team is ready to transition the next set of features for roll out.
The original goal of deploying features in incremental fashion, along with the benefits that this approach would create, begins to evaporate as the operations team gets bogged down. Ted and Mark seriously consider adapting a deployment technology originally designed for use with a different language. However, nobody on the team has the expertise to adapt any of those technologies to a PHP platform, and hiring the needed talent will take time and money. This would simply add to the delay, so Ted and Mark decide not to pursue the idea.
In order to avoid more process errors, the operations team is forced to slow down the pace of deployment further. This effort, designed to make a strategic difference to the corporation in a short space of time, has soaked up far more resources than intended without reaching a resolution. Ted dreads C-suite meetings because he will not only have to report continued delay, he will have to listen to the problems the delay is causing elsewhere in the business:
• Sales will not meet its targets, which were based in part on the ecommerce improvement going live on time
• Marketing must keep revising its campaigns, which had centered on getting maximum brand leverage from the “rolling improvements” Ted initially promised but that are not showing up on time
• Finance points angrily to both sides of the ledger, as Ted’s department once again spends more than allocated without any revenues to justify the extra expense
On top of everything else, Ted considers the application’s future scalability requirements, which leads to more concern. If the operations team is running into problems from manual errors and lack of processes as they deploy onto a single server, how bad will things be when they need to deploy exactly the same way onto an eight-server cluster?