Thursday, September 01, 2011

Find Optimal Concurrency for your Application

Although I found this article somewhat "academic", it is really helpful for understanding how a computer behaves when high concurrency comes in. It is not always a good idea to set a very high number of workers/threads to handle a large volume of concurrent requests on given hardware. Want to know why? Please take a look at the excerpt below, which is from the book:

Finding the Optimal Concurrency [From High Performance MySQL]

Every web server has an optimal concurrency—that is, an optimal number of concurrent
connections that will result in requests being processed as quickly as possible,
without overloading your systems. A little trial and error can be required to find this
“magic number,” but it’s worth the effort.

It’s common for a high-traffic web site to handle thousands of connections to the
web server at the same time. However, only a few of these connections need to be
actively processing requests. The others may be reading requests, handling file
uploads, spoon-feeding content, or simply awaiting further requests from the client.

As concurrency increases, there’s a point at which the server reaches its peak
throughput. After that, the throughput levels off and often starts to decrease. More
importantly, the response time (latency) starts to increase.

To see why, consider what happens when you have a single CPU and the server
receives 100 requests simultaneously. One second of CPU time is required to process
each request. Assuming a perfect operating system scheduler with no overhead,
and no context switching overhead, the requests will need a total of 100 CPU seconds
to complete.

What’s the best way to serve the requests? You can queue them one after another, or
you can run them in parallel and switch between them, giving each request equal time
before switching to the next. In both cases, the throughput is one request per second.
However, the average latency is 50 seconds if they’re queued (concurrency = 1), and
100 seconds if they’re run in parallel (concurrency = 100). In practice, the average
latency would be even higher for parallel execution, because of the switching cost.
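The 50-vs-100-second comparison above is easy to verify with a little arithmetic. Here is a minimal Java sketch of the same back-of-the-envelope model (the class and method names are mine, not from the book). Note that the exact queued average is (N+1)/2 = 50.5 seconds; the book rounds it to 50:

```java
// Back-of-the-envelope model: N requests, each needing `cpuSeconds` of CPU time,
// on a single CPU with an ideal, zero-overhead scheduler.
public class ConcurrencyModel {

    // Queued (concurrency = 1): request i completes at time i * cpuSeconds,
    // so the average latency is cpuSeconds * (N + 1) / 2.
    static double avgLatencyQueued(int n, double cpuSeconds) {
        double sum = 0;
        for (int i = 1; i <= n; i++) sum += i * cpuSeconds;
        return sum / n;
    }

    // Fully parallel round-robin (concurrency = N): every request finishes
    // at the same moment, after N * cpuSeconds.
    static double avgLatencyParallel(int n, double cpuSeconds) {
        return n * cpuSeconds;
    }

    public static void main(String[] args) {
        System.out.println("queued:   " + avgLatencyQueued(100, 1.0));   // 50.5
        System.out.println("parallel: " + avgLatencyParallel(100, 1.0)); // 100.0
    }
}
```

In both cases the throughput is identical (one request per second), which is exactly why the book focuses on latency as the tiebreaker.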
[Ideally] For a CPU-bound workload, the optimal concurrency is equal to the number of
CPUs (or CPU cores).
[Normally] However, processes are not always runnable, because they
make blocking calls such as I/O, database queries, and network requests. Therefore,
the optimal concurrency is usually higher than the number of CPUs. [That's why we are able to add more threads to handle concurrent requests]

You can estimate the optimal concurrency, but it requires accurate profiling.
[Conclusion:] It's usually easier to experiment with different concurrency values and see what gives the
peak throughput without degrading response time.

Monday, August 15, 2011

Performance books I have or I wish to have

Overall Performance Engineering Methodology:
* Performance Analysis for Java Web Sites By Stacy Joines
* Software Performance and Scalability: A Quantitative Approach By Henry Liu
* Improving .NET Application Performance and Scalability By Microsoft
* Building Scalable Web Sites By Cal Henderson

Performance Testing Process & Practice:
* Performance Testing Guidance for Web Application By Microsoft
* The Art of Application Performance Testing By Ian Molyneaux

DB Performance tuning:
* Inside SQL Server 2005 Query Tuning and Optimization By Kalen Delaney
* High Performance MySQL By Baron Schwartz

Programming Language Performance Tuning:
* Effective Java By Joshua Bloch
* Java Concurrency in Practice By Brian Goetz
* The Art of Concurrency by Clay Breshears
* Java Performance Tuning by Jack Shirazi

Front-End Performance Optimization Practice:
* High Performance Web Sites By Steve Souders
* Even Faster Web Sites By Steve Souders
* High Performance JavaScript By Nicholas Zakas

Web Operations and Capacity Planning:
* Web Operations By John Allspaw
* The Art of Capacity Planning By John Allspaw

Tuesday, August 09, 2011

C3p0 waits indefinitely for "available" connections by default

If you have connection leaks with C3p0, meaning Connections are checked out but never make it back into the pool, you will be in trouble...

By default, C3p0 will wait forever for a connection if none is currently available in the pool.

When testing against the application, I found some threads waiting for connections for a long time. After monitoring the JMX console in VisualVM, I saw that all the connections were taken. Here is the thread dump I took:
 "1658227@qtp-25018827-56" - Thread t@321  
   java.lang.Thread.State: WAITING  
      at java.lang.Object.wait(Native Method)  
      - waiting on <103344b> (a com.mchange.v2.resourcepool.BasicResourcePool)  
      at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1315)  
      at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:557)  
      at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:477)  
      at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:525)  
      at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)  
      at org.apache.ibatis.session.defaults.DefaultSqlSessionFactory.openSessionFromDataSource(DefaultSqlSessionFactory.java:72)  
      at org.apache.ibatis.session.defaults.DefaultSqlSessionFactory.openSession(DefaultSqlSessionFactory.java:32)  

I am not going to discuss in this post why the code leaks connections (actually, we did not invoke session.close() from a finally block in our code). I just want to make sure that if you meet this similar pattern of problem, you can temporarily change some C3p0 configuration before you fix your bad code...

We are using C3p0 version 0.9.1.2 now. Here is the source code containing the loop that can wait indefinitely for a connection (note line 1315, which matches the thread dump above), FYI:
 1297        while ((avail = unused.size()) == 0)   
  1298        {  
  1299          // the if case below can only occur when 1) a user attempts a  
  1300          // checkout which would provoke an acquire; 2) this  
  1301          // increments the pending acquires, so we go to the  
  1302          // wait below without provoking postAcquireMore(); 3)  
  1303          // the resources are acquired; 4) external management  
  1304          // of the pool (via for instance unpoolResource()   
  1305          // depletes the newly acquired resources before we  
  1306          // regain this' monitor; 5) we fall into wait() with  
  1307          // no acquires being scheduled, and perhaps a managed.size()  
  1308          // of zero, leading to deadlock. This could only occur in  
  1309          // fairly pathological situations where the pool is being  
  1310          // externally forced to a very low (even zero) size, but   
  1311          // since I've seen it, I've fixed it.  
  1312          if (pending_acquires == 0 && managed.size() < max)  
  1313            _recheckResizePool();  
  1314    
  1315          this.wait(timeout);  
  1316          if (timeout > 0 && System.currentTimeMillis() - start > timeout)  
  1317            throw new TimeoutException("A client timed out while waiting to acquire a resource from " + this + " -- timeout at awaitAvailable()");  
  1318          if (force_kill_acquires)  
  1319            throw new CannotAcquireResourceException("A ResourcePool could not acquire a resource from its primary factory or source.");  
  1320          ensureNotBroken();  
  1321        }  

The timeout value is set to 0 by default in C3p0, which means wait indefinitely:
checkoutTimeout
Default: 0
The number of milliseconds a client calling getConnection() will wait for a Connection to be checked-in or acquired when the pool is exhausted. Zero means wait indefinitely. Setting any positive value will cause the getConnection() call to time-out and break with an SQLException after the specified number of milliseconds.

Also, if your code has connection leaks and you can hardly find them in a short time, you should pay attention to this parameter:
unreturnedConnectionTimeout
Default: 0
Seconds. If set, if an application checks out but then fails to check-in [i.e. close()] a Connection within the specified period of time, the pool will unceremoniously destroy() the Connection. This permits applications with occasional Connection leaks to survive, rather than eventually exhausting the Connection pool. And that's a shame. Zero means no timeout, applications are expected to close() their own Connections. Obviously, if a non-zero value is set, it should be to a value longer than any Connection should reasonably be checked-out. Otherwise, the pool will occasionally kill Connections in active use, which is bad. This is basically a bad idea, but it's a commonly requested feature. Fix your $%!@% applications so they don't leak Connections! Use this temporarily in combination with debugUnreturnedConnectionStackTraces to figure out where Connections are being checked-out that don't make it back into the pool!

So you can try setting these two parameters to the same amount of time as your goalkeeper, say 30 seconds. That way you can prevent some extreme leak cases, but dev still needs to find the root cause for sure :)
checkoutTimeout= 30000
unreturnedConnectionTimeout=30

Also, adjust your maxPoolSize to 30 or more (instead of the default 15) under heavy-load situations.
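If you configure C3p0 through a c3p0.properties file on the classpath, the settings discussed above would look roughly like this (the property names are real C3p0 keys; the values are just the example ones from this post):

```properties
# c3p0.properties -- example values, tune for your own workload
c3p0.checkoutTimeout=30000
c3p0.unreturnedConnectionTimeout=30
c3p0.debugUnreturnedConnectionStackTraces=true
c3p0.maxPoolSize=30
```

Watch the units: checkoutTimeout is in milliseconds while unreturnedConnectionTimeout is in seconds, so the two "30 second" values are written differently. debugUnreturnedConnectionStackTraces makes C3p0 log where each destroyed Connection was checked out, which is how you find the leaking code.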

Friday, July 08, 2011

Weekly Health check for SQL Server 2005 using DMV

Every week, you can establish a benchmark by running the following DMV queries. They can help provide a high-level view of whether your DB is currently healthy, without any monitoring tools.

However, there are some limitations you have to pay attention to when using the following DMV queries:
(Thanks for a good reference provided by Vance: http://www.mssqltips.com/tip.asp?tip=1843)

1. Limitation with DMV queries: keep this in mind when you are using the DMVs for query usage and performance stats. If your application uses inline T-SQL and sp_executesql, you may not be capturing all of the data that you need.
—Suggestion: think about using stored procedures for all data-related operations instead of inline T-SQL or sp_executesql in your application code.
2. Limitation with the dbid column: the queries below limit the result data to queries with a database id, but the dbid column is NULL for ad hoc and prepared SQL statements, so those statements are filtered out.
—Suggestion: comment out the dbid constraint in the WHERE clause, or use "dbid IS NULL" instead of assigning a dbid.


-- DMV FOR CHECKING CPU USAGE:
 
 SELECT TOP 50   
      DB_Name(dbid) AS [DB_Name],  
      total_worker_time/execution_count AS [Avg_CPU_Time],  
      total_elapsed_time/execution_count AS [Avg_Duration],  
      total_elapsed_time AS [Total_Duration],  
      total_worker_time AS [Total_CPU_Time],  
      execution_count,  
   SUBSTRING(st.text, (qs.statement_start_offset/2)+1,   
     ((CASE qs.statement_end_offset  
      WHEN -1 THEN DATALENGTH(st.text)  
      ELSE qs.statement_end_offset  
      END - qs.statement_start_offset)/2) + 1) AS statement_text  
 FROM sys.dm_exec_query_stats AS qs  
 CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st  
 WHERE dbid in (  
           SELECT DB_ID('yourdatabasename') AS [Database ID]  
      )  
 ORDER BY Avg_CPU_Time DESC;  

-- DMV FOR CHECKING I/O USAGE
 SELECT TOP 50  
      DB_Name(dbid) AS [DB_Name],  
      Execution_Count,  
      (total_logical_reads/Cast(execution_count as Decimal(38,16))) as avg_logical_reads,  
      (total_logical_writes/Cast(execution_count as Decimal(38,16))) as avg_logical_writes,  
      (total_physical_reads/Cast(execution_count as Decimal(38,16))) as avg_physical_reads,  
      max_logical_reads,  
      max_logical_writes,  
      max_physical_reads,  
   SUBSTRING(st.text, (qs.statement_start_offset/2)+1,   
     ((CASE qs.statement_end_offset  
      WHEN -1 THEN DATALENGTH(st.text)  
      ELSE qs.statement_end_offset  
      END - qs.statement_start_offset)/2) + 1) AS statement_text  
 FROM sys.dm_exec_query_stats AS qs  
 CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st  
 WHERE dbid in (  
           SELECT DB_ID('yourdatabasename') AS [Database ID]  
      )  
 ORDER BY avg_logical_reads DESC;  

-- DMV FOR CHECKING INDEX USAGE
 SELECT     top 50   
           idx.name as Index_name  
           ,obj.name   
           ,dmv.object_id  
           ,sampledatetime=Getdate()  
           ,dmv.index_id  
           ,user_seeks  
           ,user_scans  
           ,user_lookups   
 FROM sys.dm_db_index_usage_stats dmv  
 INNER JOIN sys.indexes idx on dmv.object_id = idx.object_id and dmv.index_id = idx.index_id  
 Cross Apply sys.objects obj  
 WHERE dmv.object_id = obj.object_id and database_id in (  
 SELECT DB_ID('yourdatabasename') AS [Database ID]  
 )  
 ORDER BY user_scans desc  

-- DMV FOR CHECKING OBJECT BLOCKING/WAITING
 SELECT TOP 50  
      DB_NAME(qt.dbid),  
      [Average Time Blocked] = (total_elapsed_time - total_worker_time) / qs.execution_count,  
      [Total Time Blocked] = total_elapsed_time - total_worker_time,  
      [Execution count] = qs.execution_count,  
      SUBSTRING(qt.text, (qs.statement_start_offset/2)+1,   
     ((CASE qs.statement_end_offset  
      WHEN -1 THEN DATALENGTH(qt.text)  
      ELSE qs.statement_end_offset  
      END - qs.statement_start_offset)/2) + 1) AS statement_text,
      [Parent Query] = qt.text
 FROM sys.dm_exec_query_stats qs  
 CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt  
 WHERE DB_NAME(qt.dbid) = 'yourdatabasename'  
 ORDER BY [Average Time Blocked] DESC;  

-- DMV FOR CHECKING TEMPDB USAGE
 SELECT getdate(),   
      SUM(user_object_reserved_page_count) * 8 as user_objects_kb,  
      SUM(internal_object_reserved_page_count) * 8 as internal_objects_kb,  
      SUM(version_store_reserved_page_count) * 8 as version_store_kb,  
      SUM(unallocated_extent_page_count) * 8 as freespace_kb  
 FROM sys.dm_db_file_Space_Usage  
 where database_id = 2  

Thursday, June 30, 2011

Performance Toolkit in My Pocket

Please Note, this is a Draft Version, the list will be updated on the fly.

Kindly Warning:
Don’t be a slave of tools, but you can not live without tools!
Be a master of your job with tools :)

Performance Toolkit in My Pocket:

1> Perf Testing Tools:
- Jmeter (Load generate tool for different Protocol)
- Loadrunner (Load generate tool for different Protocol)
- SoapUI (WebService load Testing preferred, or help to create mock services)
- Traffic Shaper XP (Network bandwidth limiter)
- Badboy (HTTPS recording supported for .jmx scripts)
- JMeter plugin : http://code.google.com/p/jmeter-plugins/
- WebDriver Automation Framework for End-End Performance measurement
- ^Unit Performance testing tool need to be filled in...$ (Method-level performance testing)

2> Perf Monitoring Tools:
- JConsole
- JVisualVM
- Task manager/PerfMon
- Process Explorer
- Hyperic HQ
- NetXMS
- Netstat
- typeperf Command line (with Ruby Programming)

3> Perf Profiling Tools:
- Jprofiler
- Btrace
- Jmap
- SQL Profiler
- Perf4j
- Guice AOP Profiling methods for Automation test
- HttpWatch
- Firebug
- Chrome Developer Tools
- Fiddler
- Charles
- Wireshark
- DBCC Command

4> Perf Analysis and Tuning Tools:
- Dynatrace Ajax
- MemoryAnalyzer
- TDA
- DB tuning adviser
- Yslow
- Page Speed
- Image Optimizer: http://www.imageoptimizer.net/Pages/Home.aspx
- JPEGmini
- SpriteMe: http://spriteme.org/
- Minify JS: http://www.minifyjs.com/
- WebPageTest : http://www.webpagetest.org/

5> MISC:
- Text Editor/IDE you prefer: NetBeans with Ruby for me
- Windows Grep
- Regular Expression
- T-SQL
- Ruby/Python/Perl/Shell/Awk : http://www.ibm.com/developerworks/cn/education/aix/au-gawk/index.html
- STAF/STAX :
- Excel
- LogBack/Log4j
- JSLint (looks for problems in JavaScript programs)
- User Agent analysis: http://www.useragentstring.com/index.php

Tuesday, May 31, 2011

Dealing With High CPU% of SQL Server 2005

Sometimes we experience performance problems with high CPU% on the DB server. How can we detect, step by step, which process makes the biggest "contribution" to this phenomenon?

Step 1. Check whether the SQL Server process has the problem or not:
 DECLARE @ts_now bigint;  
   SELECT @ts_now = cpu_ticks / CONVERT(float, cpu_ticks_in_ms) FROM sys.dm_os_sys_info   
   SELECT TOP(10) SQLProcessUtilization AS [SQL Server Process CPU Utilization],  
           SystemIdle AS [System Idle Process],  
           100 - SystemIdle - SQLProcessUtilization AS [Other Process CPU Utilization],  
           DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS [Event Time]  
   FROM (  
      SELECT record.value('(./Record/@id)[1]', 'int') AS record_id,  
         record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')  
         AS [SystemIdle],  
         record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int')  
         AS [SQLProcessUtilization], [timestamp]  
      FROM (  
         SELECT [timestamp], CONVERT(xml, record) AS [record]  
         FROM sys.dm_os_ring_buffers  
         WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'  
         AND record LIKE '%<SystemHealth>%') AS x  
      ) AS y  
 ORDER BY record_id DESC;  

Then you will get a result like this:

The picture above shows that 70% of the CPU is taken by SQL Server 2005, not by other processes on the DB server.

Step 2. Drill down to the particular login user and SPID that is taking most of the CPU%:
 select  loginame,  *    
 from  master.dbo.sysprocesses  
 where  spid> 50  
 order by cpu desc   

PS: Why "spid > 50" here? SPIDs 1 to 50 are reserved for internal SQL Server processes, while SPIDs 51 and above are external connections. So most of the time we assume that the suspects for high CPU% are external connections.

Step 3: Query the most expensive CPU process by spid (taking spid = 102 as an example):

 dbcc inputbuffer(102)   

Then you can start taking further tuning actions on the expensive queries or stored procedures... or just kill the abnormal ones for free :)

Monday, May 30, 2011

To enable Performance Profiling log with Guice AOP and Logback

I always want my methods to be profiled, so that I can learn how fast or slow they are from a high-level perspective first, then break them down to make them even faster.

I am using WebDriver for automation testing, so I also want to trace certain end-user experiences from a performance perspective.

For example, if I want to measure the Login action's response time, my intention is to enable my profiling log by just adding a @Profiled annotation in front of the corresponding "Login" method:
 
@Profiled
public void clickLoginBtn() throws InterruptedException{  
           driver.findElement(By.id("login")).click();  
           if(!myWaiter.waitForMe(By.cssSelector("div.starcite-hyperlink"), 25, timeout)) return ;  
      }  

To make @Profiled work, I chose Guice, as it is so-called lightweight and easy to use:

Step 1: Create an annotation called Profiled:
 import java.lang.annotation.ElementType;  
 import java.lang.annotation.Retention;  
 import java.lang.annotation.RetentionPolicy;  
 import java.lang.annotation.Target;  
 @Retention(RetentionPolicy.RUNTIME)  
 @Target(ElementType.METHOD)  
 public @interface Profiled {  
 }  

Step 2: Create matchers for the classes and methods to be intercepted:

 import com.google.inject.AbstractModule;  
 import com.google.inject.matcher.Matchers;  
 public class ProfiledModule extends AbstractModule {  
      @Override  
      public void configure() {  
           // TODO Auto-generated method stub  
           PerfTracing perftracing = new PerfTracing();  
           requestInjection(perftracing);  
           bindInterceptor(  
                     Matchers.any(),  
            Matchers.annotatedWith(Profiled.class),  
            perftracing);  
        }  
      }  

Step 3: Write your own MethodInterceptor to do the profiling using Logback:
 import java.util.Arrays;  
 import org.aopalliance.intercept.MethodInterceptor;  
 import org.aopalliance.intercept.MethodInvocation;  
 import org.slf4j.Logger;  
 import org.slf4j.LoggerFactory;  
 public class PerfTracing implements MethodInterceptor{  
      private Logger logger = LoggerFactory.getLogger("PerfLog");  
      @Override  
      public Object invoke( MethodInvocation invocation) throws Throwable {  
           // TODO Auto-generated method stub  
           long start = System.nanoTime();  
     try {  
       return invocation.proceed();  
     }   
     finally {  
          if(logger.isDebugEnabled()){  
               Object[] paramArray = {     invocation.getMethod().getName(),   
                                             Arrays.toString(invocation.getArguments()),   
                                             (System.nanoTime() - start) / 1000000};  
            logger.debug("Invocation of method: {} with parameters: {} took: {} ms." , paramArray);  
          }  
     }  
      }  
 }  
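One step the snippets above don't show is wiring the module into an injector. Guice AOP only intercepts methods on objects that Guice itself constructs, so the test class holding clickLoginBtn() (called LoginTest below purely for illustration; the name is hypothetical) has to come from the injector rather than from new. A minimal sketch:

```java
import com.google.inject.Guice;
import com.google.inject.Injector;

public class ProfiledBootstrap {
    public static void main(String[] args) throws InterruptedException {
        // Install the module so the Profiled/PerfTracing binding takes effect.
        Injector injector = Guice.createInjector(new ProfiledModule());
        // Must be injector-created: instances built with `new` bypass the interceptor.
        LoginTest test = injector.getInstance(LoginTest.class);
        test.clickLoginBtn(); // this call now goes through PerfTracing and gets logged
    }
}
```

This is also why the limitations listed further down matter: if LoginTest were final or lacked a Guice-constructible constructor, the interception would silently not apply.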

Why did I choose Logback for logging? Please refer to this post :)

For Logback configuration, here is my sample:
 <?xml version="1.0" encoding="UTF-8"?>  
 <!-- Reference Manual http://logback.qos.ch/manual/index.html -->  
 <configuration>  
   <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">   
     <encoder charset="UTF-8">  
       <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>  
     </encoder>  
   </appender>   
   <appender name="PerfLog" class="ch.qos.logback.core.rolling.RollingFileAppender">  
     <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">  
       <fileNamePattern>PerfLog-%d{yyyy-MM-dd}.log</fileNamePattern>  
       <maxHistory>30</maxHistory>  
     </rollingPolicy>   
     <encoder>  
       <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>  
     </encoder>  
   </appender>  
   <logger name="PerfLog" additivity="false">  
        <level value="DEBUG"/>  
        <appender-ref ref="PerfLog" />  
   </logger>  
   <root level="ERROR">  
     <appender-ref ref="stdout" />  
   </root>  
 </configuration>  

The output in your log file should be like this:
 15:02:49.351 [main] DEBUG PerfLog - Invocation of method: clickLoginBtn with parameters: [] took: 3027 ms.  

Please note Guice AOP's limitations:
* Classes must be public or package-private.
* Classes must be non-final
* Methods must be public, package-private or protected
* Methods must be non-final
* Instances must be created by Guice by an @Inject-annotated or no-argument constructor

So You may have to change your source code to make Guice AOP work properly.

Saturday, May 21, 2011

My Current “Waiter” Class used by WebDriver Tests

Here is my current “Waiter” class used by WebDriver automation tests, so you can keep away from hard-coded Thread.sleep():

 import java.util.List;  
 import org.openqa.selenium.By;  
 import org.openqa.selenium.WebDriver;  
 import org.openqa.selenium.WebElement;  
 import org.openqa.selenium.support.ui.WebDriverWait;  
 import com.google.common.base.Function;  
 /*  
  * Two usage examples in your test code:  
  * MyWaiter myWaiter = new MyWaiter(driver);  
   WebElement search = myWaiter.waitForMe(By.name("btnG"), 10);  
   or  
   if(!myWaiter.waitForMe(By.name("btnG"), 1, 10)) return;  
   or if (!myWaiter.waitForMeDisappear(By.name("btnG"), 10)) return;  
  */  
 public class MyWaiter {  
      private WebDriver driver;  
      public MyWaiter(WebDriver driver){  
           this.driver = driver;  
      }  
      public WebElement waitForMe(By locatorname, int timeout){  
           WebDriverWait wait = new WebDriverWait(driver, timeout);  
           return wait.until(MyWaiter.presenceOfElementLocated(locatorname));  
      }  
      //Given a certain number of web elements, check whether they are found within the timeout  
      public Boolean waitForMe(By locatorname, int count, int timeout) throws InterruptedException{  
           long ctime = System.currentTimeMillis();  
           while ((timeout*1000 > System.currentTimeMillis()- ctime)){  
                List<WebElement> elementList = driver.findElements(locatorname);  
                if ((elementList.size()< count)){  
                     Thread.sleep(300);  
                }  
                //element is found within timeout   
                else  
                     return true;  
           }  
           // otherwise element is not found within timeout  
           return false;  
      }  
      //Check whether the elements located by locatorname disappear within the timeout  
      public Boolean waitForMeDisappear(By locatorname, int timeout) throws InterruptedException{  
           long ctime = System.currentTimeMillis();  
           while ((timeout*1000 > System.currentTimeMillis()- ctime)){  
                List<WebElement> elementList = driver.findElements(locatorname);  
                if ((elementList.size()!= 0)){  
                     Thread.sleep(300);  
                }  
                 //element has disappeared within timeout   
                else  
                     return true;  
           }  
           // otherwise element still shows up after timeout  
           return false;  
      }  
      public static Function<WebDriver, WebElement> presenceOfElementLocated(final By locator) {  
           // TODO Auto-generated method stub  
           return new Function<WebDriver, WebElement>() {  
                @Override  
                public WebElement apply(WebDriver driver) {  
                     if (driver.findElement(locator)!= null){  
                          return driver.findElement(locator);  
                     }  
                     else return null;  
                }  
           };  
      }  
 }  

Another useful method is implicitlyWait(); I also wrote a post related to it, FYI: http://joychester.blogspot.com/2010/09/webdriver-wait-is-easier-after-using.html
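For reference, the implicit wait mentioned above is a one-line, driver-wide setting in the WebDriver 2.x API, as opposed to the explicit per-element waits in MyWaiter. A minimal sketch (the FirefoxDriver choice is just an example):

```java
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class ImplicitWaitExample {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        // Every findElement()/findElements() call will now poll the page
        // for up to 10 seconds before giving up, instead of failing immediately.
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
    }
}
```

Note that an implicit wait applies globally to every element lookup, so mixing it with explicit waits like MyWaiter can make actual wait times harder to reason about.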

Wednesday, May 18, 2011

6 years since we met -- 2011-01-26

Cheng Shawn

PS: for how to draw a heart, please take a look at this link :)




Tuesday, May 17, 2011

Compressing Images with Image Optimizer

Compressing plain text is easy by zipping the files; images, however, are said to be compressed by default, so we often ignore optimizing them.

I just found one awesome tool to optimize website images without sacrificing much quality: Image Optimizer.

Here is the default Image, original size is 52KB:

After optimization, the size is 33KB, about a 37% saving:

And you may have heard of WebP, created by Google; it is said that "WebP images were 39.8% smaller than jpeg images of similar quality", so give it a try!
Update: another awesome service for image compression, called JPEGmini: http://www.jpegmini.com/main/home