We recently bought DBVA PHP edition. It’s a really nice tool so far; however, when testing the generated code with real data we ran into a performance problem (using PHP/MySQL/InnoDB/ezpdo):
With the simplified project shown below, 40000 “Auction” records and 17000 “MUser” records, listing all 700 auctions of a user via DBVA/ezpdo takes 7.5 seconds, while the direct MySQL query
SELECT * FROM Item WHERE seller='44216195' AND Discriminator='Auction' ORDER BY ends DESC LIMIT 0,1000
takes 0.23 seconds.
Do you have a suggestion on how to improve the speed, or do we have to go back to not using a persistence layer?
In production, the IDs will be replaced by int.
The high LIMIT is on purpose for benchmarking.
An index on “ends” exists, and a foreign key for the item-user relationship.
// indicated runtime is for 700 results out of 40000 items
public function listItems($username, $orderstr) {
    $seller = MaUserFactory::listMaUserByQuery("nick='$username'", NULL);
    if ($seller) {
        if (is_array($seller)) $seller = $seller[0];
        $sellerId = $seller->getId();
        // up to here, usually 0.2 seconds have passed
        $auctions = AuctionFactory::listAuctionByQuery("seller.id='$sellerId'", $orderstr."");
        // AuctionFactory::listAuctionByQuery() takes 6 seconds
        $av = new AuctionListViewer(array("classType", "id", /*"thumb",*/ "title", "bids", "price", "ends", "highestBidder"));
        echo $av->listHeader();
        foreach ($auctions as $auction) {
            echo $av->listBody($auction);
        }
        echo $av->listFooter();
        // the rest takes another second
    }
}
Sorry for my late reply. Loading all the records will be slow, because we need to map each record into an object; therefore it cannot be compared with a direct MySQL query.
You can now use a limit to retrieve fewer records at a time in order to speed up the query, with the method
public static function listItemByQuery($condition, $orderBy, $firstRecord, $limit)
e.g. you can change your code
$auctions=AuctionFactory::listAuctionByQuery("seller.id='$sellerId'",$orderstr."");
to
$auctions=AuctionFactory::listAuctionByQuery("seller.id='$sellerId'",$orderstr."", 0, 100);
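For reference, a minimal sketch of driving such a four-argument call page by page. The helper and page size below are my own illustration, and the commented-out factory call merely mirrors the signature quoted above (it is not runnable outside a generated project):

```php
<?php
// Compute the first-record offset for a zero-based page index.
function firstRecordFor(int $page, int $pageSize): int {
    return $page * $pageSize;
}

$pageSize = 100;
$first = firstRecordFor(2, $pageSize); // third page starts at record 200
// $auctions = AuctionFactory::listAuctionByQuery(
//     "seller.id='$sellerId'", $orderstr, $first, $pageSize);
```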
Thanks for the update, but now the PHP code doesn’t work any more: your code
generator creates two methods with the same name and different
parameters (which is allowed in languages like C++, but apparently not in PHP).
Example of illegal code:
function testFunction($oneparam) {
}
function testFunction($twoparam1,$twoparam2) {
}
results in the following error:
Fatal error: Cannot redeclare testfunction() (previously declared in //test.php5:2) in //test.php5 on line 8
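Since PHP (at least through PHP 5) does not support this kind of overloading, the usual workaround is a single declaration with optional, defaulted parameters. A minimal sketch, where the function name and query shape are illustrative rather than DBVA’s actual generated API:

```php
<?php
// One declaration covers both call styles; the LIMIT clause is only
// appended when a limit is actually supplied.
function buildListQuery($condition, $orderBy, $firstRecord = 0, $limit = null) {
    $sql = "SELECT * FROM Item WHERE $condition ORDER BY $orderBy";
    if ($limit !== null) {
        $sql .= " LIMIT $firstRecord, $limit";
    }
    return $sql;
}

// Both call styles work with one declaration:
echo buildListQuery("seller.id='44216195'", "ends DESC"), "\n";
echo buildListQuery("seller.id='44216195'", "ends DESC", 0, 100), "\n";
```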
Furthermore, I had already used the limit you describe with the previous
version of DBVA by simply appending a “LIMIT 0, 50” clause to the
$orderBy parameter.
Naturally this increases the speed, but that’s not the point of my initial
question - the high limit was only for benchmarking. Whether it is 7 s for
700 results or 0.95 s for 50 results, both are equally bad compared to the
alternative without a persistence layer.
My solution will probably be to use direct mysql queries for read-only
output of object lists (e.g. a list of auctions), but the DBVA generated
persistence layer for everything that changes the content of an object or
returns only a single object.
The latter used to be annoyingly complicated, and DBVA is a great
help in easily adding new features.
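A minimal sketch of the read-only side of that hybrid approach, using plain PDO instead of the persistence layer (connection handling is omitted; the table and column names follow the query at the top of this thread, and the function name is my own):

```php
<?php
// Fetch auctions as plain associative arrays - no per-record object
// mapping, which is where the ORM spends most of its time.
function listAuctionsRaw(PDO $db, $sellerId, $limit) {
    $stmt = $db->prepare(
        "SELECT * FROM Item
         WHERE seller = :seller AND Discriminator = 'Auction'
         ORDER BY ends DESC LIMIT :lim"
    );
    $stmt->bindValue(':seller', $sellerId);
    $stmt->bindValue(':lim', (int)$limit, PDO::PARAM_INT);
    $stmt->execute();
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}
```

Writes and single-object loads would still go through the generated factories, so the object model stays authoritative for anything that mutates state.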
Thanks for replying, and I’m sorry for any inconvenience caused. I’ve forwarded the details to our engineers to follow up, and I’ll keep you informed of any news.
About the performance issue you experienced: the query time needed by the PHP ORM is actually similar to what you get from the database directly, but the ORM then has to convert the query result into objects. The more records you retrieve, the more time the conversion takes, so it can be slower than manipulating the query results directly.
The major advantage of an ORM is that it allows you to build your database application in a pure object-oriented approach, which makes your system easier to adapt to changes.
In my opinion, the patch for the limit will help, since in most cases you won’t fetch all the records from the database and display them on a single page. The limit patch lets you divide your query into smaller sections, which compensates for the time needed for data conversion.
The problem you found in the limit patch has been fixed and you can download the latest one at the link below.