What are the main differences between InnoDB and MyISAM?
What version-control methodologies help teams track database schema changes?
I used to label columns in my databases like this:
user_id, user_name, user_password_hash
I did this to avoid name conflicts when joining two tables, but then I learned more about aliasing tables and stopped.
What is an effective way of labeling columns in a database? Why?
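As context for the question, here is a minimal sketch (run against SQLite via Python; the `users`/`orders` schema is hypothetical) of how short table aliases disambiguate plain column names in a join, making `user_`-style prefixes unnecessary:

```python
import sqlite3

# Hypothetical schema with plain column names; aliases disambiguate at query time.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'alice');
    INSERT INTO orders VALUES (10, 1, 9.99);
""")

# The aliases u and o qualify each column, so both tables can have an
# unprefixed "id" without ambiguity.
row = conn.execute("""
    SELECT u.name, o.total
    FROM users AS u
    JOIN orders AS o ON o.user_id = u.id
""").fetchone()
print(row)  # ('alice', 9.99)
```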
What are the differences between NoSQL and a traditional RDBMS?
Over the last few months, NoSQL has been frequently mentioned in the technical news. What are its most significant features relative to a traditional RDBMS? At what level (physical, logical) do the differences occur?
Where are the best places to use NoSQL? Why?
Is there still any use for Slony-I, and if so, what is it? For clarification: since 9.0, PostgreSQL supports built-in streaming replication.
What do we gain and what do we lose with this migration?
What should I expect as drawbacks after the migration?
Is it really the case that the applications never need to change?
I have a symfony application with an InnoDB database that is ~2GB with 57 tables. The majority of the size of the database resides in a single table (~1.2GB). I am currently using mysqldump to backup the database nightly.
Due to my Comcast connection, if I run a dump manually my connection to the server will often time out before the dump completes, forcing me to rerun it. (I currently run a nightly cron job for the dump; this issue only affects dumps I run manually.)
Is there a way to speed up the dumps to avoid the connection timeout, and also to limit the time the server is occupied by this process?
BTW, I am currently working on reducing the size of the overall database to resolve this issue.
Are there any techniques or tools for working with SQLite in an environment of medium size, traffic, and concurrency?
I'm looking for beginner and intermediate level SQL puzzles, that I can point trainees at for practice.
I'm aware of http://sqlzoo.net/ which is a great resource - is there anything else out there that you could suggest?
I have a system where I can't control the design of some tables (replicated via Slony-I), and so I have a series of what we refer to as 'shadow tables', where I extract some information out of the replicated tables, and store it in the processed form that I need, while stripping out the records that I want to ignore.
Right now, after setting up a new replica, I run an update that sets a value back to itself (e.g. UPDATE tablename SET field = field) to force the trigger to run. But some of the tables are millions of records and growing, and it can take 30 minutes (and then there's the vacuum, too).
Is there some better way to trigger it, or some way to write a function so that it works with either an explicitly passed-in row or NEW, depending on the calling context? I'm reluctant to keep two different functions around, as I've seen too many cases where one gets updated and not the other.
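One common factoring is to put the row transformation in a single shared function that both the per-row trigger and a set-based backfill call, so the two paths cannot drift apart. The question is about PostgreSQL (where the shared piece would be a plain SQL or PL/pgSQL function called from the trigger and from an INSERT ... SELECT), but the same idea can be sketched runnably in SQLite via Python; all table and function names here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The transformation lives in ONE function, used by both paths below.
def normalize(name):
    return name.strip().lower() if name else None

conn.create_function("normalize", 1, normalize)
conn.executescript("""
    CREATE TABLE source (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE shadow (id INTEGER PRIMARY KEY, clean_name TEXT);

    -- Per-row path: the trigger applies the shared function to NEW.
    CREATE TRIGGER source_ai AFTER INSERT ON source BEGIN
        INSERT INTO shadow VALUES (NEW.id, normalize(NEW.name));
    END;
""")

# Backfill path: one set-based statement reusing the same function,
# instead of "UPDATE source SET name = name" firing the trigger row by row.
conn.executescript("""
    INSERT INTO source VALUES (1, '  Alice ');
    DELETE FROM shadow;
    INSERT INTO shadow SELECT id, normalize(name) FROM source;
""")
print(conn.execute("SELECT * FROM shadow").fetchall())  # [(1, 'alice')]
```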
I am using a PIC32, which is a 32-bit processor clocked at 80 MIPS with about 64-128KB of RAM available. It will be accessing a microSD card - up to 4 GB, on a FAT32 filesystem. Running all of this is pushing it, but I need a compact database that can be easily ported to this platform and one which is fast. Does anyone have any suggestions?
Just as the title says, where can I see it?
Are there any config options for it (like how many milliseconds determine whether a query is slow or not)?
I heard a long time ago that there is a tool that helps you tweak MySQL settings for better performance, but I can't seem to find it. I am aware that I can use ab for Apache to simulate high traffic, and it will generate a slow log for me. However, if the server crashes (it already happened, and in production mode), I don't know why it crashed or whether it could have been prevented by tweaking the config.
How do I create an index that covers only a specific range or subset of a table in MySQL? AFAIK it's impossible to create one directly, but I think it's possible to simulate this feature.
Example: I want to create an index on the NAME column only for rows where STATUS = 'ACTIVE'.
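One common simulation, since MySQL has no partial indexes, is a nullable mirror column that holds the value only for the rows of interest and is NULL everywhere else, kept in sync by the application, a trigger, or (in newer MySQL versions) a generated column. A runnable sketch of the idea in SQLite via Python, with a hypothetical table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT, status TEXT);
    INSERT INTO t VALUES (1, 'alice', 'ACTIVE'), (2, 'bob', 'INACTIVE');

    -- Mirror column: holds NAME only for ACTIVE rows, NULL otherwise.
    ALTER TABLE t ADD COLUMN active_name TEXT;
    UPDATE t SET active_name = CASE WHEN status = 'ACTIVE' THEN name END;

    -- Index the mirror column; equality lookups on it touch only the
    -- ACTIVE rows, approximating a partial index.
    CREATE INDEX t_active_name ON t (active_name);
""")
rows = conn.execute(
    "SELECT id FROM t WHERE active_name = 'alice'"
).fetchall()
print(rows)  # [(1,)]
```

Note that B-tree indexes in MySQL still store entries for NULL values, so the space saving is only partial; the main win is that queries against the mirror column never match inactive rows.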
I use indexes the way most developers do (mostly on, well, the obvious key columns), but I'm sure there are many subtle ways to optimize a database using indexes. I'm not sure whether this is specific to any particular DBMS implementation.
My question is: what are good examples of how to use indexes (beyond the basic, obvious cases), and what does a DBMS do to optimize queries when you add an index to a table?
Database files built with SQL Server 2008 are not compatible with SQL Server 2005. Is there a workaround?
Is there a way to traverse tree data in SQL? I know about 'connect by' in Oracle, but is there a way to do this in other SQL implementations? I'm asking because 'connect by' is easier than writing a loop or recursive function to run the query for each result. Thanks
EDIT: Since some people seem to be confused by the phrase "tree data", I will explain further. I mean tables with a "parent_id" or similar column that contains the primary key of another row in the same table. The question comes from working with data stored this way in an Oracle database, knowing that 'connect by' isn't implemented in other DBMSs. In standard SQL one would have to add a new self-join (table alias) for each level of parents one wanted to climb, which could easily get out of hand.
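For reference, the standard-SQL answer to 'connect by' is a recursive common table expression (WITH RECURSIVE), supported by PostgreSQL, SQL Server, SQLite, MySQL 8.0+, and modern Oracle. A runnable sketch against SQLite via Python, using a hypothetical parent_id table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER, label TEXT);
    INSERT INTO node VALUES (1, NULL, 'root'), (2, 1, 'child'), (3, 2, 'grandchild');
""")

# Anchor member selects the roots; the recursive member repeatedly joins
# children onto rows found so far, replacing one self-join per level.
rows = conn.execute("""
    WITH RECURSIVE subtree(id, label, depth) AS (
        SELECT id, label, 0 FROM node WHERE parent_id IS NULL
        UNION ALL
        SELECT n.id, n.label, s.depth + 1
        FROM node AS n JOIN subtree AS s ON n.parent_id = s.id
    )
    SELECT label, depth FROM subtree ORDER BY depth
""").fetchall()
print(rows)  # [('root', 0), ('child', 1), ('grandchild', 2)]
```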
Evaluating document-oriented storage, what are the pros and cons of CouchDB vs MongoDB?
What produces the best performance in typical Google App Engine use, a PolyModel or a normal "BigTable" model?
The PolyModel effectively creates a column in the parent table called "class", which tracks inheritance. A normal model inherited from a parent class, by contrast, creates a new and separate data structure, without the ability to query the parent and find all instances of its subclasses.
The BigTable design rejects many of the philosophies of standard relational models, explicitly preferring denormalization to a big host of tiny tables.
One of the larger areas where this is a problem is in the modelling of many-to-many joins.
One way to model these joins is to violate first normal form and put all interesting data in a db.ListProperty(). While such a list is searchable from a query, I have not yet explored the performance implications of searching a list versus pulling from another table.
As joins are not possible, tables can instead be linked through RelationshipProperties. With enough effort, therefore, the standard intersection table (a table with a composite primary key referencing both parent tables) can be created. Has anyone explored the performance hits of the various implementations?
While the List of Keys suggested in the documentation is indeed one way to do it, I'm interested in the performance and anomaly rates of that and other implementations. Is there utility in creating mutual lists of keys? Is the effort involved in the repeated gets worth the price? Is there a better way to do it?